CN110852289A - Method for extracting information of vehicle and driver based on mobile video
- Publication number: CN110852289A
- Application number: CN201911123162.3A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06F18/2321—Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
- G06V10/56—Extraction of image or video features relating to colour
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V40/166—Human faces: detection; localisation; normalisation using acquisition arrangements
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention provides a method for extracting vehicle and driver information from mobile video. Based on color-histogram features and image-edge features, a clustering algorithm automatically extracts key frames from each shot unit, and the subsequent vehicle and driver information is computed from those key frames alone; because not every image frame of the video data has to be processed, the complexity and amount of the subsequent computation are effectively reduced. During key-frame extraction no image classification categories need to be set manually: key frames are extracted automatically by measuring the degree of difference between images from both a global perspective (color histograms) and a local perspective (edge features). This ensures the accuracy and completeness of the extracted key frames and, in turn, the accuracy of the subsequent vehicle- and driver-information computation.
Description
Technical Field
The invention relates to the technical field of intelligent traffic control, and in particular to a method for extracting vehicle and driver information based on mobile video.
Background
In modern traffic control it is often necessary to identify vehicles involved in violations. The data sources for this task include roadside checkpoint equipment, law-enforcement recorders, vehicle-mounted cameras and other imaging and streaming-media devices. When the raw data comes from a mobile video device, such as a law-enforcement recorder or a vehicle-mounted camera, analyzable image data must first be extracted from the streaming-media data. One prior-art approach treats every frame as analyzable image data and analyzes each frame of the video image; nothing is missed, but the amount of computation is very large. Another approach extracts key frames based on cluster analysis (a key frame is one of a small number of frames that reflect the main information content of a video segment) and then performs the subsequent image analysis on the extracted key frames to find drivers or vehicles suspected of violations. In most existing key-frame extraction methods, the image classification categories for the cluster analysis are fixed first and all images are then assigned to the preset categories. Because the categories are usually set manually, they depend heavily on the skill and experience of the technician; once they are set inaccurately, the later classification is unsatisfactory and the subsequent image-analysis results suffer.
Disclosure of Invention
To solve the prior-art problem that key-frame extraction with preset image classification categories depends heavily on the skill of the technician and can yield unsatisfactory classification results, the invention provides a method for extracting vehicle and driver information based on mobile video. The method is independent of the technician's skill, can effectively extract information about drivers suspected of violations, and has low complexity and a small amount of computation.
The technical scheme of the invention is as follows: a method for extracting vehicle and driver information based on mobile video comprises the following steps:
S1: acquire video stream data from the video acquisition equipment and upload the video data to a streaming-media server;
S2: perform structured analysis on the video stream data and decompose it into shot units;
S3: extract key frames from the shot units based on a cluster-analysis method;
S4: preprocess the key frames;
S5: identify and analyze the key frames with existing image-recognition technology;
if a vehicle is detected, output a vehicle-region image;
otherwise, end the operation;
S6: perform face detection at the driver's seat in the vehicle-region image;
if a face is detected, output a face-region image;
otherwise, end the operation;
S7: extract face features from the face-region image;
S8: extract vehicle features from the vehicle-region image;
S9: using the extracted face features and vehicle features as basic data, search the associated vehicle database and driver registration information to obtain valid information about the vehicle and the driver;
the method is characterized in that:
in step S3, extracting key frames from the shot units based on the cluster-analysis method comprises the following steps:
S3-1: the shot unit includes N image frames, denoted as:
AF = {f_1, f_2, f_3, ..., f_N}
where f_i is the i-th frame image, i = 1, 2, ..., N;
S3-2: initialize i = 1;
create a new cluster set T and initialize T to be empty;
S3-3: take f_i and compute its image feature P(Hist_i, B_i);
where Hist_i is the color-histogram feature of image frame f_i and B_i is the image-edge feature of image frame f_i;
S3-4: determine whether the cluster set T is empty;
if T is empty, go to step S3-5;
otherwise, if T is not empty, execute step S3-6;
S3-5: create a new cluster team_i and set P(Hist_i, B_i) as the center feature P_AVG of team_i;
add team_i to the cluster set T:
T = {team_1, ..., team_M}
where M is a positive integer denoting the number of elements in the set;
execute step S3-10;
S3-6: respectively compute the difference D between f_i and the center feature P_AVG of each cluster team in the cluster set T, obtaining M values of D; let the minimum difference be D_min;
S3-7: set a threshold ε and compare the minimum difference D_min with the threshold ε;
if D_min > ε, execute step S3-5;
otherwise, execute step S3-8;
S3-8: add image frame f_i to the cluster team_min to which the minimum difference D_min belongs;
S3-9: update the center feature P_AVG of the cluster team_min with a mean algorithm;
execute step S3-10;
S3-10: set i = i + 1;
if i > N, execute step S3-11;
otherwise, execute step S3-3;
S3-11: output the final set T containing M clusters:
T = {team_1, ..., team_M};
S3-12: in the q-th cluster team_q of the set T, find the image frame with the smallest difference from the center feature P_AVG and set it as the key frame Kf_q of team_q;
compute the key frame of every cluster in the set T to finally obtain M key frames; the set of all key frames is:
KF = {Kf_1, Kf_2, ..., Kf_M}
where q is a positive integer, q = 1, 2, ..., M;
S3-13: output the key-frame set KF for the subsequent steps.
It is further characterized in that:
in step S3-6, the difference D is computed as follows:
D(f_i, f_j) = α·D_c(f_i, f_j) + (1 - α)·D_v(f_i, f_j)
where α is a weighting coefficient, D_c(f_i, f_j) is the color-feature difference between the two image frames f_i and f_j, computed from their color-histogram features, and D_v(f_i, f_j) is the edge-feature difference between them, computed from their image-edge features; B_i is the image-edge feature of image frame f_i extracted by the Canny algorithm;
in step S4 the key frames are preprocessed; the preprocessing comprises median filtering to remove noise, image de-jittering and image enhancement;
the video stream data in step S2 is in H.264 or H.265 format;
in step S2 the video stream data is decomposed into the shot units by a shot-cut and gradual-transition detection algorithm;
a shot unit is a segment of video data formed by a temporally and spatially continuous sequence of video image frames.
With the method for extracting vehicle and driver information based on mobile video, key frames are automatically extracted from each shot unit by a clustering algorithm based on color-histogram and image-edge features, and the subsequent vehicle and driver information is computed from those key frames; the image frames of the whole video do not need to be processed, which effectively reduces the complexity and amount of the subsequent computation. During key-frame extraction no image classification categories need to be set manually; key frames are extracted automatically by measuring the inter-image difference from both a global and a local perspective using the color-histogram features, which ensures the accuracy and completeness of the extracted key frames and, in turn, the accuracy of the subsequent vehicle- and driver-information computation. The technician sets a different threshold ε for each specific application scenario and for the particular video equipment used; the threshold ε controls the accuracy of the key frame extracted from each cluster, which reduces the amount of computation, makes the technical scheme more flexible, and suits it to different scenes.
Drawings
Fig. 1 is a schematic flow chart of extracting key frames from a shot unit according to the present invention.
Detailed Description
As shown in Fig. 1, the present invention provides a method for extracting vehicle and driver information based on mobile video, which comprises the following steps:
S1: acquire video stream data from the video acquisition equipment (e.g. a 4G law-enforcement recorder or a vehicle-mounted camera) and upload the video data to a streaming-media server;
S2: perform structured analysis on the video stream data and decompose it into shot units; the video stream data is in H.264 or H.265 format; it is divided into shot units by a shot-cut and gradual-transition detection algorithm (a hedged sketch of such a detector is given after this step list); a shot unit is a segment of video data formed by a temporally and spatially continuous sequence of video image frames;
S3: extract key frames from the shot units based on a cluster-analysis method;
S4: preprocess the key frames; the preprocessing comprises median filtering to remove noise, image de-jittering and image enhancement (a preprocessing sketch also follows the step list); preprocessing improves the clarity of the key frames and thereby the accuracy of the subsequent vehicle- and driver-information computation;
S5: identify and analyze the key frames with existing image-recognition technology (e.g. image-recognition methods based on deep learning); a control-flow skeleton of steps S5 to S9 follows the step list as well;
if a vehicle is detected, output a vehicle-region image;
otherwise, end the operation;
S6: perform face detection at the driver's seat in the vehicle-region image;
if a face is detected, output a face-region image;
otherwise, end the operation;
S7: extract face features from the face-region image;
S8: extract vehicle features from the vehicle-region image;
S9: using the extracted face features and vehicle features as basic data, search the associated vehicle database and driver registration information to obtain valid information about the vehicle and the driver.
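The patent does not spell out the shot-cut and gradual-transition detection algorithm used in step S2. The sketch below is a minimal illustration of one common realization, a color-histogram-difference detector with two thresholds (a high threshold for hard cuts and a lower per-frame threshold accumulated across frames for gradual transitions); the threshold values T_CUT and T_GRAD and the helper hist_diff are illustrative assumptions, not values taken from the patent.

```python
import cv2

T_CUT = 0.6    # assumed threshold for a hard cut
T_GRAD = 0.25  # assumed per-frame threshold accumulated for gradual transitions

def hist_diff(a, b, bins=16):
    """Bhattacharyya distance between the color histograms of two BGR frames."""
    ha = cv2.calcHist([a], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    hb = cv2.calcHist([b], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA)

def split_into_shots(video_path):
    """Yield one list of frames per detected shot unit."""
    cap = cv2.VideoCapture(video_path)
    shot, prev, grad = [], None, 0.0
    ok, frame = cap.read()
    while ok:
        if prev is not None:
            d = hist_diff(prev, frame)
            grad = grad + d if d > T_GRAD else 0.0  # accumulate a slow transition
            if d > T_CUT or grad > T_CUT:           # hard cut or completed fade
                yield shot
                shot, grad = [], 0.0
        shot.append(frame)
        prev = frame
        ok, frame = cap.read()
    if shot:
        yield shot
    cap.release()
```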
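Step S4 names three preprocessing operations: median filtering, image de-jittering and image enhancement. The following is a minimal OpenCV sketch of the first and third; de-jittering normally requires inter-frame motion estimation and is only indicated by a comment here, and the kernel size and CLAHE parameters are assumed values, not specified by the patent.

```python
import cv2

def preprocess_keyframe(frame):
    """Denoise and enhance one key frame (parameter values are assumptions)."""
    # median filtering to suppress impulse noise
    out = cv2.medianBlur(frame, 3)
    # image de-jittering would go here: a real implementation estimates
    # inter-frame motion (e.g. cv2.estimateAffinePartial2D on tracked points)
    # and warps the frame to compensate; omitted in this single-frame sketch
    # image enhancement: CLAHE applied to the luminance channel only
    ycrcb = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y = clahe.apply(y)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```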
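Steps S5 to S9 chain detection, feature extraction and database retrieval, each stage ending the operation early when nothing is found. The skeleton below shows that control flow only; the detector and registry functions are hypothetical placeholders (stubbed with `...`) for whatever deep-learning models and database interface a deployment provides, not APIs defined by the patent.

```python
# Hypothetical placeholders; a deployment substitutes real detectors and a real DB.
def detect_vehicle(frame): ...
def detect_face_at_driver_seat(vehicle_roi): ...
def extract_face_features(face_roi): ...
def extract_vehicle_features(vehicle_roi): ...
def search_registry(face_feat, vehicle_feat): ...

def process_keyframe(frame):
    """Control flow of steps S5-S9; every called function above is a stub."""
    vehicle_roi = detect_vehicle(frame)                   # S5: vehicle detection
    if vehicle_roi is None:
        return None                                       # no vehicle: end
    face_roi = detect_face_at_driver_seat(vehicle_roi)    # S6: driver's seat
    if face_roi is None:
        return None                                       # no face: end
    face_feat = extract_face_features(face_roi)           # S7
    vehicle_feat = extract_vehicle_features(vehicle_roi)  # S8
    # S9: retrieve vehicle and driver records using both feature vectors
    return search_registry(face_feat, vehicle_feat)
```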
In step S3, extracting key frames from the shot units based on the cluster-analysis method comprises the following steps (a consolidated Python sketch of steps S3-1 to S3-13 is given after this step list):
S3-1: the shot unit includes N image frames, denoted as:
AF = {f_1, f_2, f_3, ..., f_N}
where f_i is the i-th frame image, i = 1, 2, ..., N;
S3-2: initialize i = 1;
create a new cluster set T and initialize T to be empty;
S3-3: take f_i and compute its image feature P(Hist_i, B_i);
where Hist_i is the color-histogram feature of image frame f_i and B_i is the image-edge feature of image frame f_i, extracted by the Canny algorithm;
S3-4: determine whether the cluster set T is empty;
if T is empty, go to step S3-5;
otherwise, if T is not empty, execute step S3-6;
S3-5: create a new cluster team_i and set P(Hist_i, B_i) as the center feature P_AVG of team_i;
add team_i to the cluster set T:
T = {team_1, ..., team_M}
where M is a positive integer denoting the number of elements in the set;
execute step S3-10;
S3-6: respectively compute the difference D between f_i and the center feature P_AVG of each cluster team in the cluster set T, obtaining M values of D; let the minimum difference be D_min;
the difference D is computed as follows:
D(f_i, f_j) = α·D_c(f_i, f_j) + (1 - α)·D_v(f_i, f_j)
where α is a weighting coefficient, D_c(f_i, f_j) is the color-feature difference between the two image frames f_i and f_j, computed from their color-histogram features, and D_v(f_i, f_j) is the edge-feature difference between them, computed from their Canny edge features;
S3-7: set the threshold ε in advance according to the system operating environment, the characteristics of the detection target, the precision requirements on the detection results and similar conditions, and compare the minimum difference D_min with the threshold ε;
if D_min > ε, execute step S3-5;
otherwise, execute step S3-8;
S3-8: add image frame f_i to the cluster team_min to which the minimum difference D_min belongs;
S3-9: update the center feature P_AVG of the cluster team_min with a mean algorithm;
execute step S3-10;
S3-10: set i = i + 1;
if i > N, execute step S3-11;
otherwise, execute step S3-3;
S3-11: output the final set T containing M clusters:
T = {team_1, ..., team_M};
S3-12: in the q-th cluster team_q of the cluster set T, find the image frame with the smallest difference from the center feature P_AVG and set it as the key frame Kf_q of team_q;
compute the key frame of every cluster in the cluster set T to finally obtain M key frames; the set of all key frames is:
KF = {Kf_1, Kf_2, ..., Kf_M}
where q is a positive integer, q = 1, 2, ..., M;
S3-13: output the key-frame set KF for the subsequent extraction of vehicle and driver information.
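Steps S3-1 to S3-13 translate almost line for line into code. The sketch below follows them under stated assumptions: the patent defines D = α·D_c + (1 - α)·D_v but gives the exact expressions of D_c and D_v only in its formulas, so a Bhattacharyya distance between color histograms is assumed for D_c, the mean absolute difference of Canny edge maps for D_v, and the values of α and ε are illustrative.

```python
import cv2
import numpy as np

ALPHA = 0.6  # assumed weighting coefficient alpha
EPS = 0.3    # assumed clustering threshold epsilon

def frame_features(frame, bins=16):
    """S3-3: image feature P(Hist_i, B_i), a color histogram plus a Canny edge map."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    cv2.normalize(hist, hist)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    return hist, edges

def difference(p, q):
    """S3-6: D = alpha*D_c + (1 - alpha)*D_v (the two distance choices are assumed)."""
    d_c = cv2.compareHist(p[0], q[0], cv2.HISTCMP_BHATTACHARYYA)
    d_v = float(np.mean(np.abs(p[1] - q[1])))
    return ALPHA * d_c + (1 - ALPHA) * d_v

def extract_keyframes(shot_frames):
    """S3-1 to S3-13: cluster the frames of one shot unit, return its key frames."""
    clusters = []                                # the set T; each entry [centre, members]
    for frame in shot_frames:                    # the S3-10 loop over f_1 .. f_N
        feat = frame_features(frame)             # S3-3
        if clusters:                             # S3-6: difference to every centre
            dists = [difference(feat, c[0]) for c in clusters]
            k = int(np.argmin(dists))
            d_min = dists[k]
        if not clusters or d_min > EPS:          # S3-4/S3-7 -> S3-5: new cluster
            clusters.append([feat, [(frame, feat)]])
        else:                                    # S3-8/S3-9: join team_min, update P_AVG
            centre, members = clusters[k]
            members.append((frame, feat))
            n = len(members)
            hist = (centre[0] * (n - 1) + feat[0]) / n   # running mean of the centre
            edges = (centre[1] * (n - 1) + feat[1]) / n
            clusters[k][0] = (hist, edges)
    keyframes = []                               # S3-12: frame closest to each centre
    for centre, members in clusters:
        best = min(members, key=lambda m: difference(m[1], centre))
        keyframes.append(best[0])
    return keyframes                             # S3-13: KF = {Kf_1, ..., Kf_M}
```

Applied to the shots produced by split_into_shots above, this yields the small key-frame sets (typically 1 to 4 frames per shot, as noted below) on which steps S4 to S9 then operate.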
With the computation scheme of the invention, image frames with similar content within a shot unit are aggregated into the same cluster by the cluster-analysis method under the control of the threshold ε, without manual intervention; the shot unit is thus divided into several different clusters such that frames within a cluster are highly similar while frames in different clusters are not. Within each cluster, the color-histogram feature Hist_q and the image-edge feature B_q are used to compute the difference between each image frame and the center feature of its cluster, and the frame with the smallest difference is selected as the cluster's key frame; this key frame contains the most essential information content of the cluster. The resulting key-frame set is therefore the set of image frames containing the most essential information content of the shot unit.
A video stream currently carries 20-30 image frames per second, and recognizing vehicle and face information in one frame takes several hundred milliseconds (300-500 ms) on average on a GPU server. If the existing method of recognizing every frame directly is used, analyzing 1 s of video takes more than 6 s (e.g. 20 frames × 300 ms = 6 s), and with continuous recognition the delay of the results grows longer and longer. With the technical scheme of the invention, the segmented shot units vary in length from one to ten seconds; after segmentation, key-frame extraction takes on the order of a hundred milliseconds on average and yields only 1-4 key frames per shot, so recognizing the key frames takes only about 1 s in total. Moreover, whereas the per-frame recognition of the existing method runs serially, shot segmentation and image recognition in the present scheme run in parallel. The technical scheme therefore has a small amount of computation, high efficiency and high recognition speed; it can quickly and accurately extract information about suspect vehicles and persons and notify the on-duty police of the computed results in real time, ensuring real-time law enforcement.
Claims (6)
1. A method for extracting vehicle and driver information based on mobile video, comprising the following steps:
S1: acquire video stream data from the video acquisition equipment and upload the video data to a streaming-media server;
S2: perform structured analysis on the video stream data and decompose it into shot units;
S3: extract key frames from the shot units based on a cluster-analysis method;
S4: preprocess the key frames;
S5: identify and analyze the key frames with existing image-recognition technology;
if a vehicle is detected, output a vehicle-region image;
otherwise, end the operation;
S6: perform face detection at the driver's seat in the vehicle-region image;
if a face is detected, output a face-region image;
otherwise, end the operation;
S7: extract face features from the face-region image;
S8: extract vehicle features from the vehicle-region image;
S9: using the extracted face features and vehicle features as basic data, search the associated vehicle database and driver registration information to obtain valid information about the vehicle and the driver;
the method is characterized in that:
in step S3, extracting key frames from the shot units based on the cluster-analysis method comprises the following steps:
S3-1: the shot unit includes N image frames, denoted as:
AF = {f_1, f_2, f_3, ..., f_N}
where f_i is the i-th frame image, i = 1, 2, ..., N;
S3-2: initialize i = 1;
create a new cluster set T and initialize T to be empty;
S3-3: take f_i and compute its image feature P(Hist_i, B_i);
where Hist_i is the color-histogram feature of image frame f_i and B_i is the image-edge feature of image frame f_i;
S3-4: determine whether the cluster set T is empty;
if T is empty, go to step S3-5;
otherwise, if T is not empty, execute step S3-6;
S3-5: create a new cluster team_i and set P(Hist_i, B_i) as the center feature P_AVG of team_i;
add team_i to the cluster set T:
T = {team_1, ..., team_M}
where M is a positive integer denoting the number of elements in the set;
execute step S3-10;
S3-6: respectively compute the difference D between f_i and the center feature P_AVG of each cluster team in the cluster set T, obtaining M values of D; let the minimum difference be D_min;
S3-7: set a threshold ε and compare the minimum difference D_min with the threshold ε;
if D_min > ε, execute step S3-5;
otherwise, execute step S3-8;
S3-8: add image frame f_i to the cluster team_min to which the minimum difference D_min belongs;
S3-9: update the center feature P_AVG of the cluster team_min with a mean algorithm;
execute step S3-10;
S3-10: set i = i + 1;
if i > N, execute step S3-11;
otherwise, execute step S3-3;
S3-11: output the final set T containing M clusters:
T = {team_1, ..., team_M};
S3-12: in the q-th cluster team_q of the set T, find the image frame with the smallest difference from the center feature P_AVG and set it as the key frame Kf_q of team_q;
compute the key frame of every cluster in the set T to finally obtain M key frames; the set of all key frames is:
KF = {Kf_1, Kf_2, ..., Kf_M}
where q is a positive integer, q = 1, 2, ..., M;
S3-13: output the key-frame set KF for the subsequent steps.
2. The method for extracting vehicle and driver information based on mobile video according to claim 1, characterized in that in step S3-6 the difference D is computed as follows:
D(f_i, f_j) = α·D_c(f_i, f_j) + (1 - α)·D_v(f_i, f_j)
where α is a weighting coefficient, D_c(f_i, f_j) is the color-feature difference between the two image frames f_i and f_j, and D_v(f_i, f_j) is the edge-feature difference between them; B_i is the image-edge feature of image frame f_i extracted by the Canny algorithm.
3. The method for extracting vehicle and driver information based on mobile video according to claim 1, characterized in that in step S4 the key frames are preprocessed, the preprocessing comprising median filtering to remove noise, image de-jittering and image enhancement.
4. The method for extracting vehicle and driver information based on mobile video according to claim 1, characterized in that the video stream data in step S2 is in H.264 or H.265 format.
5. The method for extracting vehicle and driver information based on mobile video according to claim 1, characterized in that in step S2 the video stream data is decomposed into the shot units by a shot-cut and gradual-transition detection algorithm.
6. The method for extracting vehicle and driver information based on mobile video according to claim 1, characterized in that a shot unit is a segment of video data formed by a temporally and spatially continuous sequence of video image frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911123162.3A | 2019-11-16 | 2019-11-16 | Method for extracting information of vehicle and driver based on mobile video

Publications (1)
Publication Number | Publication Date
---|---
CN110852289A | 2020-02-28
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200228