CN113065531A - Vehicle identification method for three-dimensional stitched video of an expressway service area

Info

Publication number
CN113065531A
Authority
CN
China
Prior art keywords
feature points
plane
points
sampling
image
Legal status
Granted
Application number
CN202110523304.6A
Other languages
Chinese (zh)
Other versions
CN113065531B
Inventor
Huang Shifeng (黄世凤)
Ma Yiming (马逸铭)
Shen Tianxing (沈天行)
Yang Ju (杨菊)
Liu Yi (刘熠)
Chen Yunfei (陈蕴菲)
Ni Zhaoxin (倪昭鑫)
Fu Chao (宓超)
Current Assignee
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Application filed by Shanghai Maritime University
Priority: CN202110523304.6A
Publication of CN113065531A
Application granted
Publication of CN113065531B
Status: Active

Classifications

    • G06V 20/40: Scenes; scene-specific elements in video content (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding)
    • G06V 20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G06F 18/23: Clustering techniques (G06F: Electric digital data processing; G06F 18/00: Pattern recognition; G06F 18/20: Analysing)
    • G06V 2201/08: Detecting or categorising vehicles (G06V 2201/00: Indexing scheme relating to image or video recognition or understanding)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle identification method for three-dimensional stitched video of an expressway service area, comprising the following steps: S1, selecting a three-dimensional frame image from the three-dimensional stitched video, UV-unwrapping the three-dimensional frame image, fitting a minimum bounding rectangle to the unwrapped plane, and taking that rectangle as the plane M to be detected; S2, down-sampling the plane M to be detected to obtain a multi-level sampling plane, dividing the multi-level sampling plane into a plurality of small image blocks to determine the feature points and internal feature-point directions of each small image block, clustering the feature points to obtain the key feature points of the multi-level sampling plane, screening the highly correlated key feature points, and matching the screened key feature points; and S3, calibrating the vehicles detected in the sampling plane and mapping them back to the original three-dimensional frame image through the UV mapping to realize vehicle identification.

Description

Vehicle identification method for three-dimensional stitched video of an expressway service area
Technical Field
The invention relates to the fields of computer vision and intelligent transportation, and in particular to a vehicle identification method for three-dimensional stitched video of an expressway service area.
Background
With the rapid development of China's economy, the continuous growth of total expressway mileage and the explosive increase in the numbers of passenger cars and trucks, the construction of expressway service areas is gradually becoming informatized and modernized. However, service areas carry heavy passenger flow and large numbers of vehicles, and traffic events such as vehicle collisions and breakdowns, road construction and scattered cargo occur from time to time, causing transient congestion at best and interrupted traffic flow and blockage at worst. To reduce traffic delay and ensure road safety, traffic events need to be detected accurately and quickly.
An expressway service area provides dining, shopping, refueling, maintenance and vehicle-repair services along the expressway, and is very important for travel safety, fatigue-driving prevention and driving efficiency. As expressway mileage grows year by year, more service areas are put into use; their pedestrian and vehicle flows are large, and the accompanying safety problems cannot be ignored. Drivers stay highly tense on the expressway, become fatigued after driving for a long time, and tend to relax and lose concentration when entering a service area, so service areas are also high-incidence areas for safety accidents. To reduce traffic delay and ensure road safety, traffic-flow information in the area needs to be counted dynamically in real time.
Disclosure of Invention
The invention aims to provide a vehicle identification method for three-dimensional stitched video of an expressway service area that detects small targets feasibly and efficiently, solving the problem of vehicle detection in expressway service areas.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
a vehicle identification method for a three-dimensional spliced video of a highway service area is characterized by comprising the following steps:
s1, selecting a three-dimensional frame image from the three-dimensional spliced video, carrying out UV expansion on the three-dimensional frame image, selecting a minimum rectangular frame for the expanded plane, and selecting the frame of the minimum rectangular frame as a plane M to be detected;
s2, down-sampling a plane M to be detected to obtain a multi-level sampling plane, dividing the multi-level sampling plane into a plurality of small image blocks to determine the feature points and the feature point inner directions of the small image blocks, clustering the feature points to obtain key feature points of the multi-level sampling plane, screening key feature points with high correlation, and matching the screened key feature points;
and S3, calibrating the vehicle detected in the sampling plane, and mapping the vehicle to the original three-dimensional frame image through UV to realize vehicle identification.
Down-sampling the plane M to be detected in step S2 to obtain a multi-level sampling plane comprises:
down-sampling the plane M to be detected by a factor of 1/2 to obtain the first image pyramid level M1, down-sampling M1 to obtain M2, and down-sampling repeatedly to obtain the 5-level image pyramid M, M1, M2, M3 and M4.
Dividing the multi-level sampling plane into a plurality of small image blocks in step S2 to determine the feature points and internal feature-point directions of each small image block comprises:
setting a decomposition threshold, dividing the image equally into m parts to obtain image blocks N_i (i = 1, 2, …, m), then dividing each N_i equally j times until the blocks are smaller than the set decomposition threshold and stopping the decomposition, which gives h small image blocks, where h = m × j;
traversing the brightness of the pixels in each small image block and comparing the brightness of each selected pixel with that of its surrounding pixels; if the selected pixel is brighter or darker than the surrounding pixels, it is taken as a feature point;
after the feature points of an image block are found, the principal feature direction of each is determined: in each small image block B, with g(x, y) the gray value at pixel (x, y), the centroid is obtained from the moments

m_{pq} = \sum_{(x,y) \in B} x^p y^q \, g(x,y), \qquad C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right),

and the internal feature-point direction is obtained from the position of the centroid as

\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right).
before clustering the feature points in step S2, the method further includes:
the extracted characteristic points are standardized by a mean absolute deviation method, and the formula is
Figure BDA0003064906420000023
Wherein n is the number of selected characteristic points in the image, and the influence of outliers is reduced by a mean absolute deviation method so as to reduce the data set D to { a }iEach characteristic point a of 1, 2, …, n | i ═iCalculating the distance sum S between the calculated distance sum and the rest of the feature pointsiAnd the distances of all feature points are equal to H, if Si>H, then aiIs regarded as an isolated point, is deleted, and eliminates the influence of noise and the isolated point, wherein
Figure BDA0003064906420000031
Figure BDA0003064906420000032
Clustering the feature points in step S2, obtaining the key feature points of the multi-level sampling plane after clustering, screening the highly correlated key feature points and matching the screened key feature points comprises the following steps:
step S231, selecting a suitable K value and clustering the obtained feature points over the fixed internal feature directions;
step S232, computing the cosine distance for each point of the plane, comparing the cosine values of the plane vectors to measure individual differences, and comparing the included-angle values to screen out direction differences between the feature vectors;
step S233, repeating steps S231 and S232 until the cluster blocks and cluster centers reach a stable state, and labeling the clusters with different colors;
step S234, building a feature matrix T from the obtained feature points, where each row of the matrix represents one feature point; several sampling points are extracted around each feature point and P sampling pairs are formed from them, so the feature matrix T has P columns;
step S235, computing the mean and variance of each column of the matrix, sorting the columns of the feature matrix T in descending order of variance and selecting the first Q columns, which completes the screening of key feature points;
step S236, obtaining the Q-column binary descriptor, whose columns are arranged from high variance to low variance, and selecting the first Q/2 columns for matching.
After step S236, the method further comprises:
if the distance over the first Q/2 columns of two feature points to be matched is smaller than a set threshold, the remaining column information is used to complete the match.
Step S3 further comprises:
if a blank area is detected in the sampling plane, the grids of the blank area are culled and no subsequent UV mapping is performed for them;
if an object-filled area is detected in the sampling plane, vehicle detection is performed on it and the grid bounding box of each detected vehicle area is calibrated.
Compared with the prior art, the invention has the following advantage:
the method detects small targets feasibly and efficiently, and solves the problem of vehicle detection in expressway service areas.
Drawings
FIG. 1 is a flow chart of the vehicle identification method for three-dimensional stitched video of an expressway service area disclosed by the invention;
FIG. 2 is a flow chart of step S2;
FIG. 3 is a flow chart of determining the internal feature-point direction.
Detailed Description
The present invention will now be further described by way of the following detailed description of a preferred embodiment thereof, taken in conjunction with the accompanying drawings.
As shown in fig. 1, a vehicle identification method for three-dimensional stitched video of an expressway service area comprises the following steps:
S1, selecting a three-dimensional frame image from the three-dimensional stitched video, UV-unwrapping the three-dimensional frame image, fitting a minimum bounding rectangle to the unwrapped plane, and taking that rectangle as the plane M to be detected;
S2, down-sampling the plane M to be detected to obtain a multi-level sampling plane, dividing the multi-level sampling plane into a plurality of small image blocks to determine the feature points and internal feature-point directions of each small image block, clustering the feature points to obtain the key feature points of the multi-level sampling plane, screening the highly correlated key feature points, and matching the screened key feature points;
and S3, calibrating the vehicles detected in the sampling plane and mapping them back to the original three-dimensional frame image through the UV mapping to realize vehicle identification.
As shown in fig. 3, down-sampling the plane M to be detected in step S2 to obtain a multi-level sampling plane comprises:
down-sampling the plane M to be detected by a factor of 1/2 to obtain the first image pyramid level M1, down-sampling M1 to obtain M2, and so on, M4 being obtained by down-sampling M3, giving the 5-level image pyramid M, M1, M2, M3 and M4; the purpose of constructing M, M1, M2, M3 and M4 is to obtain more useful information from the image.
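A minimal Python sketch of this pyramid construction, assuming OpenCV is available; the function name build_pyramid is ours, and the use of pyrDown (which smooths before halving) is an implementation choice:

```python
import cv2

def build_pyramid(plane_m, levels=5):
    """Build the 5-level image pyramid M, M1, ..., M4 by repeated
    1/2 down-sampling of the plane to be detected."""
    pyramid = [plane_m]
    for _ in range(levels - 1):
        # pyrDown applies Gaussian smoothing and halves each dimension
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```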
As shown in fig. 2, dividing the multi-level sampling plane into a plurality of small image blocks in step S2 to determine the feature points and internal feature-point directions of each small image block comprises:
setting a decomposition threshold, dividing the image equally into m parts to obtain image blocks N_i (i = 1, 2, …, m), then dividing each N_i equally j times until the blocks are smaller than the set decomposition threshold and stopping the decomposition, which gives h small image blocks, where h = m × j;
traversing the brightness of the pixels in each small image block and comparing the brightness of each selected pixel with that of its surrounding pixels; if the selected pixel is brighter or darker than the surrounding pixels, it is taken as a feature point.
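A sketch of this block-wise brightness test; the 8-neighborhood comparison and the brightness margin t are assumptions, since the patent does not fix which surrounding pixels are compared or by how much:

```python
import numpy as np

def detect_block_features(block, t=20):
    """Return pixels of a small image block that are brighter or darker
    than all 8 surrounding pixels by more than t (assumed margin)."""
    g = block.astype(np.int32)  # avoid uint8 overflow in comparisons
    feats = []
    for y in range(1, g.shape[0] - 1):
        for x in range(1, g.shape[1] - 1):
            # 3x3 neighborhood with the center pixel removed
            nbrs = np.delete(g[y - 1:y + 2, x - 1:x + 2].ravel(), 4)
            if g[y, x] > nbrs.max() + t or g[y, x] < nbrs.min() - t:
                feats.append((x, y))
    return feats
```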
After the feature points of an image block are found, the principal feature direction of each is determined. In each small image block B, with g(x, y) the gray value at pixel (x, y), the centroid is obtained from the moments

m_{pq} = \sum_{(x,y) \in B} x^p y^q \, g(x,y), \qquad C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right),

and the internal feature-point direction is obtained from the position of the centroid as

\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right).
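A sketch of the centroid direction above; computing the angle with atan2 rather than a plain arctan is an implementation choice that keeps the correct quadrant, and the function name is ours:

```python
import numpy as np

def block_orientation(block):
    """Internal feature direction of a small image block B via the
    intensity centroid: m_pq = sum of x^p * y^q * g(x, y)."""
    g = block.astype(np.float64)
    ys, xs = np.mgrid[0:g.shape[0], 0:g.shape[1]]
    m10, m01 = (xs * g).sum(), (ys * g).sum()
    return np.arctan2(m01, m10)  # direction of the centroid from the origin
```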
Before the feature points are clustered in step S2, the method further comprises:
standardizing the extracted feature points by the mean-absolute-deviation method, with formula

a_i' = \frac{a_i - \bar{a}}{s}, \qquad s = \frac{1}{n} \sum_{i=1}^{n} \lvert a_i - \bar{a} \rvert,

where n is the number of feature points selected in the image; the mean-absolute-deviation method reduces the influence of outliers. For the data set D = \{a_i \mid i = 1, 2, \ldots, n\}, the sum S_i of the distances from each feature point a_i to the remaining feature points is computed and compared with the mean H of these distance sums; if S_i > H, a_i is regarded as an isolated point and deleted, eliminating the influence of noise and isolated points, where

S_i = \sum_{j=1,\, j \neq i}^{n} d(a_i, a_j), \qquad H = \frac{1}{n} \sum_{i=1}^{n} S_i.
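A sketch of the standardization and isolated-point removal, under two assumptions the patent does not state: the feature points are treated as 2-D coordinates, and d(·,·) is the Euclidean distance:

```python
import numpy as np

def standardize_and_filter(points):
    """Standardize feature-point coordinates by the mean absolute
    deviation, then drop points whose distance sum S_i exceeds the
    mean H of all distance sums (assumes a non-degenerate spread)."""
    p = np.asarray(points, dtype=np.float64)
    mad = np.abs(p - p.mean(axis=0)).mean(axis=0)
    z = (p - p.mean(axis=0)) / mad                # MAD standardization
    dists = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
    s = dists.sum(axis=1)                         # S_i for each point
    return p[s <= s.mean()]                       # keep points with S_i <= H
```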
Clustering the feature points in step S2, obtaining the key feature points of the multi-level sampling plane after clustering, screening the highly correlated key feature points and matching the screened key feature points comprises the following steps:
step S231, selecting a suitable K value and clustering the obtained feature points over the fixed internal feature directions, so that the features in the image become more distinct and easier to separate; in this embodiment the value is 0.98, and a cluster center gathers the points whose cosine included angle with it lies within a set range, i.e. points with similar features, which reduces mismatch and re-identification cycles, improves matching accuracy and speeds up matching;
step S232, computing the cosine distance for each point of the plane, comparing the cosine values of the plane vectors to measure individual differences, and comparing the included-angle values to screen out direction differences between the feature vectors;
step S233, repeating steps S231 and S232 until the cluster blocks and cluster centers reach a stable state, and labeling the clusters with different colors (a sketch of this clustering loop is given after the step list below);
step S234, building a feature matrix T from the obtained feature points, where each row of the matrix represents one feature point; 43 sampling points are extracted around each feature point and P sampling pairs are formed from them, P = 903, so the feature matrix T has 903 columns; for example, three sampling points a1, a2 and a3 form the pairs (a1, a2), (a1, a3) and (a2, a3), and each such combination is called a sampling pair;
step S235, computing the mean and variance of each column of the matrix, sorting the columns of the feature matrix T in descending order of variance and selecting the first Q columns, Q = 256, which completes the screening of key feature points;
step S236, obtaining the 256-column binary descriptor, whose columns are arranged from high variance to low variance (high variance carries the coarse information, low variance the detail information), and selecting the first 128 columns for matching.
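A sketch of the clustering loop of steps S231-S233, referenced above. Reading the 0.98 value as a cosine-similarity threshold and re-estimating centers from cluster members are both assumptions, since the patent does not spell out the update rule:

```python
import numpy as np

def cosine_cluster(directions, centers, sim_thresh=0.98, iters=20):
    """Assign each feature vector to the center with the highest cosine
    similarity; vectors below sim_thresh stay unlabeled (-1). Centers
    are re-estimated from their members on every iteration."""
    v = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    labels = np.full(len(v), -1)
    for _ in range(iters):
        sim = v @ c.T                              # cosine similarity matrix
        labels = sim.argmax(axis=1)
        labels[sim.max(axis=1) < sim_thresh] = -1  # too dissimilar: unassigned
        for k in range(len(c)):
            members = v[labels == k]
            if len(members):
                m = members.mean(axis=0)
                c[k] = m / np.linalg.norm(m)       # renormalized center
    return labels, c
```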
Further, the binary descriptor is a string of binary bits. With F denoting the binary descriptor, P_a a sampling point pair and N the binary code length,

F = \sum_{0 \le a < N} 2^a \, T(P_a), \qquad T(P_a) = \begin{cases} 1, & I(P_a^{r_1}) > I(P_a^{r_2}) \\ 0, & \text{otherwise,} \end{cases}

where I(P_a^{r_1}) represents the pixel value of the first sampling point in the pair P_a and, similarly, I(P_a^{r_2}) represents the pixel value of the second sampling point.
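Steps S234-S236 and the descriptor definition can be sketched as follows, assuming the 903 pairs are the unordered pairs of the 43 sampling points (43 choose 2 = 903) and that T holds one binary test per column; both function names are ours:

```python
import numpy as np
from itertools import combinations

def sampling_pair_tests(intensities):
    """One binary test T(Pa) per unordered pair of the 43 sampling
    points around a feature point: 1 if the first point of the pair
    is brighter than the second (43 choose 2 = 903 bits)."""
    pairs = combinations(range(len(intensities)), 2)
    return np.array([int(intensities[a] > intensities[b]) for a, b in pairs],
                    dtype=np.uint8)

def screen_columns(T, q=256):
    """Sort the columns of the feature matrix T by descending variance
    and keep the first q; the first q/2 are used for coarse matching."""
    order = np.argsort(T.var(axis=0))[::-1]
    return T[:, order[:q]], order[:q]
```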
After step S236, the method further comprises:
if the distance over the first 128 columns of two feature points to be matched is smaller than a set threshold, the remaining column information is used to complete the match.
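A sketch of this two-stage match; the use of Hamming distance and both threshold values are assumptions, since the patent only states that the remaining columns are consulted when the first half is close enough:

```python
import numpy as np

def cascade_match(da, db, q=256, coarse_thresh=20, full_thresh=40):
    """Compare the first q/2 (coarse, high-variance) columns first;
    only when they are close enough are the remaining columns used."""
    half = q // 2
    if np.count_nonzero(da[:half] != db[:half]) >= coarse_thresh:
        return False                 # rejected at the coarse stage
    return np.count_nonzero(da != db) < full_thresh
```

The coarse stage discards most candidate pairs cheaply, which is the stated purpose of ordering the columns from high to low variance.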
Step S3 further comprises:
if a blank area is detected in the sampling plane, the grids of the blank area are culled and no subsequent UV mapping is performed for them;
if an object-filled area is detected in the sampling plane, vehicle detection is performed on it, the grid bounding box of each detected vehicle area is calibrated, and the vehicle calibration boxes are divided into three classes: car, bus and truck.
The calibrated vehicle areas are mapped back into the original three-dimensional video frame image according to the UV mapping, which completes vehicle detection for that frame image; vehicles in the remaining frames of the three-dimensional video are calibrated and identified by the same steps.
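How a calibrated box returns to the three-dimensional frame depends on the stored UV unwrap; a minimal sketch under the assumption that the unwrapped plane corresponds to normalized, axis-aligned UV texture coordinates (the patent does not specify the parametrization):

```python
def box_to_uv(box, plane_w, plane_h):
    """Convert a calibrated detection box on the unwrapped plane M to
    normalized UV coordinates for mapping back onto the 3-D frame."""
    x0, y0, x1, y1 = box
    return (x0 / plane_w, y0 / plane_h, x1 / plane_w, y1 / plane_h)
```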
In conclusion, the vehicle identification method for three-dimensional stitched video of an expressway service area detects small targets feasibly and efficiently, and solves the problem of vehicle detection in expressway service areas.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (7)

1. A vehicle identification method for three-dimensional stitched video of an expressway service area, characterized by comprising the following steps:
S1, selecting a three-dimensional frame image from the three-dimensional stitched video, UV-unwrapping the three-dimensional frame image, fitting a minimum bounding rectangle to the unwrapped plane, and taking that rectangle as the plane M to be detected;
S2, down-sampling the plane M to be detected to obtain a multi-level sampling plane, dividing the multi-level sampling plane into a plurality of small image blocks to determine the feature points and internal feature-point directions of each small image block, clustering the feature points to obtain the key feature points of the multi-level sampling plane, screening the highly correlated key feature points, and matching the screened key feature points;
and S3, calibrating the vehicles detected in the sampling plane and mapping them back to the original three-dimensional frame image through the UV mapping to realize vehicle identification.
2. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 1, wherein down-sampling the plane M to be detected in step S2 to obtain a multi-level sampling plane comprises:
down-sampling the plane M to be detected by a factor of 1/2 to obtain the first image pyramid level M1, down-sampling M1 to obtain M2, and down-sampling repeatedly to obtain the 5-level image pyramid M, M1, M2, M3 and M4.
3. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 2, wherein dividing the multi-level sampling plane into a plurality of small image blocks in step S2 to determine the feature points and internal feature-point directions of each small image block comprises:
setting a decomposition threshold, dividing the image equally into m parts to obtain image blocks N_i (i = 1, 2, …, m), then dividing each N_i equally j times until the blocks are smaller than the set decomposition threshold and stopping the decomposition, which gives h small image blocks, where h = m × j;
traversing the brightness of the pixels in each small image block and comparing the brightness of each selected pixel with that of its surrounding pixels; if the selected pixel is brighter or darker than the surrounding pixels, it is taken as a feature point;
after the feature points of an image block are found, the principal feature direction of each is determined: in each small image block B, with g(x, y) the gray value at pixel (x, y), the centroid is obtained from the moments

m_{pq} = \sum_{(x,y) \in B} x^p y^q \, g(x,y), \qquad C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right),

and the internal feature-point direction is obtained from the position of the centroid as

\theta = \arctan\left( \frac{m_{01}}{m_{10}} \right).
4. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 3, wherein before the feature points are clustered in step S2, the method further comprises:
standardizing the extracted feature points by the mean-absolute-deviation method, with formula

a_i' = \frac{a_i - \bar{a}}{s}, \qquad s = \frac{1}{n} \sum_{i=1}^{n} \lvert a_i - \bar{a} \rvert,

where n is the number of feature points selected in the image, and the mean-absolute-deviation method reduces the influence of outliers; for the data set D = \{a_i \mid i = 1, 2, \ldots, n\}, the sum S_i of the distances from each feature point a_i to the remaining feature points is computed and compared with the mean H of these distance sums, and if S_i > H, a_i is regarded as an isolated point and deleted, eliminating the influence of noise and isolated points, where

S_i = \sum_{j=1,\, j \neq i}^{n} d(a_i, a_j), \qquad H = \frac{1}{n} \sum_{i=1}^{n} S_i.
5. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 4, wherein clustering the feature points in step S2, obtaining the key feature points of the multi-level sampling plane after clustering, and screening the highly correlated key feature points comprises the following steps:
step S231, selecting a suitable K value and clustering the obtained feature points over the fixed internal feature directions;
step S232, computing the cosine distance for each point of the plane, comparing the cosine values of the plane vectors to measure individual differences, and comparing the included-angle values to screen out direction differences between the feature vectors;
step S233, repeating steps S231 and S232 until the cluster blocks and cluster centers reach a stable state, and labeling the clusters with different colors;
step S234, building a feature matrix T from the obtained feature points, where each row of the matrix represents one feature point; several sampling points are extracted around each feature point and P sampling pairs are formed from them, so the feature matrix T has P columns;
step S235, computing the mean and variance of each column of the matrix, sorting the columns of the feature matrix T in descending order of variance and selecting the first Q columns, which completes the screening of key feature points;
step S236, obtaining the Q-column binary descriptor, whose columns are arranged from high variance to low variance, and selecting the first Q/2 columns for matching.
6. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 5, wherein after step S236 the method further comprises:
if the distance over the first Q/2 columns of two feature points to be matched is smaller than a set threshold, the remaining column information is used to complete the match.
7. The vehicle identification method for three-dimensional stitched video of an expressway service area according to claim 1, wherein step S3 further comprises:
if a blank area is detected in the sampling plane, the grids of the blank area are culled and no subsequent UV mapping is performed for them;
if an object-filled area is detected in the sampling plane, vehicle detection is performed on it and the grid bounding box of each detected vehicle area is calibrated.
CN202110523304.6A, filed 2021-05-13 (priority 2021-05-13): Vehicle identification method for three-dimensional stitched video of an expressway service area. Active, granted as CN113065531B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110523304.6A (granted as CN113065531B) | 2021-05-13 | 2021-05-13 | Vehicle identification method for three-dimensional stitched video of an expressway service area


Publications (2)

Publication Number Publication Date
CN113065531A 2021-07-02
CN113065531B 2024-05-14

Family

ID=76568652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110523304.6A (granted as CN113065531B, Active) | 2021-05-13 | 2021-05-13 | Vehicle identification method for three-dimensional stitched video of an expressway service area

Country Status (1)

Country Link
CN (1) CN113065531B



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654507A (en) * 2015-12-24 2016-06-08 北京航天测控技术有限公司 Vehicle outer contour dimension measuring method based on image dynamic feature tracking
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
WO2019054561A1 * 2017-09-15 2019-03-21 Seoul National University of Science and Technology Industry-Academic Cooperation Foundation 360-degree image encoding device and method, and recording medium for performing same
CN111798560A (en) * 2020-06-09 2020-10-20 同济大学 Three-dimensional real-scene model visualization method for infrared thermal image temperature measurement data of power equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Zijian; RUAN Qiuqi: "Copy-move forgery detection based on LPP and improved SIFT", Journal of Signal Processing (信号处理), no. 04 *
YANG Haoyu; DAI Hualin; WANG Li; ZHANG Rui: "Application of a K-means-based ORB algorithm in forward vehicle recognition", Transducer and Microsystem Technologies (传感器与微系统), no. 10 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205666A (en) * 2022-09-16 2022-10-18 太平金融科技服务(上海)有限公司深圳分公司 Image analysis method, apparatus, server, medium, and computer program product
CN115205666B (en) * 2022-09-16 2023-03-07 太平金融科技服务(上海)有限公司深圳分公司 Image analysis method, image analysis device, image analysis server, and image analysis medium
CN116523852A (en) * 2023-04-13 2023-08-01 成都飞机工业(集团)有限责任公司 Foreign matter detection method of carbon fiber composite material based on feature matching

Also Published As

Publication number Publication date
CN113065531B 2024-05-14

Similar Documents

Publication Publication Date Title
US20210197851A1 (en) Method for building virtual scenario library for autonomous vehicle
CN102208013B (en) Landscape coupling reference data generation system and position measuring system
US9058744B2 (en) Image based detecting system and method for traffic parameters and computer program product thereof
US11208101B2 (en) Device and method for measuring transverse distribution of wheel path
CN113065531A Vehicle identification method for three-dimensional stitched video of an expressway service area
Yang et al. Image-based visibility estimation algorithm for intelligent transportation systems
CN111027430B (en) Traffic scene complexity calculation method for intelligent evaluation of unmanned vehicles
CN109448397B (en) Group fog monitoring method based on big data
CN113147733B (en) Intelligent speed limiting system and method for automobile in rain, fog and sand dust weather
CN101567041A (en) Method for recognizing characters of number plate images of motor vehicles based on trimetric projection
CN112381101B (en) Infrared road scene segmentation method based on category prototype regression
CN113378751A (en) Traffic target identification method based on DBSCAN algorithm
CN112084890A (en) Multi-scale traffic signal sign identification method based on GMM and CQFL
CN114612883A (en) Forward vehicle distance detection method based on cascade SSD and monocular depth estimation
CN114926984B (en) Real-time traffic conflict collection and road safety evaluation method
Oh et al. In-depth understanding of lane changing interactions for in-vehicle driving assistance systems
Ries et al. Trajectory-based clustering of real-world urban driving sequences with multiple traffic objects
BARODI et al. Improved deep learning performance for real-time traffic sign detection and recognition applicable to intelligent transportation systems
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN115544888A (en) Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory
CN114912689A (en) Map grid index and XGBOST-based over-limit vehicle destination prediction method and system
Ning et al. Automatic driving scene target detection algorithm based on improved yolov5 network
CN113095387B (en) Road risk identification method based on networking vehicle-mounted ADAS
CN112559968A (en) Driving style representation learning method based on multi-situation data
CN114882393B (en) Road reverse running and traffic accident event detection method based on target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
    Inventors after change: Fu Chao; Huang Shifeng; Ma Yiming; Shen Tianxing; Yang Ju; Liu Yi; Chen Yunfei; Ni Zhaoxin
    Inventors before change: Huang Shifeng; Ma Yiming; Shen Tianxing; Yang Ju; Liu Yi; Chen Yunfei; Ni Zhaoxin; Fu Chao
GR01 Patent grant