KR20170104756A - Local size specific vehicle classifying method and vehicle detection method using the classifying method - Google Patents

Local size specific vehicle classifying method and vehicle detection method using the classifying method Download PDF

Info

Publication number
KR20170104756A
Authority
KR
South Korea
Prior art keywords
vehicle
area model
size
semantic area
semantic
Prior art date
Application number
KR1020160027538A
Other languages
Korean (ko)
Inventor
노승종 (Seungjong Noh)
전문구 (Moongu Jeon)
Original Assignee
광주과학기술원 (Gwangju Institute of Science and Technology)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 광주과학기술원 (Gwangju Institute of Science and Technology)
Priority to KR1020160027538A priority Critical patent/KR20170104756A/en
Priority to PCT/KR2017/002534 priority patent/WO2017155315A1/en
Publication of KR20170104756A publication Critical patent/KR20170104756A/en

Links

Images

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for classifying size-specific vehicles by local area according to the present invention comprises: generating a semantic area model; collecting vehicle image samples for each semantic area model; and sorting the vehicle image samples collected for each semantic area model by size pattern.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to a method of classifying vehicles by size for each local area and to a vehicle detection method using the same.

The present invention relates to a method for classifying size-specific vehicles by local area and to a method of detecting vehicles using the same. More particularly, the present invention relates to a vehicle classifying method based on a local size-specific classifier (LSC) and to a vehicle detecting method using that classifier.

Recently, as the number of people using vehicles for work and leisure has increased, it has become increasingly difficult to identify the vehicle information required for traffic flow and route control, enforcement against traffic violations, parking management, and crime prevention.

Vehicles are generally identified either by having a person patrol the site or by visually checking vehicle information in images acquired from a camera through additional manual work. Because vehicle information cannot be acquired immediately in this way, tasks such as controlling overloaded vehicles, intercepting illegal vehicles, and apprehending vehicles of interest are time consuming.

Recently, the need for ITS (Intelligent Transportation System) that can automatically perform traffic surveillance tasks is increasing, and related technologies are being researched and developed.

For sophisticated vehicle detection, the ambiguity of the position and size of the vehicles observed in the captured image and the diversity of their appearance must be handled effectively.

Generally, sliding-window-based image scanning is applied to address these problems in video surveillance systems. Although sliding-window techniques have improved in handling the ambiguity of position and size, they still have difficulty dealing with the diversity of vehicle exterior shapes.

Conventional sliding-window techniques first create bounding boxes corresponding to provisional detection results by placing search windows of a specific size at each position in an input frame. The final detection result is then derived by evaluating the visual information of the sub-images extracted from the area defined by each bounding box with a previously learned image classifier.
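For illustration only, the scanning loop described above can be sketched as follows in Python; the window size, stride, and classifier interface are assumptions, not values taken from the present disclosure.

```python
# Minimal sliding-window sketch (illustrative; window size and stride are assumptions).
import numpy as np

def sliding_window_boxes(frame_h, frame_w, win=64, stride=16):
    """Enumerate provisional bounding boxes (x, y, w, h) over the frame."""
    boxes = []
    for y in range(0, frame_h - win + 1, stride):
        for x in range(0, frame_w - win + 1, stride):
            boxes.append((x, y, win, win))
    return boxes

def detect(frame, classifier, win=64, stride=16):
    """Evaluate a pre-learned image classifier on every sub-image."""
    h, w = frame.shape[:2]
    detections = []
    for (x, y, bw, bh) in sliding_window_boxes(h, w, win, stride):
        patch = frame[y:y + bh, x:x + bw]
        score = classifier(patch)          # e.g. an SVM decision value
        if score > 0:
            detections.append(((x, y, bw, bh), score))
    return detections
```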

In such existing detection methods, the size to which the sample images are normalized greatly affects the performance of the resulting vehicle detector. In conventional sliding-window-based detection, the user determines the normalization size of the sample images arbitrarily, without sufficient consideration, so the detection result varies unstably depending on the state of the monitored screen.

Korean Patent Publication No. 10-2015-0134548: Vehicle Detection Device
Korean Patent No. 10-1264282: Detection method of road vehicles using ROI

C. Papageorgiou and T. Poggio, "A trainable system for object detection", IJCV, Vol. 38, pp. 15-33, 2000.
N. Dalal, "Finding People in Images and Videos", PhD thesis, Institut National Polytechnique de Grenoble, 2006.
R. Feris, B. Siddiquie, Y. Zhai, J. Petterson, L. Brown and S. Pankanti, "Attribute-based vehicle search in crowded surveillance videos", in Proc. ICMR, 2011.
R. Feris, B. Siddiquie, J. Petterson, Y. Zhai, A. Datta, L. Brown and S. Pankanti, "Large-scale vehicle detection, indexing, and search in urban surveillance videos", IEEE Trans. Multimedia, Vol. 14, pp. 28-42, 2012.
S.-J. Noh, D. Shim and M. Jeon, "Adaptive Sliding-Window Strategy for Vehicle Detection in Highway Environments", IEEE Trans. Intell. Transp. Systems, accepted, 2015.
S. Noh and M. Jeon, "A new framework for background subtraction using multiple cues", in Proc. ACCV, pp. 493-506, 2012.
S. Theodoridis and K. Koutroumbas, "Sequential clustering algorithms", pp. 633-643, in Pattern Recognition, 4th Edition, Elsevier, 2008.

The present invention has been made to solve the above problems. It is an object of the present invention to provide a local size-specific classifier (LSC) that estimates the size of a vehicle according to its position on the screen by using the contextual information inherent in the screen, and to provide a vehicle detection method using the LSC so that vehicle size and location information can be identified accurately.

According to the present invention, there is provided a method for classifying size-specific vehicles by local area, the method comprising: generating a semantic area model; collecting vehicle image samples for each semantic area model; and classifying the vehicle image samples collected for each semantic area model by size pattern.

The size pattern may be generated by a size based on the outline of the vehicle, which is specified by the semantic area model.

The outline of the vehicle can be determined by the type of the vehicle.

The size pattern may be caused by the difference between the classes.

The size pattern classification may be performed using a HOG-SVM (histogram of oriented gradients support vector machine) classifier.

According to another embodiment of the present invention, there is provided a vehicle detection method comprising: determining the semantic area model most closely related to a region of interest; and detecting a vehicle by comparing the region of interest with the vehicle image samples collected and classified for that semantic area model.

The vehicle detection can be accomplished through a mean-shift mode-search based optimization algorithm.

According to another embodiment of the present invention, there is provided a local size-specific vehicle classification and detection method comprising: generating a semantic area model; collecting vehicle image samples for each semantic area model; classifying the vehicle image samples collected for each semantic area model by size pattern; determining the semantic area model most closely related to a region of interest; and detecting a vehicle by comparing the region of interest with the vehicle image samples collected and classified for that semantic area model.

The size pattern may be generated by a size based on the outline of the vehicle, which is specified by the semantic area model.

The outline of the vehicle can be determined by the type of the vehicle.

The size pattern may be caused by the difference between the classes.

According to yet another embodiment of the present invention, there is provided at least one computer program stored on a computer-readable medium for causing a computer to implement the steps of: generating a semantic area model; collecting vehicle image samples for each semantic area model; and classifying the vehicle image samples collected for each semantic area model by size pattern.

According to yet another embodiment of the present invention, there is provided at least one computer program stored on a computer-readable medium for causing a computer to implement the steps of: determining the semantic area model most closely related to a region of interest; and detecting a vehicle by comparing the region of interest with the vehicle image samples collected and classified for that semantic area model.

According to yet another embodiment of the present invention, there is provided at least one computer program stored on a computer-readable medium for causing a computer to implement the steps of: generating a semantic area model; collecting vehicle image samples for each semantic area model; classifying the vehicle image samples collected for each semantic area model by size pattern; determining the semantic area model most closely related to a region of interest; and detecting a vehicle by comparing the region of interest with the vehicle image samples collected and classified for that semantic area model.

The vehicle detection method according to the present invention has an advantage that accurate vehicle detection can be performed by providing a detailed outer shape model according to the size classification of the vehicle.

In addition, according to the vehicle detection method of the present invention, since the image classifier (LSC) formed by learning from sample images already incorporates vehicle size information, a separate calculation step for predicting the size of the vehicle can be omitted, which increases detection accuracy while reducing the amount of computation.

FIG. 1 illustrates examples of LSCs actually learned in accordance with an embodiment of the present invention.
FIG. 2 shows the relationship between the sample normalization size and vehicle detection performance.
FIG. 3 shows a sample collection outline for the SRM R1 shown in FIG. 1.
FIG. 4 shows learning samples assigned to the size patterns registered in the SRM R1 shown in FIG. 1 and examples of LSCs generated from them.
FIG. 5 shows detection-related results for a particular region of interest.
FIG. 6 illustrates an algorithm for performing initial detection in accordance with an embodiment of the present invention.
FIG. 7 shows quantitative vehicle detection results according to an embodiment of the present invention.
FIG. 8 shows a comparison of operation speeds between existing methods and the present method.
FIG. 9 shows qualitative vehicle detection results according to an embodiment of the present invention.
FIG. 10 shows results of classification and learning by size pattern according to an embodiment of the present invention.

Hereinafter, a vehicle detection method according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. The present invention is capable of various modifications and various forms, and specific embodiments are illustrated in the drawings and described in detail in the text. It is to be understood, however, that the invention is not intended to be limited to the particular forms disclosed, but on the contrary is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing. In the accompanying drawings, the dimensions of structures are exaggerated for clarity of illustration.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this application, terms such as "comprises" and "having" are used to specify the presence of stated features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their contextual meaning in the related art and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in this application.

1. Necessity of LSC

In the following, we examine how the normalized sample size used when learning an image classifier affects the performance of a vehicle detector. For the experiment, we first normalized positive and negative vehicle sample images collected in advance, as shown in FIG. 2(a), to sizes of 48x48, 64x64, 80x80 and 96x96. The normalized samples were then used to learn support vector machine (SVM) classifiers based on the HOG (histogram of oriented gradients) feature for each normalization size. To investigate the correlation between each classifier and vehicle size, we set regions of interest for each vehicle scale as shown in FIGS. 2(a) to 2(c), and the detection accuracy (the harmonic mean of detection reliability and sensitivity) was measured as shown in FIG. 2(d).
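For illustration, the training part of this experiment can be sketched as follows, assuming grayscale sample crops and using scikit-image's HOG implementation and scikit-learn's linear SVM as stand-ins for the HOG-SVM classifier; the HOG parameters shown are common defaults, not values taken from the disclosure.

```python
# Train one HOG-SVM per sample-normalization size (sketch; library choices are assumptions).
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(img, size):
    """Resize a grayscale crop to size x size and compute its HOG descriptor."""
    img = cv2.resize(img, (size, size))
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_classifiers(pos_samples, neg_samples, sizes=(48, 64, 80, 96)):
    """Return one linear SVM per normalization size, as in the experiment of FIG. 2."""
    classifiers = {}
    for s in sizes:
        X = [hog_features(img, s) for img in pos_samples + neg_samples]
        y = [1] * len(pos_samples) + [0] * len(neg_samples)
        classifiers[s] = LinearSVC(C=1.0).fit(np.array(X), np.array(y))
    return classifiers
```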

First, we can see that a smaller sample normalization size leads to higher accuracy for small-scale vehicles. However, as the results for the 48x48 and 64x64 sizes show, normalizing the samples to an excessively small size yields very low detection accuracy for large-scale vehicles. On the other hand, if the sample normalization size is sufficiently large, such as 80x80 or more, higher detection accuracy tends to be obtained for larger-scale vehicles. However, as the values for the 96x96 size show, too large a sample normalization size degrades the overall detection accuracy. This is because, as the size of the normalized sample increases, the discriminative power of the classifier improves and the detection reliability increases, but at the same time the minimum size of detectable vehicles becomes larger and the detection sensitivity decreases. Therefore, to secure more stable vehicle detection performance, the sample normalization size must be set very carefully for each region of each screen to be monitored.

The LSC proposed by the present invention is designed to meet this requirement. Instead of relying on user input, the LSC automatically determines the sample normalization size based on the size pattern of the vehicles actually observed in each local region of the screen, which ensures that the system always maintains good vehicle detection performance (FIG. 2(d)). In the following, we explain the proposed LSC learning method and the vehicle detection method using the LSC in more detail.

2. LSC Learning

2.1 Generation of external classifier learning data by local region

The diversity of the vehicle exterior on the CCTV screen is mainly caused by changes in the pose of the vehicle, differences within the class (for example, differences in appearance between buses), and differences between classes (for example, differences in appearance between passenger cars and buses).

First of all, to overcome appearance changes caused by pose changes, we follow the approach of S.-J. Noh, D. Shim and M. Jeon, "Adaptive Sliding-Window Strategy for Vehicle Detection in Highway Environments", IEEE Trans. Intell. Transp. Systems, accepted, 2015, to generate local area models, i.e., semantic area models (SRMs; the ellipses in FIG. 1(a)), for a given scene.

In general, the position of the vehicle in the screen is directly related to the vehicle scale and the direction of vehicle movement is directly related to the pose. Therefore, the SRMs generated based on the position and the moving direction characteristics of the vehicles can be regarded as regions showing consistent characteristics in terms of scale and pose of the vehicle.

That is, within each SRM, a particular vehicle has only one scale and pose pattern. Based on this fact, we limit the appearance-change factors due to pose by learning an image classifier for each SRM.

Once the creation of the SRMs is complete, we collect vehicle image samples using a known background removal technique (S. Noh and M. Jeon, "A new framework for background subtraction using multiple cues", in Proc. ACCV, pp. 493-506, 2012; see Non-Patent Document 04). After that, we identify which SRMs are most closely related spatially to each collected sample using the following equation:

Equation (1) [equation image not reproduced in this text]

Assuming that P(sample) follows a uniform distribution, Equation (1) can be transformed into the equivalent reliability measure of Equation (2):

Equation (2) [equation image not reproduced in this text]

Further, if we let s = (x, y)^T denote the position in the input frame from which a sample is extracted (FIG. 3(a)), Equation (2) can be approximated by the following function:

Equation (3) [equation image not reproduced in this text]

Here, W_SRM represents the weight calculated in the SRM generation process, and m_SRM and Σ_SRM represent the center vector and the covariance matrix that determine the position and shape of the SRM, respectively (see Non-Patent Document 05). N(·) denotes a two-dimensional normal distribution.

The confidence value of Equation (3) is calculated between every sample and every SRM, and each sample is used for image classifier learning for the N_S SRMs that yielded the highest confidence (FIG. 3(b)). Here, N_S is a trade-off variable between classifier accuracy and learning efficiency. If the value of N_S increases, the variation in the shapes of the samples included in each SRM also increases and the accuracy of the classifier decreases. Conversely, as the value of N_S decreases, the probability that each sample is assigned to a particular SRM is lowered, increasing the total number of samples that must be collected for classifier learning.

In particular, if the value of N_S becomes larger than the number of SRMs defined in the screen, all samples are used for learning of all SRMs. In this study, we confirmed through a number of experiments that the best detection performance was achieved when N_S was set to 2-4.
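A minimal sketch of this sample-to-SRM assignment, assuming each SRM is stored as a weight W_SRM, a 2-D center vector m_SRM, and a 2x2 covariance matrix Σ_SRM as described above; the dictionary layout used below is an assumption.

```python
# Assign each sample to the N_S most relevant SRMs via the Eq. (3)-style confidence.
import numpy as np
from scipy.stats import multivariate_normal

def srm_confidence(sample_pos, srm):
    """Confidence in the style of Eq. (3): SRM weight times a 2-D normal density at the sample position."""
    return srm["weight"] * multivariate_normal.pdf(sample_pos, mean=srm["mean"], cov=srm["cov"])

def assign_sample_to_srms(sample_pos, srms, n_s=3):
    """Return indices of the N_S highest-confidence SRMs (N_S = 2-4 worked best in the text)."""
    pos = np.asarray(sample_pos, dtype=float)
    conf = [srm_confidence(pos, srm) for srm in srms]
    return list(np.argsort(conf)[::-1][:n_s])
```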

2.2 Learning Outer Classifier by Vehicle Size Pattern

The size information of the bounding box defined for the on-screen vehicle can be thought of as divided into the scale and aspect-ratio elements, which are mainly determined by the vehicle's position, pose and vehicle type.

More specifically, the vehicle position is directly related to the scale, the pose to the aspect ratio, and the vehicle type to the scale and aspect ratio. However, according to the present invention, classifier learning is performed for SRMs having consistent characteristics in terms of position and pose. Therefore, it can be considered that there is hardly any change in vehicle scale and aspect ratio according to position and pose changes.

Thus, we perform learning of the vehicle size and appearance model assuming that the vehicle's scale and pose information is determined solely by the vehicle type to which it belongs.

Let S_SRM = {sample_k | k = 1, ..., N_k} be the set of samples collected for each SRM through the above process. To create a size model for each SRM, we first compute the following feature vector for each collected sample_k:

Equation (4) [equation image not reproduced in this text]

In this equation, sc_k and ar_k are factors quantifying the scale and aspect-ratio characteristics, and they are defined as follows based on the horizontal and vertical lengths of the sample:

Equation (5) [equation image not reproduced in this text]

Equation (6) [equation image not reproduced in this text]

Here, the two data rescaling factors calculated by Equations (7) and (8) serve to map the data values into the interval [0, 1].

Equation (7) [equation image not reproduced in this text]

Equation (8) [equation image not reproduced in this text]

The r_k vectors computed for all samples k are clustered with a simple sequential clustering algorithm (see Non-Patent Document 07), and the mean vectors of the clusters generated through this process [symbol image not reproduced] are utilized as the size pattern models for the SRM.
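A minimal sketch of a basic sequential clustering scheme of the kind cited from Non-Patent Document 07; the distance threshold is an assumed parameter.

```python
# Basic sequential clustering of the r_k vectors; cluster means act as size-pattern models.
import numpy as np

def sequential_cluster(features, threshold=0.15):
    """Return (labels, cluster mean vectors) for the 2-D r_k feature vectors."""
    means, counts, labels = [], [], []
    for r in np.asarray(features, dtype=float):
        if not means:                                     # first sample opens the first cluster
            means.append(r.copy()); counts.append(1); labels.append(0)
            continue
        d = [np.linalg.norm(r - m) for m in means]
        j = int(np.argmin(d))
        if d[j] > threshold:                              # too far from every cluster: open a new one
            means.append(r.copy()); counts.append(1); labels.append(len(means) - 1)
        else:                                             # otherwise update the nearest cluster mean
            counts[j] += 1
            means[j] += (r - means[j]) / counts[j]
            labels.append(j)
    return labels, np.stack(means)
```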

Once the size pattern modeling is complete, we determine which samples belonging to S_SRM should be used for appearance model learning for which size pattern:

Equation (9) [equation image not reproduced in this text]

Here, p is the index of the size model determined to be most relevant to sample_k, and N_m is the total number of size patterns. Through this process, the samples assigned to each size pattern are size-normalized according to the mean vector of that pattern [symbol image not reproduced] and are then used to learn HOG-SVM image classifiers (see Non-Patent Document 02), i.e., the LSCs.
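A sketch of this per-size-pattern learning step, reusing the hog_features helper from the earlier sketch; the mapping from a size-pattern mean vector to a pixel normalization size (norm_sizes) is not specified in this text and is therefore an assumption.

```python
# Train one LSC (HOG-SVM) per size pattern (sketch; the pattern-to-pixel-size mapping is assumed).
import numpy as np
from sklearn.svm import LinearSVC

def assign_to_pattern(r_k, pattern_means):
    """Eq. (9)-style assignment: index of the closest size-pattern mean vector."""
    return int(np.argmin(np.linalg.norm(pattern_means - r_k, axis=1)))

def train_lscs(samples, features, pattern_means, norm_sizes, neg_samples):
    """samples[i] is an image crop, features[i] its r_k vector; norm_sizes[p] is an assumed
    normalization size for pattern p. Returns {pattern index: (size, classifier)}."""
    lscs = {}
    for p in range(len(pattern_means)):
        pos = [img for img, r in zip(samples, features)
               if assign_to_pattern(r, pattern_means) == p]
        if not pos:
            continue
        s = norm_sizes[p]
        X = [hog_features(img, s) for img in pos + list(neg_samples)]  # helper from earlier sketch
        y = [1] * len(pos) + [0] * len(neg_samples)
        lscs[p] = (s, LinearSVC(C=1.0).fit(np.array(X), np.array(y)))
    return lscs
```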

FIG. 4 shows examples of several LSCs actually learned for each size pattern registered in a specific SRM. This example confirms that the proposed LSC-based appearance model greatly reduces the between-class differences caused by different vehicle types, while at the same time expressing the within-class differences among vehicles of similar size very effectively.

In addition to the HOG-SVM used in the present invention, more recent Adaboost cascade classifiers, exemplar-based classifiers, deformable part models, and 3D structure-based classifiers could also be used for learning the appearance model for each size pattern. However, we did not use these classifiers in the present invention because they are less practical than HOG-SVM in terms of learning time and implementation complexity.

In addition, we confirmed that sufficiently reliable vehicle detection performance can be achieved with a HOG-SVM-based classifier even without using such complex classifiers.

Referring to FIG. 10, a result of classifying and learning by size pattern according to an embodiment of the present invention is shown.

3. LSC-based vehicle detection

To ensure speed efficiency, we first define regions of interest in the input frame using a background removal technique and then perform vehicle detection independently for each region of interest. To perform detection for a particular region of interest, we first select the SRM most closely related to that region of interest through the following equation:

Equation (10) [equation image not reproduced in this text]

In this equation, Z denotes the center coordinates of the region of interest under consideration, and R denotes the set of all SRMs defined in the screen. Next, we sort the LSCs included in the selected SRM_Z in descending order of the scale value of the corresponding size-pattern model, and then perform the algorithm of FIG. 6 to obtain the initial detection results (FIG. 5(a)).
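A sketch of the region-of-interest definition and SRM selection, assuming OpenCV's MOG2 background subtractor as a stand-in for the cited background-removal technique and reusing the Equation (3)-style confidence for the selection, since Equation (10) itself is not reproduced here.

```python
# Define ROIs by background subtraction and pick the most relevant SRM per ROI (sketch).
import cv2
import numpy as np
from scipy.stats import multivariate_normal

bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

def regions_of_interest(frame, min_area=400):
    """Foreground blobs larger than min_area become ROIs (x, y, w, h)."""
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def select_srm(roi, srms):
    """Pick the SRM most relevant to the ROI center Z (Eq. (10)-style selection, assumed form)."""
    x, y, w, h = roi
    z = np.array([x + w / 2.0, y + h / 2.0])
    conf = [s["weight"] * multivariate_normal.pdf(z, mean=s["mean"], cov=s["cov"]) for s in srms]
    return int(np.argmax(conf))
```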

The most important characteristic of the process specified in FIG. 6 is that it first finds initial detection responses for large scales and then for small scales, and that a threshold τ_0 ∈ [0, 100] determines how much overlap (redundancy) between detections is allowed. When τ_0 is small, the probability of detecting small vehicles located around a large vehicle is high, but the probability of false detections is also high (FIG. 5(c)). Conversely, if τ_0 is increased, the probability of false detections decreases, but the probability that small vehicles around large vehicles are missed also increases (FIG. 5(d)).

In the present invention, we recommend setting τ_0 to a value between 40 and 50 (FIG. 5(b)).
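Since the algorithm of FIG. 6 itself is not reproduced in this text, the following sketch shows only the large-to-small scanning order and the τ_0 overlap rule described above; the particular overlap measure is an assumption.

```python
# Initial detection sketch: scan LSCs from largest to smallest scale and keep a response
# only if its overlap with already-accepted boxes stays below tau_0 percent.
def overlap_percent(a, b):
    """Percentage of box a covered by box b; boxes are (x, y, w, h). Assumed overlap measure."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return 100.0 * ix * iy / float(aw * ah)

def initial_detections(lscs_sorted_by_scale_desc, scan_fn, tau0=45):
    """scan_fn(lsc) yields (box, score) responses for one LSC within the ROI; tau0 in [0, 100]."""
    accepted = []
    for lsc in lscs_sorted_by_scale_desc:                 # large scales first
        for box, score in scan_fn(lsc):
            if all(overlap_percent(box, kept) < tau0 for kept, _ in accepted):
                accepted.append((box, score))
    return accepted
```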

Given the vectors d' ∈ D_init (FIG. 5(a)) corresponding to the initial detection responses for a given region of interest and their matching scores sc_d, the final vehicle detection result d'' ∈ D_final (FIG. 5(b)) can be obtained through a mean-shift mode-search based optimization algorithm.

More specifically, as in Non-Patent Document 02, the d' vectors are set as data points and the sc_d values as their weights, and a mean-shift algorithm is then applied to the data points to find the mode points of the underlying probability density; the mode points found correspond directly to the final vehicle detection results d''. A more detailed description can be found in Non-Patent Document 02.
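A sketch of the weighted mean-shift mode search over the initial detection vectors, with the matching scores used as weights; the Gaussian kernel, bandwidth, and merging tolerance are assumptions.

```python
# Weighted mean-shift mode search over the initial detections d' with scores sc_d as weights.
import numpy as np

def mean_shift_modes(points, weights, bandwidth=20.0, iters=30, merge_tol=5.0):
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    modes = pts.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            d2 = np.sum((pts - modes[i]) ** 2, axis=1)
            k = w * np.exp(-d2 / (2.0 * bandwidth ** 2))   # score-weighted Gaussian kernel
            modes[i] = (k[:, None] * pts).sum(axis=0) / k.sum()
    # merge modes that converged to (nearly) the same point -> final detections d''
    final = []
    for m in modes:
        if all(np.linalg.norm(m - f) > merge_tol for f in final):
            final.append(m)
    return np.array(final)
```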

4. Experimental Results and Analysis

Experiments were performed on five actual highway traffic surveillance scenes without any restriction on the type, size, or position of the vehicles displayed on the screen (FIG. 9). For each scene, a data set consisting of 20,000 learning frames and 4,000 test frames was constructed, and ground-truth information was prepared for the vehicles in arbitrarily selected test frames.

The simulator was implemented with the Visual Studio 2010 compiler using the Parallel Patterns Library (PPL) on a workstation with an Intel Xeon E5-2670 CPU.

FIG. 9 shows qualitative vehicle detection results obtained with the proposed technique. These results confirm that the proposed vehicle detection technique guarantees very accurate performance for vehicles of various shapes, sizes, and resolutions. In addition, since the main contribution of the proposed scheme is a new type of s-window-based vehicle detection scheme based on LSCs, we compared it with the most basic s-window technique (CSW, Non-Patent Document 02) and the latest s-window technique (ASW, Non-Patent Document 05). The results show that the proposed vehicle detection method always provides reliable detection results, unlike CSW and ASW, whose detection accuracy varies sensitively with the state of each traffic monitoring screen. For each s-window technique, the appearance model was learned with a HOG-SVM classifier using the same learning samples as for the LSC. Referring to FIG. 8, it can also be seen that the proposed technique outperforms the existing s-window techniques.

5. Conclusion

In the present invention, we have proposed a more efficient vehicle detection technique based on a new type of classifier, the LSC, which is learned from automatically size-normalized samples. As the experiments show, the proposed method provides very fast and accurate vehicle detection compared with conventional s-window-based vehicle detection methods. However, in this study we did not consider changes in vehicle detection performance due to environmental changes on the road, such as changes in illumination and weather. Future research will improve the proposed technique so that these issues can also be handled precisely.

The description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features presented herein.

Claims (14)

Generating a semantic area model;
Collecting vehicle image samples for each semantic area model; And
Classifying the vehicle image samples collected for each semantic area model by size pattern
A method for classifying size-specific vehicles by local area, the method comprising the above steps.
The method according to claim 1,
Wherein the size pattern is generated by a size based on an external shape of the vehicle, the size being specified for each of the semantic area models.
3. The method of claim 2,
Wherein the external shape of the vehicle is determined by the type of the vehicle.
The method according to claim 1,
Wherein the size pattern is caused by a difference between classes.
The method according to claim 1,
Wherein the classification by size pattern is performed using a HOG-SVM (histogram of oriented gradients support vector machine) classifier.
Determining a semantic area model having closest relevance to a region of interest; And
Detecting a vehicle by comparing the vehicle image samples collected and sorted by the semantic area model with the area of interest,
A local-area vehicle detection method comprising the above steps.
The method according to claim 6,
Wherein the vehicle detection is performed through a mean-shift mode-search based optimization algorithm.
Generating a semantic area model;
Collecting vehicle image samples for each semantic area model;
Classifying the vehicle image samples collected for each semantic area model by size pattern;
Determining a semantic area model having closest relevance to a region of interest; And
Detecting a vehicle by comparing the vehicle image samples collected and sorted by the semantic area model with the area of interest,
A local size specific vehicle classification and detection method.
9. The method of claim 8,
Wherein the size pattern is generated by a size based on the outline of the vehicle specified by the semantic area model.
10. The method of claim 9,
Wherein the external shape of the vehicle is determined by the type of the vehicle.
9. The method of claim 8,
Wherein the size pattern is caused by a difference between classes.
At least one computer program for causing a computer to implement the steps of:
Generating a semantic area model;
Collecting vehicle image samples for each semantic area model; And
Classifying the vehicle image samples collected for each semantic area model by size pattern
Wherein the at least one computer program is stored on a computer-readable medium.
At least one computer program for causing a computer to implement the steps of:
Determining a semantic area model having closest relevance to a region of interest; And
Detecting a vehicle by comparing the vehicle image samples collected and sorted by the semantic area model with the area of interest,
Wherein the at least one computer program is stored on a computer-readable medium.
At least one computer program for causing a computer to implement the steps of:
Generating a semantic area model;
Collecting vehicle image samples for each semantic area model;
Classifying the vehicle image samples collected for each semantic area model by size pattern;
Determining a semantic area model having closest relevance to a region of interest; And
Detecting a vehicle by comparing the vehicle image samples collected and sorted by the semantic area model with the area of interest,
Wherein the at least one computer program is stored on a computer-readable medium.
KR1020160027538A 2016-03-08 2016-03-08 Local size specific vehicle classifying method and vehicle detection method using the classifying method KR20170104756A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020160027538A KR20170104756A (en) 2016-03-08 2016-03-08 Local size specific vehicle classifying method and vehicle detection method using the classifying method
PCT/KR2017/002534 WO2017155315A1 (en) 2016-03-08 2017-03-08 Size-specific vehicle classification method for local area, and vehicle detection method using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160027538A KR20170104756A (en) 2016-03-08 2016-03-08 Local size specific vehicle classifying method and vehicle detection method using the classifying method

Publications (1)

Publication Number Publication Date
KR20170104756A (en) 2017-09-18

Family

ID=59789621

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160027538A KR20170104756A (en) 2016-03-08 2016-03-08 Local size specific vehicle classifying method and vehicle detection method using the classifying method

Country Status (2)

Country Link
KR (1) KR20170104756A (en)
WO (1) WO2017155315A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758063A (en) * 2023-08-11 2023-09-15 南京航空航天大学 Workpiece size detection method based on image semantic segmentation

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154127A (en) * 2017-12-27 2018-06-12 天津智芯视界科技有限公司 A kind of vehicle identification method based on video and radar

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100485419B1 (en) * 2004-06-09 2005-04-27 주식회사 윤익씨엔씨 System For Acquiring Information On A Vehicle
KR100790867B1 (en) * 2005-01-14 2008-01-03 삼성전자주식회사 Method and apparatus for category-based photo clustering using photographic region templates of digital photo
KR100818317B1 (en) * 2006-07-07 2008-03-31 주식회사 피엘케이 테크놀로지 Method for recognizing a vehicle using the difference image
KR101038669B1 (en) * 2009-06-15 2011-06-02 (주) 알티솔루션 The automatic vehicle identification system of non-trigger type based on image processing and that of using identification method
KR101407901B1 (en) * 2010-12-06 2014-06-16 주식회사 만도 Vehicle Recognition System

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758063A (en) * 2023-08-11 2023-09-15 南京航空航天大学 Workpiece size detection method based on image semantic segmentation
CN116758063B (en) * 2023-08-11 2023-11-07 南京航空航天大学 Workpiece size detection method based on image semantic segmentation

Also Published As

Publication number Publication date
WO2017155315A1 (en) 2017-09-14
