KR101813783B1 - Method and System for Traffic Measurement using Computer Vision - Google Patents


Info

Publication number
KR101813783B1
Authority
KR
South Korea
Prior art keywords
image data
interest
region
extracting
traffic
Prior art date
Application number
KR1020160010496A
Other languages
Korean (ko)
Other versions
KR20170090081A (en)
Inventor
권장우
이옥민
이상민
Original Assignee
인하대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인하대학교 산학협력단 filed Critical 인하대학교 산학협력단
Priority to KR1020160010496A
Publication of KR20170090081A
Application granted
Publication of KR101813783B1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175 - Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G06K9/6204
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G06K2209/23
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]

Abstract

A method and system for measuring traffic volume using computer vision are presented. The proposed traffic volume measurement method includes extracting contours from image data acquired through a camera, dividing the image data to designate a region of interest, recognizing and tracking vehicles moving within the designated region of interest, and calculating the traffic density of those vehicles.

Description

TECHNICAL FIELD The present invention relates to a method and system for measuring traffic volume using computer vision.

The present invention relates to a traffic volume measurement system that measures the traffic volume of vehicles traveling on public roads and expressways using computer vision, and provides services such as determining travel and delay times, identifying speeding vehicles, locating accident areas, and managing stolen vehicles.

Demand for automobiles has increased sharply in modern society. At the same time, people expect the automobile to be more than a mere means of transport: driving convenience, safety from accidents, navigation systems, black boxes, and mounted multimedia devices. Demands on the driving environment itself have also grown, such as measuring traffic volume and enforcing speed limits, not just improving the car. The traffic measurement method currently in common use installs a sensor and a controller at the roadside and transmits the data to a server over an Ethernet link. This approach has high initial installation cost and occupies physical space. To solve these problems, a method that simplifies the system using computer vision is needed.

In a study related to the present invention (J. Korea Inst. of Inf. Commun. Eng., Vol. 17, No. 11: 2575-2580, Nov. 2013), a survey of driving vehicles using CCTV was presented to measure traffic volume. That study addressed the limitation that a probe vehicle could previously measure only its own direction of travel by installing two CCTVs on the probe vehicle, and automated the measurement system to measure traffic volume from the CCTV footage. However, since the probe vehicle must travel the same section repeatedly, the method cannot be used on roads where this is impossible, highways in particular. In addition, side effects caused by the probe vehicle itself (e.g., temporary congestion that may occur during a U-turn) are a problem.

SUMMARY OF THE INVENTION It is an object of the present invention to provide a method and system for measuring the traffic volume in a fixed area using a conventional CCTV already installed on a road, or simple equipment such as a webcam.

In one aspect, the proposed traffic volume measurement method using computer vision includes extracting contours from image data acquired through a camera, dividing the image data to designate a region of interest, recognizing and tracking vehicles moving within the designated region of interest using a motion history image, and calculating the traffic density of those vehicles.

The step of extracting the contours of the image data and dividing the image data to designate the region of interest includes: extracting contour lines from the image data; extracting, from the extracted contour lines, only the effective straight lines that outline the road; dividing the image data containing the extracted effective lines into left and right halves and applying a slope filter to each half; and dividing the slope-filtered image data into predetermined regions and finding the intersections of the extracted effective straight lines in each region.

The step of extracting contour lines from the image data removes edge components smaller than a predetermined number of pixels using a Canny edge search.

The step of extracting only the effective straight lines that outline the road from the extracted contour lines removes non-straight lines using a Hough transform, leaving only the outline of the road.

The step of dividing the slope-filtered image data into predetermined regions and finding the intersections of the extracted effective straight lines in each region finds the region containing the most intersections, computes the average of those intersections to obtain the vanishing point, and designates the straight line horizontal to the image data through the vanishing point as the upper side of the region of interest.

Recognizing and tracking vehicles moving within the designated region of interest using the motion history image includes updating the motion history image with a silhouette at each given pixel coordinate, and calculating the direction and motion components of the updated motion history image.

The step of calculating the traffic density calculates the average travel time, the average travel speed, and the traffic density per unit time using the number of vehicles passing during that time, the measurement section distance, and similar quantities.

According to another aspect of the present invention, a traffic volume measurement system using computer vision comprises: a region-of-interest designation unit that extracts the contours of image data acquired through a camera and divides the image data to designate a region of interest; a vehicle recognition and tracking unit that recognizes and tracks vehicles moving within the designated region of interest using motion history images; and a calculation unit that calculates the traffic density of those vehicles.

The region-of-interest designation unit extracts contour lines from the image data and extracts, from the extracted contour lines, only the effective straight lines that outline the road.

The region-of-interest designation unit divides the image data containing the extracted effective lines into left and right halves, applies a slope filter to each half, divides the slope-filtered image data into predetermined regions, and finds the intersections of the extracted effective straight lines in each region.

The vehicle recognition and tracking unit updates the motion history image with a silhouette at each given pixel coordinate, and calculates the direction and motion components of the updated motion history image.

The calculation unit calculates the average travel time, the average travel speed, and the traffic density per unit time using the number of vehicles passing during that time, the measurement section distance, and similar quantities.

According to embodiments of the present invention, measuring the traffic volume of vehicles moving on public roads and expressways through computer vision makes it possible to provide traffic services such as travel and delay time estimation, speeding vehicle control, accident area identification, and stolen vehicle management.

FIG. 1 is a flowchart illustrating a method of measuring traffic volume using computer vision according to an embodiment of the present invention.
FIG. 2 is a view for explaining a process of collecting image data and performing vehicle recognition and tracking according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a process of setting a dynamic region of interest according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a process of recognizing and tracking a vehicle according to an embodiment of the present invention.
FIG. 5 is a diagram for explaining a process of calculating traffic density according to an embodiment of the present invention.
FIG. 6 is a view for explaining a traffic volume measurement system using computer vision according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart illustrating a method of measuring traffic volume using computer vision according to an embodiment of the present invention.

The proposed traffic volume measurement method using computer vision includes a step 110 of extracting the contours of image data acquired through a camera and dividing the image data to designate a region of interest, a step 120 of recognizing and tracking vehicles moving within the designated region of interest, and a step 130 of calculating the traffic density of those vehicles.

In step 110, the contours of the image data acquired through the camera are extracted using computer vision, and the image data is divided to designate the region of interest. Step 110 includes extracting contour lines from the image data, extracting only the effective straight lines that outline the road from the extracted contour lines, dividing the image data containing the extracted effective lines into left and right halves and applying a slope filter to each half, and dividing the slope-filtered image data into predetermined regions and finding the intersections of the extracted effective straight lines in each region.

In the step of extracting contour lines from the image data, for example, a Canny edge search may be used to remove edge components smaller than a predetermined number of pixels.

In the step of extracting only the effective straight lines from the extracted contour lines, the non-straight lines may be removed using, for example, a Hough transform, leaving only the outline of the road.

In the step of dividing the slope-filtered image data into predetermined regions and finding the intersections of the effective straight lines in each region, the region containing the most intersections is found, the average of those intersections is computed to obtain the vanishing point, and the straight line horizontal to the image data through the vanishing point can be designated as the upper side of the region of interest.

In step 120, the motion history image is updated with a silhouette at each given pixel coordinate, and the direction and motion components of the updated motion history image can be calculated.

In step 130, the average travel time, the average travel speed, and the traffic density per unit time can be calculated using the number of vehicles passing during that time, the measurement section distance, and similar quantities.

FIG. 2 is a view for explaining a process of collecting image data and performing vehicle recognition and tracking according to an embodiment of the present invention.

First, image capture 210 is performed using a camera to collect image data. Then, a dynamic region of interest is set by image processing 220 on the acquired image data. Setting the dynamic region of interest allows vehicle recognition and tracking 230 to be performed through computer vision. The traffic density of the vehicles can then be calculated from the acquired image data. The processes of setting a dynamic region of interest and performing vehicle recognition and tracking are described in more detail with reference to FIGS. 3 and 4.

FIG. 3 is a flowchart illustrating a process of setting a dynamic ROI according to an embodiment of the present invention.

Since the proposed method measures traffic volume using computer vision, it operates on image data obtained from a camera. Such image data inevitably contains data outside the road area, which leads to unnecessary computation. Therefore, only the area covering the road should be designated as the operation area. This region is called the region of interest; in the present invention, it may have, for example, a hexagonal shape similar to a trapezoid.

First, contour lines are extracted from the image data. In the present invention, a Canny edge search 311 can be performed on the image data 310 obtained through image capture. To designate only the road area as the region of interest, the outline of the image must first be calculated, and a Canny edge search can be used for this. For example, considering that road boundaries are long, edge components smaller than 50 pixels can be regarded as unnecessary data and removed (313).
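The short-edge removal described here can be sketched in pure Python. This is a minimal illustration, not the patent's implementation: in practice a detector such as OpenCV's cv2.Canny would produce the edge map, the 50-pixel threshold follows the example in the text, and the function name remove_short_edges is hypothetical.

```python
from collections import deque

def remove_short_edges(edge_map, min_pixels=50):
    """Drop connected edge components with fewer than min_pixels pixels.

    edge_map: 2-D list of 0/1 values (e.g. the output of a Canny detector).
    Returns a new map keeping only sufficiently long components.
    """
    h, w = len(edge_map), len(edge_map[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if edge_map[sy][sx] and not seen[sy][sx]:
                # Collect one 8-connected component with BFS.
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and edge_map[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                # Keep the component only if it is long enough to be a road boundary.
                if len(comp) >= min_pixels:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

A 60-pixel edge run survives the filter while a 5-pixel run is discarded as noise.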

Then, edge components of 50 pixels or more are extracted as effective straight lines (321). The edge components remaining after the Canny edge search may or may not belong to the road outline. To leave only the outline of the road, the non-straight lines must be removed so that only straight lines remain. For example, a Hough transform can be used to extract only straight lines.
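The straight-line extraction can be illustrated with a minimal (rho, theta) voting accumulator, the core of the Hough transform. In practice a library routine such as OpenCV's cv2.HoughLinesP would be used; the names and the vote threshold below are illustrative.

```python
import math

def hough_lines(points, width, height, n_theta=180, vote_threshold=40):
    """Accumulate Hough votes in (rho, theta) space and keep supported lines.

    points: iterable of (x, y) edge pixels.
    Returns a list of (rho, theta_index_in_degrees) cells whose vote count
    reaches vote_threshold; each cell corresponds to one straight line.
    """
    diag = int(math.hypot(width, height)) + 1  # rho ranges over [-diag, diag]
    acc = [[0] * n_theta for _ in range(2 * diag + 1)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t] += 1  # shift rho so the index is non-negative
    lines = []
    for r in range(2 * diag + 1):
        for t in range(n_theta):
            if acc[r][t] >= vote_threshold:
                lines.append((r - diag, t))
    return lines
```

Fifty collinear pixels on the vertical line x = 10 all vote for the same cell (rho = 10, theta = 0), so that cell crosses the threshold while scattered bins do not.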

Next, the slope-filtered image data is divided into predetermined regions, and the intersections of the extracted effective straight lines are searched for in each region. The straight-line components remaining after effective line extraction still include off-road contours (e.g., buildings adjacent to the road). In this step, the image is divided into left and right halves, and the operations are performed on each half.

More specifically, the image containing only the extracted straight lines is divided (331) into left and right images. An edge component inspection 333 is then performed by applying a slope filter to the left image data 332. For example, if the inspection shows that a component's slope is not between 20 and 80 degrees, it can be regarded as unnecessary data and removed (335); if the slope is between 20 and 80 degrees, the straight line is kept. In this way, the edges of the left side of the region of interest are extracted (334).

Likewise, an edge component inspection 337 is performed by applying a slope filter to the right image data 336. For example, if the inspection shows that a component's slope is not between 100 and 160 degrees, it can be regarded as unnecessary data and removed (335); if the slope is between 100 and 160 degrees, the straight line is kept. In this way, the edges of the right side of the region of interest are extracted (338). Through this process, both sides of the region of interest are obtained.
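The left/right slope filtering above can be sketched as follows. The 20-80 and 100-160 degree bands come from the text; the helper names are hypothetical, and angles are folded into [0, 180) so a segment's direction does not depend on its endpoint order.

```python
import math

def segment_angle(x1, y1, x2, y2):
    """Angle of a segment in degrees, folded into [0, 180)."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def filter_by_slope(segments, lo, hi):
    """Keep (x1, y1, x2, y2) segments whose slope lies in [lo, hi] degrees."""
    return [s for s in segments if lo <= segment_angle(*s) <= hi]
```

Applying the left-half band (20, 80) keeps a 45-degree segment and rejects a horizontal one; the right-half band (100, 160) keeps a 135-degree segment.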

Next, the slope-filtered image data is divided into predetermined regions, and the intersections of the extracted effective straight lines are searched for in each region.

The region containing the most intersections is found, the average of those intersections is computed to obtain the vanishing point, and the straight line horizontal to the image data through the vanishing point is designated as the upper side of the region of interest.

In other words, the upper side of the region of interest must be found after the slope-filtering operation above. To this end, in an embodiment of the present invention, the image is divided into nine regions and the intersections of the calculated straight lines are searched for in each region. The vanishing point can be obtained (341) by finding the region with the most intersections and averaging them. The straight line horizontal to the image through the vanishing point is designated as the upper side of the region of interest. In this way, corner data of the upper side of the region of interest is obtained (342).
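The vanishing-point search over nine regions can be sketched as follows: intersect every pair of extracted lines, bucket the in-bounds intersections into a 3x3 grid, and average the points in the busiest cell. The function names are illustrative.

```python
def line_intersection(l1, l2):
    """Intersection of two infinite lines given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel lines have no intersection
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def vanishing_point(lines, width, height, grid=3):
    """Average the intersections falling in the busiest grid x grid cell."""
    cells = {}
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = line_intersection(lines[i], lines[j])
            if p is None or not (0 <= p[0] < width and 0 <= p[1] < height):
                continue
            key = (int(p[0] * grid / width), int(p[1] * grid / height))
            cells.setdefault(key, []).append(p)
    if not cells:
        return None
    pts = max(cells.values(), key=len)  # the region with the most intersections
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

Three lines that all pass through (50, 50) in a 100x100 image yield that point as the vanishing point.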

In addition, the lowest straight-line component of the image may be acquired (361). The lower side of the region of interest is designated as the straight line along the lowest pixels of the image. In this way, corner data of the lower side of the region of interest is acquired (362).

Through this process, the dynamic region of interest of the image data acquired using the camera can be set (351).

FIG. 4 is a flowchart illustrating a process of recognizing and tracking a vehicle according to an embodiment of the present invention.

Recognizing and tracking vehicles may be performed by updating the motion history image with a silhouette at each given pixel coordinate and calculating the direction and motion components of the updated motion history image.

To measure the traffic volume as proposed in the present invention, a method of recognizing and tracking vehicles moving within the region of interest is needed. For this, a motion history image, a technique for tracking objects moving in an image, can be used.

To perform vehicle recognition and tracking, an update of the motion history 420 of the image data 410 in the region of interest is performed first. Given pixel coordinates (x, y), the motion history is updated using the silhouette at each pixel, and these values are updated continuously. The motion history value mhi(x, y) is defined by the following Equations (1) and (2).

mhi(x, y) = timestamp, if silhouette(x, y) ≠ 0
Equation (1)

mhi(x, y) = 0, if silhouette(x, y) = 0 and mhi(x, y) < (timestamp - duration); otherwise mhi(x, y) keeps its previous value
Equation (2)

The silhouette acts as a kind of mask. In other words, the motion history update is performed according to the silhouette mask projection value at each pixel (x, y).

If the silhouette mask projection value at a pixel, i.e., silhouette(x, y), is non-zero, there is motion at that pixel. In this case, mhi(x, y) is set to the current time (or current frame): mhi(x, y) = timestamp.

On the other hand, if silhouette(x, y) is 0 and the last motion occurred long ago, mhi(x, y) becomes 0. How far back motion is still considered is determined by the duration value, which is set to 3 in the present invention. When silhouette(x, y) is 0, the cases are divided according to the predetermined duration value:

If silhouette(x, y) is 0 and mhi(x, y) < (timestamp - duration), then mhi(x, y) = 0. If silhouette(x, y) is 0 and mhi(x, y) >= (timestamp - duration), then mhi(x, y) keeps its previous value.
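The update rule of Equations (1) and (2) can be written directly as code. This is a minimal sketch over a 2-D list; OpenCV's motion templates module provides an equivalent built-in operation.

```python
def update_mhi(mhi, silhouette, timestamp, duration=3):
    """Per-pixel motion-history update following Equations (1) and (2).

    mhi, silhouette: 2-D lists of the same shape.  The mhi is updated in
    place and returned for convenience.
    """
    h, w = len(mhi), len(mhi[0])
    for y in range(h):
        for x in range(w):
            if silhouette[y][x] != 0:
                mhi[y][x] = timestamp            # motion now: record current time
            elif mhi[y][x] < timestamp - duration:
                mhi[y][x] = 0                    # motion too old: clear
            # otherwise: silhouette is 0 but motion is recent, keep the value
    return mhi
```

With timestamp 10 and duration 3, a moving pixel is stamped 10, a pixel last active at time 5 is cleared, and one last active at time 9 is kept.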

Next, the motion gradient can be calculated (430).

The direction of motion can be calculated from the result of the motion history update. This value is called the orientation. The orientation takes a value between 0 and 360 degrees and is calculated through Equation (3).

orientation(x, y) = arctan((∂mhi/∂y) / (∂mhi/∂x)), converted to degrees in [0, 360)
Equation (3)
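Assuming Equation (3) is the usual gradient-direction formula for motion history images (the equation itself survives only as an image in the original), the orientation at one interior pixel can be sketched with central differences:

```python
import math

def motion_orientation(mhi, x, y):
    """Gradient direction of the motion history at interior pixel (x, y).

    Central differences approximate the partial derivatives of mhi; the
    result is an angle in degrees, normalized into [0, 360).
    """
    gx = (mhi[y][x + 1] - mhi[y][x - 1]) / 2.0
    gy = (mhi[y + 1][x] - mhi[y - 1][x]) / 2.0
    return math.degrees(math.atan2(gy, gx)) % 360.0
```

A history that increases along x gives orientation 0 degrees; one that increases along y gives 90 degrees.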

Next, the motion segments can be calculated (440).

Although the motion history image has passed through the two steps Update Motion History 420 and Calculate Motion Gradient 430 above, it is still a single image. The image must be separated into individual objects: values within a contiguous area are recognized as one object and separated out. The image is divided based on a threshold value, typically 0.5, or sometimes larger. The result is a set of separated objects.
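The separation into objects can be sketched as a connected-components pass over recently updated pixels. The 0.5 threshold follows the text; the function name and the exact "recent" test are illustrative assumptions.

```python
from collections import deque

def segment_motion(mhi, timestamp, seg_thresh=0.5):
    """Group recently moving pixels (timestamp - mhi <= seg_thresh) into objects.

    Returns a list of objects, each a list of (y, x) pixel coordinates.
    """
    h, w = len(mhi), len(mhi[0])
    recent = [[mhi[y][x] > 0 and timestamp - mhi[y][x] <= seg_thresh
               for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    objects = []
    for sy in range(h):
        for sx in range(w):
            if recent[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    # 4-connected neighbours belong to the same object
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and recent[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                objects.append(comp)
    return objects
```

Two disconnected clusters of fresh pixels come back as two separate objects.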

FIG. 5 is a diagram for explaining a process of calculating traffic density according to an embodiment of the present invention.

The step of calculating the traffic density calculates the average travel time, the average travel speed, and the traffic density per unit time using the number of vehicles passing during that time, the measurement section distance, and similar quantities.

The number of vehicles passing on the road can be determined through the region-of-interest designation and the vehicle recognition and tracking method described above. To measure traffic volume, the traffic density per minute must be calculated. As shown in FIG. 5, let Tm be the number of vehicles passing per minute and D the distance of the measurement section; from these, the average travel time Pt and the average travel speed Sa can be calculated. The traffic density per minute Dt can then be calculated as shown in Equation (4).

[The definitions of D, Sa, and Tm, and Equation (4) itself, appear only as images in the original document.]
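Since Equation (4) survives only as an image, the following sketch assumes the textbook flow relations Sa = D / Pt and Dt = Tm / Sa; treat these formulas as an assumption, not the patent's exact definition, and the function name as illustrative.

```python
def traffic_metrics(vehicles_per_minute, section_distance_m, avg_travel_time_s):
    """Hedged reconstruction of the per-minute traffic metrics.

    Assumes: average speed Sa = D / Pt, density Dt = Tm / Sa (flow divided
    by speed), where Tm is vehicles per minute, D the section distance in
    metres, and Pt the average travel time in seconds.
    """
    sa = section_distance_m / avg_travel_time_s  # average travel speed (m/s)
    dt = vehicles_per_minute / sa                # per-minute density estimate
    return sa, dt
```

For a 100 m section traversed in 10 s on average by 30 vehicles per minute, this gives Sa = 10 m/s and Dt = 3 under the stated assumptions.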

Using this, the traffic density per minute for the left and right roads in FIG. 5 can be calculated; the worked figures for the left and right roads appear only as images in the original document.

As described above, the present invention reuses existing CCTV or uses inexpensive equipment such as a webcam, so the implementation cost is lower than that of existing traffic measurement systems. The software is designed to run on a conventional computer, but using a stick PC can also save space.

In addition, since the region of interest is calculated dynamically from the image data, traffic volume can be measured on straight roads without designing a customized system for each road environment.

FIG. 6 is a view for explaining a traffic volume measurement system using computer vision according to an embodiment of the present invention.

The traffic measurement system 600 according to the present embodiment may include a processor 610, a bus 620, a network interface 630, a memory 640, and a database 650. The memory 640 may include an operating system 641 and a traffic volume measurement routine 642. The processor 610 may include a region-of-interest designation unit 611, a vehicle recognition and tracking unit 612, and a calculation unit 613. In other embodiments, the traffic volume measurement system 600 may include more components than those shown in FIG. 6; however, most prior art components need not be illustrated explicitly. For example, the traffic measurement system 600 may include other components such as a display or a transceiver.

The memory 640 may be a computer-readable recording medium and may include a permanent mass storage device such as random access memory (RAM), read-only memory (ROM), or a disk drive. The memory 640 may store program code for the operating system 641 and the traffic volume measurement routine 642. These software components may be loaded from a computer-readable recording medium separate from the memory 640 using a drive mechanism (not shown), such as a floppy drive, disk, tape, DVD/CD-ROM drive, or memory card. In other embodiments, the software components may be loaded into the memory 640 via the network interface 630 rather than from a computer-readable recording medium.

The bus 620 may enable communication and data transfer between the components of the traffic measurement system 600. The bus 620 may be configured using a high-speed serial bus, a parallel bus, a Storage Area Network (SAN), and/or other suitable communication technology.

Network interface 630 may be a computer hardware component for connecting traffic measurement system 600 to a computer network. The network interface 630 may connect the traffic measurement system 600 to the computer network via a wireless or wired connection.

The database 650 can store and maintain all information necessary for measuring traffic volume using computer vision. Although FIG. 6 shows the database 650 included in the traffic measurement system 600, the present invention is not limited thereto: depending on the system implementation method or environment, the database may be omitted, or may exist as an external database built on another system.

The processor 610 may be configured to process instructions of a computer program by performing the basic arithmetic, logic, and input/output operations of the traffic volume measurement system 600. The instructions may be provided to the processor 610 by the memory 640 or the network interface 630 via the bus 620. The processor 610 may be configured to execute program code for the region-of-interest designation unit 611, the vehicle recognition and tracking unit 612, and the calculation unit 613. Such program code may be stored in a recording device such as the memory 640.

The region-of-interest designation unit 611, the vehicle recognition and tracking unit 612, and the calculation unit 613 may be configured to perform steps 110 to 130 of FIG. 1.

The traffic volume measurement system 600 may include a region of interest designation unit 611, a vehicle recognition and tracking unit 612, and a calculation unit 613.

The region-of-interest designation unit 611 extracts the contours of the image data acquired through the camera using computer vision and divides the image data to designate the region of interest.

The region-of-interest designation unit 611 first extracts contour lines from the image data and extracts, from the extracted contour lines, only the effective straight lines that outline the road.

Then, the image data containing the extracted effective lines is divided into left and right halves, and the operations are performed by applying a slope filter to each half. The slope-filtered image data is divided into predetermined regions, and the intersections of the extracted effective straight lines are searched for in each region.

First, contour lines are extracted from the image data. In the present invention, a Canny edge search can be performed on the image data acquired through image capture. To designate only the road area as the region of interest, the outline of the image must first be calculated, and a Canny edge search can be used for this. For example, considering that road boundaries are long, edge components smaller than 50 pixels can be regarded as unnecessary data and removed.

Then, edge components of 50 pixels or more are extracted as effective straight lines. The edge components remaining after the Canny edge search may or may not belong to the road outline. To leave only the outline of the road, the non-straight lines must be removed so that only straight lines remain. For example, a Hough transform can be used to extract only straight lines.

Next, the slope-filtered image data is divided into predetermined regions, and the intersections of the extracted effective straight lines are searched for in each region. The straight-line components remaining after effective line extraction still include off-road contours (e.g., buildings adjacent to the road). Here, the image is divided into left and right halves, and the operations are performed on each half.

More specifically, the image containing only the extracted straight lines is divided into left and right. An edge component inspection is then performed by applying a slope filter to the left image data. For example, if a component's slope is not between 20 and 80 degrees, it can be regarded as unnecessary data and removed; if the slope is between 20 and 80 degrees, the straight line is kept. In this way, the edges of the left side of the region of interest are extracted.

Likewise, an edge component inspection is performed by applying a slope filter to the right image data. For example, if a component's slope is not between 100 and 160 degrees, it can be regarded as unnecessary data and removed; if the slope is between 100 and 160 degrees, the straight line is kept. In this way, the edges of the right side of the region of interest are extracted. Through this process, both sides of the region of interest are obtained.

Next, the slope-filtered image data is divided into predetermined regions, and the intersections of the extracted effective straight lines are searched for in each region.

The region containing the most intersections is found, the average of those intersections is computed to obtain the vanishing point, and the straight line horizontal to the image data through the vanishing point is designated as the upper side of the region of interest.

In other words, the upper side of the region of interest must be found after the slope-filtering operation above. To this end, in an embodiment of the present invention, the image is divided into nine regions and the intersections of the calculated straight lines are searched for in each region. The vanishing point can be obtained by finding the region with the most intersections and averaging them. The straight line horizontal to the image through the vanishing point is designated as the upper side of the region of interest. In this way, corner data of the upper side of the region of interest can be obtained.

The lowest straight-line component of the image can also be obtained. The lower side of the region of interest is designated as the straight line along the lowest pixels of the image. In this way, corner data of the lower side of the region of interest can be obtained. Through this process, the dynamic region of interest of the image data acquired using the camera can be set.

The vehicle recognition and tracking unit 612 recognizes and tracks vehicles moving within the designated region of interest using a motion history image. Given the coordinates of a pixel, the unit updates the motion history image by using the silhouette, and then calculates the direction value and motion elements of the motion from the updated motion history image.

To measure the traffic volume proposed in the present invention, a method of recognizing and tracking a moving vehicle within the region of interest is needed. For this purpose, a motion history image, a method of tracking an object moving in the image, can be used.

To perform vehicle recognition and tracking, an update of the motion history of the image data in the region of interest is performed first. When the coordinates of a pixel are given as (x, y), the motion history is updated at each pixel (x, y) using the silhouette, and these values are updated continuously. The motion history value mhi(x, y) is defined by equations (1) and (2) below.

The silhouette acts as a kind of mask: the motion history update at each pixel (x, y) is performed according to the silhouette mask projection value.

If the silhouette mask projection value at pixel (x, y), i.e., silhouette(x, y), is non-zero, there is motion at that pixel. In this case, mhi(x, y) is set to the current time (or current frame): mhi(x, y) = timestamp.

On the other hand, if silhouette(x, y) is 0 and the motion occurred long ago, mhi(x, y) becomes 0. How long ago a motion may be before it is discarded is determined by the duration value, which is set to 3 in the present invention. Thus, when silhouette(x, y) is 0, the update is divided according to the predetermined duration value.

If silhouette(x, y) is 0 and mhi(x, y) < (timestamp − duration), then mhi(x, y) = 0; if silhouette(x, y) is 0 and mhi(x, y) ≥ (timestamp − duration), then mhi(x, y) keeps its previous value, mhi(x, y) = mhi(x, y).
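The two zero-silhouette rules, together with the non-zero case above, can be written directly as code. This is a sketch: the array-based formulation and the name `update_mhi` are illustrative, not from the patent.

```python
import numpy as np

def update_mhi(mhi, silhouette, timestamp, duration=3.0):
    """Per-pixel motion history update following equations (1)-(2) in the text.

    Where the silhouette mask is non-zero there is motion, so mhi takes the
    current timestamp. Where it is zero, entries older than `duration`
    (3 in the present invention) are cleared; recent ones are kept as-is.
    """
    mhi = mhi.copy()
    moving = silhouette != 0
    mhi[moving] = timestamp                           # motion at this pixel
    stale = (~moving) & (mhi < timestamp - duration)  # motion too long ago
    mhi[stale] = 0.0                                  # forget it
    return mhi                                        # remaining pixels keep their value
```

Calling this once per frame with the current silhouette and timestamp keeps the history continuously updated, as the text describes.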

Next, a motion gradient calculation (Calculate Motion Gradient) can be performed.

The direction value of the motion can be calculated from the result of the motion history update. This value is called the orientation; it takes a value between 0 and 360 degrees and is calculated through equation (3).
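Equation (3) itself is not reproduced in the text, so the sketch below assumes the orientation is the angle of the spatial gradient of the motion history image, mapped to the [0, 360) degree range stated above.

```python
import numpy as np

def motion_orientation(mhi):
    """Per-pixel motion direction from the motion history image.

    The orientation (equation (3) in the text) is taken here as the angle of
    the spatial gradient of mhi, mapped to [0, 360) degrees. The concrete
    gradient kernel is not specified in the patent, so plain finite
    differences are assumed.
    """
    gy, gx = np.gradient(mhi.astype(float))
    return np.degrees(np.arctan2(gy, gx)) % 360.0   # 0-360 degree range
```

A history that increases from left to right yields 0 degrees; one that increases from top to bottom yields 90 degrees.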

Next, calculation of motion elements (Calculate Motion Segment) can be performed.

Although the two steps of Update Motion History and Calculate Motion Gradient have been performed, the motion history image is still a single image; it must be separated into individual objects. Values within a contiguous area are recognized as one object and separated out, and the image is divided based on a certain threshold value. This value is typically 0.5, and sometimes larger. Each separated object is obtained as the result.
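One way to realize this separation step is sketched below. The patent does not specify the segmentation algorithm, so simple thresholding against the history age plus 4-connected flood-fill labelling is assumed here.

```python
import numpy as np

def segment_motion(mhi, timestamp, threshold=0.5):
    """Split the motion history image into individual moving objects.

    Recent-motion pixels (within `threshold` of `timestamp`; 0.5 being the
    typical value mentioned in the text) are grouped by 4-connectivity, and
    each connected region is returned as one boolean object mask. The
    flood-fill labelling is a stand-in for the unspecified segmentation step.
    """
    recent = mhi >= (timestamp - threshold)
    labels = np.zeros(mhi.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(recent)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:                      # flood fill one object
            i, j = stack.pop()
            if not (0 <= i < mhi.shape[0] and 0 <= j < mhi.shape[1]):
                continue
            if labels[i, j] or not recent[i, j]:
                continue
            labels[i, j] = current
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return [labels == k for k in range(1, current + 1)]
```

Two vehicles whose silhouettes do not touch produce two disjoint masks, which can then be counted and tracked individually.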

The calculation unit 613 calculates the traffic density for the vehicles: using the number of vehicles passing within a predetermined time, the measurement section distance, and so on, it computes the average travel time, the average travel speed, and the traffic density per predetermined time.

The number of vehicles passing on the road can be obtained through the region-of-interest designation and the vehicle recognition and tracking method within the region of interest. To measure traffic volume, the traffic density per minute must be calculated. As shown in FIG. 5, letting the number of vehicles passing per minute be Tm and the measurement section distance be D, the average travel time Pt and the average travel speed Sa can be calculated. (The expressions for the measurement section distance, the average travel speed Sa, and the number of passing vehicles per minute Tm are given as equation images, Figures 112016009345754-pat00010 through 112016009345754-pat00012, in the original document.) The traffic density per minute Dt can then be calculated as shown in equation (4).

The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device is sometimes described as being used singly, but those skilled in the art will recognize that it may comprise a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments or those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. For example, suitable results may be achieved even if the described techniques are performed in an order different from the described method, and/or if components of the described systems, structures, devices, or circuits are combined in a different form or replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (12)

A method for measuring traffic volume,
Extracting an outline of image data acquired through a camera using a computer vision, and dividing the image data to designate a region of interest;
Recognizing and tracking vehicles moving within the designated area of interest using a motion history image; And
Calculating a traffic density according to the vehicles
Wherein the step of extracting the contour of the image data acquired through the camera using the computer vision and dividing the image data to designate the region of interest includes:
Extracting an outline to calculate only an outline of the image data;
Extracting only an effective linear line for extracting only a contour line from the extracted contour line;
Dividing the image data including the extracted effective line into left and right halves and applying an oblique filter to each of the regions to perform an arithmetic operation; And
Dividing the image data to which the gradient filter is applied into a predetermined region, and finding an intersection of the extracted effective linear lines for each region
Dividing the image data to which the gradient filter is applied into predetermined regions and finding the intersections of the extracted effective linear lines for each region,
Acquiring a vanishing point by finding the region having the largest number of intersection points and calculating the average value of those intersection points, acquiring corner data of the upper region of interest by designating, as the upper side of the region of interest, a linear line horizontal to the image data referenced to the vanishing point, and acquiring corner data of the lower region of interest by designating a linear line of the lowest pixels of the image data as the lower side of the region of interest, thereby setting, on a straight road, a dynamic region of interest that does not require the design of a customized system according to the road environment,
How to measure traffic volume.
delete
The method according to claim 1,
Wherein the step of extracting an outline to calculate only the outline of the image data comprises:
Using Canny edge detection to remove edge element values smaller than a predetermined number of pixels
How to measure traffic volume.
The method according to claim 1,
Wherein the step of extracting only an effective linear line for extracting only an outline of a road among the extracted outlines comprises:
Removing nonlinear lines using the Hough transform so that only the contour of the road remains
How to measure traffic volume.
delete
The method according to claim 1,
Wherein the step of recognizing and tracking vehicles moving within the designated area of interest using the motion history image comprises:
Given the coordinates for the pixel, the motion history image is updated by using a silhouette, and the direction value and motion elements of the motion of the updated motion history image are calculated
How to measure traffic volume.
The method according to claim 1,
Wherein calculating the traffic density according to the vehicles comprises:
Calculating the average travel time, the average travel speed, and the traffic density per predetermined time by using the number of vehicles passing within the predetermined time, the measurement section distance, and the like
How to measure traffic volume.
A traffic volume measurement system comprising:
A region of interest designating a region of interest by extracting contours of image data acquired through a camera using a computer vision and dividing the image data;
A vehicle recognition and tracking unit for recognizing and tracking vehicles moving within the designated area of interest using a motion history image; And
A calculation unit for calculating a traffic density according to the vehicles;
The attention area designation unit,
Extracting an outline to calculate only the outline of the image data, and extracting only an effective linear line from the extracted outline,
Dividing the image data including the extracted effective line into left and right halves, performing an arithmetic operation by applying a gradient filter to each region, dividing the image data to which the gradient filter is applied into predetermined regions, and finding the intersections of the extracted effective linear lines for each region,
Acquiring a vanishing point by finding the region having the largest number of intersection points and calculating the average value of those intersection points, acquiring corner data of the upper region of interest by designating, as the upper side of the region of interest, a linear line horizontal to the image data referenced to the vanishing point, and acquiring corner data of the lower region of interest by designating a linear line of the lowest pixels of the image data as the lower side of the region of interest, thereby setting, on a straight road, a dynamic region of interest that does not require the design of a customized system according to the road environment,
Traffic volume measurement system.
delete
delete
The system of claim 8,
The vehicle recognizing and tracking unit,
Given the coordinates for the pixel, the motion history image is updated by using a silhouette, and the direction value and motion elements of the motion of the updated motion history image are calculated
Traffic volume measurement system.
The system of claim 8,
The calculation unit may calculate,
the average travel time, the average travel speed, and the traffic density per predetermined time, by using the number of vehicles passing within the predetermined time, the measurement section distance, and the like
Traffic volume measurement system.
KR1020160010496A 2016-01-28 2016-01-28 Method and System for Traffic Measurement using Computer Vision KR101813783B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160010496A KR101813783B1 (en) 2016-01-28 2016-01-28 Method and System for Traffic Measurement using Computer Vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160010496A KR101813783B1 (en) 2016-01-28 2016-01-28 Method and System for Traffic Measurement using Computer Vision

Publications (2)

Publication Number Publication Date
KR20170090081A KR20170090081A (en) 2017-08-07
KR101813783B1 true KR101813783B1 (en) 2017-12-29

Family

ID=59654014

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160010496A KR101813783B1 (en) 2016-01-28 2016-01-28 Method and System for Traffic Measurement using Computer Vision

Country Status (1)

Country Link
KR (1) KR101813783B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210010124A (en) 2019-07-19 2021-01-27 (주)유에이치알앤디 Traffic volume computation program using ai and operation method thereof


Also Published As

Publication number Publication date
KR20170090081A (en) 2017-08-07

Similar Documents

Publication Publication Date Title
EP3581890B1 (en) Method and device for positioning
US10261574B2 (en) Real-time detection system for parked vehicles
Mandal et al. Object detection and tracking algorithms for vehicle counting: a comparative analysis
JP6650657B2 (en) Method and system for tracking moving objects in video using fingerprints
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
CN109919008A (en) Moving target detecting method, device, computer equipment and storage medium
Azevedo et al. Automatic vehicle trajectory extraction by aerial remote sensing
WO2013186662A1 (en) Multi-cue object detection and analysis
Nguyen et al. Compensating background for noise due to camera vibration in uncalibrated-camera-based vehicle speed measurement system
JP2014071902A5 (en)
KR20050085842A (en) Method and device for tracing moving object in image
JP2018081545A (en) Image data extraction device and image data extraction method
CN107808524B (en) Road intersection vehicle detection method based on unmanned aerial vehicle
US11069071B1 (en) System and method for egomotion estimation
US20200279395A1 (en) Method and system for enhanced sensing capabilities for vehicles
CN110853085A (en) Semantic SLAM-based mapping method and device and electronic equipment
JP2009245042A (en) Traffic flow measurement device and program
Rezaei et al. Traffic-net: 3d traffic monitoring using a single camera
Rasib et al. Pixel level segmentation based drivable road region detection and steering angle estimation method for autonomous driving on unstructured roads
Park et al. Vision-based surveillance system for monitoring traffic conditions
KR102414632B1 (en) Method for determining the location of a fixed object using multiple observation information
Głowacz et al. Video detection algorithm using an optical flow calculation method
CN112447060A (en) Method and device for recognizing lane and computing equipment
KR101813783B1 (en) Method and System for Traffic Measurement using Computer Vision
Guerrieri et al. Traffic flow variables estimation: An automated procedure based on moving observer method. potential application for autonomous vehicles

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant