CN108648463B - Method and system for detecting vehicles in intersection traffic video - Google Patents

Method and system for detecting vehicles in intersection traffic video

Info

Publication number
CN108648463B
Authority
CN
China
Prior art keywords
time period
current time
video frame
gaussian
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810455711.6A
Other languages
Chinese (zh)
Other versions
CN108648463A (en)
Inventor
雷帮军
徐光柱
李华文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Jiugan Technology Co ltd
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University (CTGU)
Priority to CN201810455711.6A
Publication of CN108648463A
Application granted
Publication of CN108648463B
Legal status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method and a system for detecting vehicles in intersection traffic videos. The method comprises: for a current time period used for vehicle detection in a traffic video of a target intersection, performing background learning on the video frames corresponding to the current time period to obtain a background model corresponding to the current time period; acquiring the video frames corresponding to the time period after the current time period in the target intersection traffic video, performing vehicle detection on those video frames using the background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for vehicle detection. The durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection. The invention prevents scenes of vehicles waiting at a red light from being mistaken for background, improving the correctness of both background detection and vehicle detection.

Description

Method and system for detecting vehicles in intersection traffic video
Technical Field
The invention belongs to the technical field of background detection, and particularly relates to a method and a system for detecting vehicles in intersection traffic videos.
Background
The first step of moving-object detection and tracking is to extract the moving object from the video images and obtain its characteristic information, such as position, shape, contour, and color; that is, to detect the moving object. Moving-object detection also establishes the initial target template model for subsequent moving-object tracking. Whether a moving object can be detected correctly therefore has a significant influence on the subsequent tracking.
At present, moving objects are generally detected by the background elimination method, which compares the current frame of an image sequence with a background reference model; the background frame used is not an original frame taken directly from the video but is obtained by algorithmic updating. In general, the background elimination method comprises the steps of background modeling, background updating, background elimination, and post-processing. The process of initializing the background model is called background modeling, and it largely determines the response speed and dynamic range of the background elimination method in subsequent processing. Background updating corrects the parameters of the background model using each video frame, reflecting changes in the environment, i.e., whether motion exists in the background. Comparing the current frame with the corrected background model and extracting the moving object is called background elimination. Post-processing refines the extracted moving object according to the requirements of the video application. The key to applying background elimination lies not in comparing the current video frame with the background model, but in maintaining and updating the background.
The multi-model background elimination method reflects the objective real world better than a single-model one. In most video applications, each background pixel can be approximated by one or more Gaussian distributions, so the Gaussian mixture background elimination model is the basic model used by most moving-object detection methods and can effectively adapt to dynamic background changes. In a multi-peak Gaussian distribution model, each pixel of the image is modeled as a superposition of several Gaussian distributions with different weights; each Gaussian distribution corresponds to a state that may produce the color presented by the pixel, and the weight and distribution parameters of each Gaussian are updated over time. When processing color images, the R, G, B color channels of a pixel are assumed to be mutually independent and to share the same variance. For a random variable X with observation data set {x_1, x_2, ..., x_N}, where x_t = (r_t, g_t, b_t) is the sample of the pixel at time t, the mixture-of-Gaussians probability density function obeyed by a single sample point x_t is:
$$P(x_t) = \sum_{i=1}^{k} \omega_{i,t}\, \eta\left(x_t, \mu_{i,t}, \Sigma_{i,t}\right)$$

$$\eta\left(x_t, \mu_{i,t}, \Sigma_{i,t}\right) = \frac{1}{(2\pi)^{3/2}\left|\Sigma_{i,t}\right|^{1/2}} \exp\left(-\frac{1}{2}\left(x_t - \mu_{i,t}\right)^{T} \Sigma_{i,t}^{-1}\left(x_t - \mu_{i,t}\right)\right)$$

$$\Sigma_{i,t} = \sigma_{i,t}^{2} I$$
wherein k is the total number of distribution modes, η(x_t, μ_{i,t}, Σ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, Σ_{i,t} is its covariance matrix, σ_{i,t}² is the variance, I is the three-dimensional identity matrix, and ω_{i,t} is the weight of the i-th Gaussian distribution at time t.
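As a concrete illustration of these formulas (a minimal sketch, not the patent's implementation; the function name, array layout, and example values are assumptions), the density of one pixel sample under its k-layer mixture can be evaluated as:

```python
import numpy as np

def gmm_pixel_pdf(x_t, weights, means, variances):
    """Mixture density P(x_t) for one pixel sample x_t = (r_t, g_t, b_t),
    with per-layer covariance sigma_{i,t}^2 * I (independent R, G, B
    channels sharing one variance), as in the formulas above."""
    x_t = np.asarray(x_t, dtype=float)
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        d = x_t - np.asarray(mu, dtype=float)
        norm = (2.0 * np.pi * var) ** -1.5        # (2*pi)^{-3/2} |Sigma|^{-1/2}
        p += w * norm * np.exp(-0.5 * (d @ d) / var)
    return p

# Example: a pixel near the first layer's mean dominates the density.
print(gmm_pixel_pdf([120, 128, 125],
                    weights=[0.6, 0.4],
                    means=[[118, 126, 124], [60, 60, 60]],
                    variances=[36.0, 100.0]))
```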
In an actual scene, the background changes continuously, mainly due to changes in light, so the Gaussian model described above also needs to be updated continuously. The purpose of the update is to reflect recent scene changes, which is appropriate for a gradually changing scene. However, daily intersection traffic is in fact a gradual lighting change plus a periodic change, as shown in fig. 1. With the usual progressive update mode, after the queued scene of fig. 1c persists for a while, the learned background will look like fig. 1c, whereas the background actually needed for detecting the vehicles in fig. 1b is that of fig. 1a; the subsequent vehicle detection is therefore wrong.
Disclosure of Invention
In order to overcome the problem of vehicle detection errors in the intersection traffic video or at least partially solve the problem, the invention provides a method and a system for detecting vehicles in the intersection traffic video.
According to a first aspect of the invention, a method for detecting vehicles in intersection traffic videos is provided, which comprises the following steps:
for a current time period for vehicle detection in a traffic video of a target intersection, performing background learning on a video frame corresponding to the current time period to obtain a background model corresponding to the current time period;
acquiring a video frame corresponding to a time period after the current time period in the traffic video of the target intersection, performing vehicle detection on the video frame corresponding to the time period after the current time period by using a background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for performing vehicle detection;
and the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection.
Specifically, the step of performing background learning on the video frame corresponding to the current time period specifically includes:
and performing background learning on the video frame corresponding to the current time period by adopting a Gaussian mixture model.
Specifically, the step of performing background learning on the video frame corresponding to the current time period by using the gaussian mixture model specifically includes:
for any Gaussian layer in the background model most recently acquired before the current time period, if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is smaller than a preset multiple of the standard deviation of the Gaussian layer, the pixel matches the Gaussian layer, and the mean, variance, and weight of the Gaussian layer are updated according to the matched pixels;

if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is greater than or equal to the preset multiple of the standard deviation of the Gaussian layer, the pixel does not match the Gaussian layer, and only the weight of the Gaussian layer is updated;
and if the pixels matched with the Gaussian layer do not exist in the video frame corresponding to the current time period, resetting the Gaussian layer.
Specifically, updating the mean, variance, and weight of the gaussian layer specifically includes:
if a pixel matches the Gaussian layer, acquiring the difference between the characteristic value of the pixel and the mean of the Gaussian layer;

obtaining a new mean and a new variance according to the sum of these differences and the sum of their squares; taking the number of pixels matched with the Gaussian layer as the new weight;
and updating the Gaussian layer according to the new average value, the new variance and the new weight.
In particular, the new mean μ and the new variance σ² are obtained by the following formulas:

$$\mu = \mu' + \frac{p}{n}$$

$$\sigma^2 = \frac{q}{n} - \left(\frac{p}{n}\right)^2$$

wherein p is the sum of the differences, q is the sum of the squares of the differences, n is the number of pixels matched with the Gaussian layer, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
Specifically, the step of performing background learning on the video frame corresponding to the current time period and acquiring the background model corresponding to the current time period includes:
dividing the current time period into a plurality of sub-time periods; for any Gaussian layer in the background model most recently acquired before any sub-time period, if pixels in the video frames corresponding to that sub-time period match the Gaussian layer, acquiring the difference between the characteristic value of each matched pixel and the mean of the Gaussian layer, together with the sum and the sum of squares of these differences;
accumulating the sum of the difference values and the sum of squares corresponding to the sub-time periods respectively;
acquiring a new average value and a new variance according to the accumulated result of the sum of the differences and the accumulated result of the sum of the squares of the differences; taking the number of pixels matched with the Gaussian layer in all the sub-time periods as new weight;
and updating the Gaussian layer according to the new average value, the new variance and the new weight.
In particular, the new mean μ and the new variance σ² are obtained by the following formulas:

$$\mu = \mu' + \frac{P}{N}$$

$$\sigma^2 = \frac{Q}{N} - \left(\frac{P}{N}\right)^2$$

wherein P is the accumulated sum of the differences, Q is the accumulated sum of the squares of the differences, N is the number of pixels matched with the Gaussian layer in all the sub-time periods, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
Specifically, one or more of the sub-time periods overlap between the current time period and a time period before the current time period; one or more of the sub-time periods overlap between the current time period and a time period subsequent to the current time period.
Specifically, the step of using the background model corresponding to the current time period to detect the vehicle in the video frame corresponding to the next time period specifically includes:
and using the background model corresponding to the current time period to perform vehicle detection on those video frames, among the video frames corresponding to the next time period, on which vehicle detection has not yet been performed.
According to another aspect of the present invention, there is provided a vehicle detection system in intersection traffic video, comprising:
the acquisition unit is used for carrying out background learning on a video frame corresponding to a current time period for vehicle detection in a traffic video of a target intersection and acquiring a background model corresponding to the current time period;
the detection unit is used for acquiring a video frame corresponding to a time period after the current time period in the traffic video of the target intersection, using a background model corresponding to the current time period to detect vehicles in the video frame corresponding to the time period after the current time period, and using the time period after the current time period as the next current time period for vehicle detection;
and the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection.
The invention provides a method and a system for detecting vehicles in intersection traffic videos. The method divides the time axis of the target intersection traffic video into a plurality of time periods and learns the background within each period. A background model need not be maintained throughout each period; the background model corresponding to each period is computed once, at the end of that period, and the background model learned in each period is used for vehicle detection on the video frames corresponding to the following period. Since the duration of each period is more than twice the preset red-light waiting duration of the target intersection, scenes of waiting vehicles are prevented from being falsely detected as background, and the correctness of background detection and vehicle detection is improved.
Drawings
FIG. 1 shows three road conditions in an intersection traffic video: fig. 1a is a scene with no vehicles, fig. 1b is a scene with vehicles passing through, and fig. 1c is a scene with vehicles waiting in line;
FIG. 2 is a schematic overall flow chart of a method for detecting vehicles in an intersection traffic video according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating background model updating in a method for detecting vehicles in an intersection traffic video according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating background model updating in a method for detecting vehicles in intersection traffic video according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of an overall structure of a vehicle detection system in an intersection traffic video according to an embodiment of the present invention;
fig. 6 is a schematic view of an overall structure of a vehicle detection device in an intersection traffic video according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In an embodiment of the present invention, a method for detecting vehicles in an intersection traffic video is provided. Fig. 2 is a schematic overall flow chart of the method, which includes: S101, performing background learning on the video frames corresponding to a current time period used for vehicle detection in a traffic video of a target intersection, to obtain a background model corresponding to the current time period;
the target intersection traffic video refers to a shot video of the target intersection traffic. The target intersection traffic video may be a historical video of the target traffic intersection or a video of the target traffic intersection taken in real time. Because the intersection traffic condition can better reflect the actual traffic condition, vehicle detection is usually carried out on the intersection traffic video, the intersection vehicle passing rate and the occupancy are further calculated, and a basis is provided for traffic planning and adjustment. Due to the fact that the background is updated in real time, the situation that the vehicle waits under the red light condition is easily mistaken as the background, and therefore vehicle detection errors are caused. In the embodiment, the time axis of the traffic video of the target intersection is divided into a plurality of time periods, and the background is learned in each time period. The background model does not need to be established in each time period, and the background model corresponding to each time period only needs to be calculated at one time according to the video frame of each time period at the end of each time period. The current time period is a time period in the target route traffic video. The background learning of the video frame corresponding to the current time period refers to comparing the characteristics of the video frame corresponding to the current time period with a background model established before the current time period, and updating the background model according to the comparison result to obtain the background model corresponding to the current time period. The present embodiment does not limit the algorithm of background learning.
S102, acquiring the video frames corresponding to the time period after the current time period in the target intersection traffic video, performing vehicle detection on those video frames using the background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for vehicle detection; the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection.
Specifically, the background model learned in each time period is used for vehicle detection on the video frames corresponding to the following time period. The time lengths of the periods may be the same or different, and the periods may or may not overlap. As shown in fig. 3, assume the periods all have the same length A and that periods t, t+1, and t+2 do not overlap. Background learning is performed on the video frames corresponding to periods t, t+1, and t+2 respectively, yielding the corresponding background models M_t, M_{t+1}, and M_{t+2}. In period t, the background model M_{t-1} acquired in period t-1 is used for vehicle detection; in period t+1, the background model M_t acquired in period t is used; and in period t+2, the background model M_{t+1} acquired in period t+1 is used. The present embodiment does not limit the vehicle detection method. After the background model learned in the current period has been used for vehicle detection on the video frames of the following period, the following period becomes the current period, and the background learning and vehicle detection steps are executed iteratively, achieving real-time background learning and real-time vehicle detection.
The preset red-light waiting duration is the waiting time required under the preset red-light condition. To ensure that, within each time period, the time each vehicle does not occupy the intersection is longer than the time it occupies the intersection, the length of each time period is set to more than twice the preset red-light waiting duration of the target intersection (for example, with a 60 s red-light wait, each period must exceed 120 s). The occupation time is the time a vehicle stays at the intersection. A vehicle may stop at the intersection when the red light starts, or at some point after the red light starts and before the green light starts. When traffic is not congested, a vehicle stays at the intersection at most for the preset waiting duration and then drives away. Under congestion, however, a vehicle may still be at the intersection when the next red light starts and only leave afterwards, so the length of each time period needs to be more than twice the duration between two red lights.
In this embodiment, the time axis of the target intersection traffic video is divided into a plurality of time periods and the background is learned within each period; no background model needs to be maintained throughout a period, since the background model corresponding to each period is computed once at the end of that period. The background model learned in each period is used for vehicle detection on the video frames corresponding to the following period, and the duration of each period is more than twice the preset red-light waiting duration of the target intersection, so that scenes of waiting vehicles are prevented from being falsely detected as background and the correctness of background detection and vehicle detection is improved.
On the basis of the foregoing embodiment, the step of performing background learning on the video frame corresponding to the current time period in step S101 in this embodiment specifically includes: and performing background learning on the video frame corresponding to the current time period by adopting a Gaussian mixture model.
Specifically, the Gaussian mixture model is a model based on the Gaussian probability density function: it quantifies an observed quantity by decomposing its distribution into a weighted sum of several Gaussian components. The Gaussian mixture model uses several Gaussian models to represent the characteristics of each pixel in the image, and each pixel in the video frames corresponding to the current time period is matched against the Gaussian mixture model. If the match succeeds, the pixel is a background point; otherwise it is a foreground point. The Gaussian mixture model is updated according to the matching results to obtain the background model corresponding to the current time period.
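A minimal sketch of this matching decision (illustrative names; the squared-distance form of the 2.5σ test is an assumption consistent with the update rules described below):

```python
import numpy as np

def is_background(pixel, layers):
    """Return True if the pixel matches any Gaussian layer of its mixture.
    layers: iterable of (mu, var, weight), mu an RGB vector, var the
    shared per-channel variance; a non-matching pixel is foreground."""
    x = np.asarray(pixel, dtype=float)
    for mu, var, _w in layers:
        d = x - np.asarray(mu, dtype=float)
        if d @ d < 6.25 * var:     # within 2.5 sigma of the layer mean
            return True            # background point
    return False                   # foreground point (candidate vehicle)
```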
On the basis of the above embodiment, the step of performing background learning on the video frames corresponding to the current time period using the Gaussian mixture model specifically includes: for any Gaussian layer in the background model corresponding to the time period before the current time period, if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is smaller than a preset multiple of the standard deviation of the Gaussian layer, the pixel matches the Gaussian layer, and the mean, variance, and weight of the Gaussian layer are updated according to the matched pixels; if the difference is greater than or equal to the preset multiple of the standard deviation, the pixel does not match the Gaussian layer, and only the weight of the Gaussian layer is updated; and if no pixels in the video frames corresponding to the current time period match the Gaussian layer, the Gaussian layer is reset.
Specifically, when the time periods do not overlap, the background model most recently acquired before the current time period is the one learned in the time period immediately before it. When the time periods overlap, the overlapped portion of the previous time period has not finished learning, i.e., background modeling is not complete for the previous period, so the most recently acquired background model is not the one learned in the period immediately before the current one. Assume that the appearance of each pixel of each frame in the target intersection traffic video over time is represented by K Gaussian models, each described by its mean μ, variance σ², and weight w, and that the preset multiple is 2.5. If the input pixel I_t deviates from the Gaussian mean by less than 2.5 times the standard deviation σ, the pixel I_t matches the Gaussian layer, which is updated as:

$$\mu_t = (1-\alpha)\,\mu_{t-1} + \alpha I_t$$

$$\sigma_t^2 = (1-\alpha)\,\sigma_{t-1}^2 + \alpha(1-\alpha)\,(I_t - \mu_{t-1})^T (I_t - \mu_{t-1})$$

$$w_t = (1-\alpha)\,w_{t-1} + \alpha$$

where α is the learning rate. If the input pixel I_t deviates from the Gaussian mean by 2.5 standard deviations or more, the pixel I_t does not match the Gaussian layer, and only the weight is updated, i.e., w_t = (1-α) w_{t-1}. If no pixels match the Gaussian layer, the layer is reset, i.e., σ = σ_big, where σ_big is a preset maximum (σ_big = 144 for the RGB format), w = 0, and μ = I_t. The present embodiment does not limit the manner in which the Gaussian layer is updated. During initialization, all Gaussian layers are initialized to an impossible state, i.e., σ = σ_big, w = 0, μ = [-2σ, -2σ, -2σ]^T. In fact, μ can be initialized to any small negative value, as long as μ^T μ > 6.25 σ² is satisfied.
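These update, reset, and initialization rules can be sketched as follows (a hedged illustration: the value of α and the code organization are assumptions, while the thresholds and constants mirror the text above):

```python
import numpy as np

ALPHA = 0.005        # learning rate alpha (illustrative value)
SIGMA_BIG = 144.0    # preset maximum sigma for the RGB format, per the text

def update_layer(pixel, mu, var, w):
    """One online update of a Gaussian layer (mu: RGB vector, var: scalar
    sigma^2, w: weight) for an input pixel I_t."""
    x = np.asarray(pixel, dtype=float)
    d = x - mu
    if d @ d < 6.25 * var:                       # |I_t - mu| < 2.5 sigma: match
        mu = (1 - ALPHA) * mu + ALPHA * x
        var = (1 - ALPHA) * var + ALPHA * (1 - ALPHA) * (d @ d)
        w = (1 - ALPHA) * w + ALPHA
    else:                                        # no match: decay weight only
        w = (1 - ALPHA) * w
    return mu, var, w

def reset_layer(pixel):
    """Reset: sigma = sigma_big, w = 0, mu = I_t."""
    return np.asarray(pixel, dtype=float), SIGMA_BIG ** 2, 0.0

def init_layer():
    """'Impossible' initial state: mu^T mu = 12 sigma^2 > 6.25 sigma^2."""
    sigma = SIGMA_BIG
    return np.full(3, -2.0 * sigma), sigma ** 2, 0.0
```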
On the basis of the foregoing embodiment, updating the mean, variance, and weight of the Gaussian layer specifically includes: for each pixel that matches the Gaussian layer, acquiring the difference between the characteristic value of the pixel and the mean of the Gaussian layer; acquiring a new mean and a new variance from the sum of these differences and the sum of their squares, taking the number of pixels matched with the Gaussian layer as the new weight; and updating the Gaussian layer with the new mean, the new variance, and the new weight.
In particular, since the present embodiment employs segmented learning of the background model, the background model does not need to be maintained throughout the current time period; the learned background model is computed only once, at the end of the current time period. If the sum p of the n attribution values of any Gaussian layer in the current time period and their square sum q are obtained, the mean μ and the variance σ² of the background model learned in the current time period can be calculated, where n is the number of pixels matched with the Gaussian layer and also serves as the weight of that layer in the background model corresponding to the current time period. The attribution value of a matched pixel is the difference between its characteristic value and the mean of the Gaussian layer. In the background-model learning process, only p and q need to be accumulated and no learning rate α needs to be set, which simplifies the learning method and accelerates the background learning process.
On the basis of the above-described embodiment, the new mean μ and the new variance σ² in the present embodiment are obtained by the following formulas:

$$\mu = \mu' + \frac{p}{n}$$

$$\sigma^2 = \frac{q}{n} - \left(\frac{p}{n}\right)^2$$

wherein p is the sum of the differences, q is the sum of the squares of the differences, n is the number of pixels matched with the Gaussian layer, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
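A numeric sketch of this one-shot computation (assuming, as one consistent reading of the formulas, that the differences are taken from the layer's previous mean μ'; the names and example values are illustrative):

```python
import numpy as np

def finish_layer(p, q, n, mu_old):
    """Recover the layer's new mean and variance from the accumulated sum
    of differences p, sum of squared differences q, and matched count n;
    the new weight is n itself."""
    mean_d = p / n                 # average difference from the old mean
    mu = mu_old + mean_d           # new mean
    var = q / n - mean_d ** 2      # variance is invariant to the shift
    return mu, var, n

# Scalar example: matched values 118, 121, 124 against mu_old = 120.
d = np.array([118.0, 121.0, 124.0]) - 120.0
print(finish_layer(d.sum(), (d ** 2).sum(), d.size, 120.0))  # (121.0, 6.0, 3)
```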
On the basis of the foregoing embodiment, step S101 in this embodiment specifically includes: dividing the current time period into a plurality of sub-time periods; for any Gaussian layer in the background model most recently acquired before any sub-time period, if pixels in the video frames corresponding to that sub-time period match the Gaussian layer, acquiring the difference between the characteristic value of each matched pixel and the mean of the Gaussian layer, together with the sum and the sum of squares of these differences; accumulating the sums of differences and the sums of squares over the sub-time periods; acquiring a new mean and a new variance from the accumulated sum of differences and the accumulated sum of squared differences, taking the number of pixels matched with the Gaussian layer in all the sub-time periods as the new weight; and updating the Gaussian layer with the new mean, the new variance, and the new weight.
Specifically, when the time periods do not overlap, the background model most recently acquired before a sub-time period is the one learned in the time period before the current time period. When the time periods overlap, background modeling for the previous time period is not yet complete, so the background model most recently acquired before the sub-time period is not the one learned in the time period immediately before the current one. To increase the speed of background learning, the current period is divided into several sub-periods a, and the accumulation of P and Q is carried out within each sub-period a. After all sub-periods of the current period have been learned, a new model is formed from the accumulated P and Q, as shown in fig. 4.
On the basis of the above-described embodiment, the new mean μ and the new variance σ² in the present embodiment are obtained by the following formulas:

$$\mu = \mu' + \frac{P}{N}$$

$$\sigma^2 = \frac{Q}{N} - \left(\frac{P}{N}\right)^2$$

wherein P is the accumulated sum of the differences, Q is the accumulated sum of the squares of the differences, N is the number of pixels matched with the Gaussian layer in all the sub-time periods, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
On the basis of the above embodiment, in this embodiment, one or more sub-periods overlap between the current period and the period before the current period; one or more sub-time periods overlap between the current time period and a time period subsequent to the current time period.
Specifically, in this embodiment the time periods of the target intersection traffic video overlap, which keeps the background model current. As shown in fig. 4, each period A is divided into 4 identical sub-periods a, so the background model is refreshed every time length a, guaranteeing its timeliness. When computing the background model of the current period, because the current period overlaps the previous one, the sums of differences and sums of squares for the overlapped sub-periods were already computed for the previous period; they are accumulated directly without being computed again.
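The sliding-window bookkeeping can be sketched as follows (a hedged illustration assuming 4 sub-periods per period as in fig. 4; the cached per-sub-period sums of overlapped sub-periods are reused rather than recomputed, and all names are illustrative):

```python
from collections import deque

SUBS_PER_PERIOD = 4                        # sub-periods a per period A (fig. 4)
window = deque(maxlen=SUBS_PER_PERIOD)     # cached (p, q, n) per sub-period

def on_subperiod_end(p, q, n, mu_old):
    """Feed one sub-period's freshly accumulated sums; once the window is
    full, every call yields a refreshed (mu, var, weight) for the layer,
    i.e., the model is updated every time length a."""
    window.append((p, q, n))
    if len(window) < SUBS_PER_PERIOD:
        return None                        # first period still filling up
    P = sum(s[0] for s in window)          # overlapped sums reused from cache
    Q = sum(s[1] for s in window)
    N = sum(s[2] for s in window)
    return mu_old + P / N, Q / N - (P / N) ** 2, N
```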
On the basis of the foregoing embodiment, in this embodiment, the step of performing vehicle detection on the video frame corresponding to the next time period by using the background model corresponding to the current time period in step S102 specifically includes: and carrying out vehicle detection on the video frames which are not subjected to vehicle detection in the video frames corresponding to the next time period by using the background model corresponding to the current time period.
Specifically, because the time periods overlap, once the background model corresponding to the current period is complete, vehicle detection is performed on those video frames of the next period that have not yet undergone vehicle detection. As shown in fig. 4, assume the current period is t, the background model learned in it is M_t, and the next period is t+1; the video frames of period t+1 on which vehicle detection has not been performed are those corresponding to the fourth sub-period of period t+1, so M_t is used for vehicle detection on the frames of that fourth sub-period. In this embodiment a freshly updated model is thus used for detection every interval a, improving the timeliness of vehicle detection.
In another embodiment of the present invention, a system for detecting vehicles in an intersection traffic video is provided, and fig. 5 is a schematic diagram of an overall structure of the system for detecting vehicles in an intersection traffic video provided in the embodiment of the present invention, where the system includes an obtaining module 1 and a detecting module 2; wherein:
the acquisition module 1 is used for performing background learning on a video frame corresponding to a current time period for vehicle detection in a traffic video of a target intersection, and acquiring a background model corresponding to the current time period;
the target intersection traffic video refers to a shot video of the target intersection traffic. The target intersection traffic video may be a historical video of the target traffic intersection or a video of the target traffic intersection taken in real time. Because the intersection traffic condition can better reflect the actual traffic condition, vehicle detection is usually carried out on the intersection traffic video, the intersection vehicle passing rate and the occupancy are further calculated, and a basis is provided for traffic planning and adjustment. Due to the fact that the background is updated in real time, the situation that the vehicle waits under the red light condition is easily mistaken as the background, and therefore vehicle detection errors are caused. In this embodiment, the acquisition module 1 divides the time axis of the traffic video at the target intersection into a plurality of time segments, and learns the background in each time segment. The background model does not need to be established in each time period, and the background model corresponding to each time period only needs to be calculated at one time according to the video frame of each time period at the end of each time period. The current time period is a time period in the target route traffic video. The background learning of the video frame corresponding to the current time period refers to comparing the characteristics of the video frame corresponding to the current time period with a background model established before the current time period, and updating the background model according to the comparison result to obtain the background model corresponding to the current time period. The present embodiment does not limit the algorithm of background learning.
The detection module 2 is configured to acquire the video frames corresponding to the time period after the current time period in the target intersection traffic video, perform vehicle detection on those video frames using the background model corresponding to the current time period, and take that time period as the next current time period for vehicle detection; the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection.
The detection module 2 uses the background model learned in each time period for vehicle detection on the video frames corresponding to the following time period. The time lengths of the periods may be the same or different, and the periods may or may not overlap. As shown in fig. 3, assume the periods all have the same length A and that periods t, t+1, and t+2 do not overlap. Background learning is performed on the video frames corresponding to periods t, t+1, and t+2 respectively, yielding the corresponding background models M_t, M_{t+1}, and M_{t+2}. In period t, the background model M_{t-1} acquired in period t-1 is used for vehicle detection; in period t+1, the background model M_t acquired in period t is used; and in period t+2, the background model M_{t+1} acquired in period t+1 is used. The present embodiment does not limit the vehicle detection method. After the background model learned in the current period has been used for vehicle detection on the video frames of the following period, the following period becomes the current period, and the background learning and vehicle detection steps are executed iteratively, achieving real-time background learning and real-time vehicle detection.
The preset red-light waiting duration is the waiting time required under the preset red-light condition. To ensure that, within each time period, the time each vehicle does not occupy the intersection is longer than the time it occupies the intersection, the length of each time period is set to more than twice the preset red-light waiting duration of the target intersection. The occupation time is the time a vehicle stays at the intersection. A vehicle may stop at the intersection when the red light starts, or at some point after the red light starts and before the green light starts. When traffic is not congested, a vehicle stays at the intersection at most for the preset waiting duration and then drives away. Under congestion, however, a vehicle may still be at the intersection when the next red light starts and only leave afterwards, so the length of each time period needs to be more than twice the duration between two red lights.
In this embodiment, the time axis of the target intersection traffic video is divided into a plurality of time periods and the background is learned within each period; no background model needs to be maintained throughout a period, since the background model corresponding to each period is computed once at the end of that period. The background model learned in each period is used for vehicle detection on the video frames corresponding to the following period, and the duration of each period is more than twice the preset red-light waiting duration of the target intersection, so that scenes of waiting vehicles are prevented from being falsely detected as background and the correctness of background detection and vehicle detection is improved.
On the basis of the foregoing embodiment, the obtaining module in this embodiment is specifically configured to: and performing background learning on the video frame corresponding to the current time period by adopting a Gaussian mixture model.
On the basis of the foregoing embodiment, the obtaining module in this embodiment is further specifically configured to: for any Gaussian layer in the background model corresponding to the time period before the current time period, if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is smaller than a preset multiple of the standard deviation of the Gaussian layer, the pixel matches the Gaussian layer, and the mean, variance, and weight of the Gaussian layer are updated according to the matched pixels; if the difference is greater than or equal to the preset multiple of the standard deviation, the pixel does not match the Gaussian layer, and only the weight of the Gaussian layer is updated; and if no pixels in the video frames corresponding to the current time period match the Gaussian layer, the Gaussian layer is reset.
On the basis of the foregoing embodiment, the obtaining module in this embodiment is further specifically configured to: for each pixel that matches the Gaussian layer, acquire the difference between the characteristic value of the pixel and the mean of the Gaussian layer; acquire a new mean and a new variance from the sum of these differences and the sum of their squares, taking the number of pixels matched with the Gaussian layer as the new weight; and update the Gaussian layer with the new mean, the new variance, and the new weight.
On the basis of the above-described embodiment, the new mean μ and the new variance σ² in this embodiment are obtained by the following formulas:

$$\mu = \mu' + \frac{p}{n}$$

$$\sigma^2 = \frac{q}{n} - \left(\frac{p}{n}\right)^2$$

wherein p is the sum of the differences, q is the sum of the squares of the differences, n is the number of pixels matched with the Gaussian layer, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
On the basis of the foregoing embodiment, the obtaining module in this embodiment is specifically configured to: divide the current time period into a plurality of sub-time periods; for any Gaussian layer in the background model most recently acquired before any sub-time period, if pixels in the video frames corresponding to that sub-time period match the Gaussian layer, acquire the difference between the characteristic value of each matched pixel and the mean of the Gaussian layer, together with the sum and the sum of squares of these differences; accumulate the sums of differences and the sums of squares over the sub-time periods; acquire a new mean and a new variance from the accumulated sum of differences and the accumulated sum of squared differences, taking the number of pixels matched with the Gaussian layer in all the sub-time periods as the new weight; and update the Gaussian layer with the new mean, the new variance, and the new weight.
On the basis of the above-described embodiment, the new mean μ and the new variance σ² in this embodiment are obtained by the following formulas:

$$\mu = \mu' + \frac{P}{N}$$

$$\sigma^2 = \frac{Q}{N} - \left(\frac{P}{N}\right)^2$$

wherein P is the accumulated sum of the differences, Q is the accumulated sum of the squares of the differences, N is the number of pixels matched with the Gaussian layer in all the sub-time periods, and μ' is the previous mean of the Gaussian layer, from which the differences are taken.
On the basis of the above embodiment, in this embodiment, one or more sub-periods overlap between the current period and the period before the current period; one or more sub-time periods overlap between the current time period and a time period subsequent to the current time period.
On the basis of the foregoing embodiment, the detection module in this embodiment is specifically configured to: and carrying out vehicle detection on the video frames which are not subjected to vehicle detection in the video frames corresponding to the next time period by using the background model corresponding to the current time period.
The present embodiment provides a vehicle detection device in an intersection traffic video, and fig. 6 is a schematic view of the overall structure of the device provided in the embodiment of the present invention, where the device includes: at least one processor 61, at least one memory 62, and a bus 63, wherein:
the processor 61 and the memory 62 communicate with each other via a bus 63;
the memory 62 stores program instructions executable by the processor 61, and the processor calls the program instructions to execute the methods provided by the above method embodiments, for example, the method includes: for a current time period for vehicle detection in a traffic video of a target intersection, performing background learning on a video frame corresponding to the current time period to obtain a background model corresponding to the current time period; acquiring a video frame corresponding to a time period after the current time period in the traffic video of the target intersection, performing vehicle detection on the video frame corresponding to the time period after the current time period by using a background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for performing vehicle detection; and the time length of the current time period and the time length of the next time period are respectively more than twice of the preset red light waiting time length of the target intersection.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above method embodiments, for example, including: for a current time period used for vehicle detection in a traffic video of a target intersection, performing background learning on the video frames corresponding to the current time period to obtain a background model corresponding to the current time period; acquiring the video frames corresponding to the time period after the current time period in the target intersection traffic video, performing vehicle detection on those video frames using the background model corresponding to the current time period, and taking that time period as the next current time period for vehicle detection; the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described embodiments of the vehicle detection apparatus in the intersection traffic video are merely illustrative, where the units illustrated as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, the above embodiments are only preferred embodiments of the present application and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. A method for detecting vehicles in intersection traffic videos is characterized by comprising the following steps:
for a current time period for vehicle detection in a traffic video of a target intersection, performing background learning on a video frame corresponding to the current time period to obtain a background model corresponding to the current time period;
the obtaining of the background model corresponding to the current time period comprises: computing the background model corresponding to the current time period once, at the end of the current time period, from the video frames of the current time period;
acquiring a video frame corresponding to a time period after the current time period in the traffic video of the target intersection, performing vehicle detection on the video frame corresponding to the time period after the current time period by using a background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for performing vehicle detection;
the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection;
the step of performing background learning on the video frame corresponding to the current time period specifically includes:
performing background learning on the video frame corresponding to the current time period by adopting a Gaussian mixture model;
the step of performing background learning on the video frame corresponding to the current time period by using a gaussian mixture model specifically includes:
for any Gaussian layer in the background model most recently acquired before the current time period, if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is smaller than a preset multiple of the standard deviation of the Gaussian layer, the pixel matches the Gaussian layer, and the mean, variance, and weight of the Gaussian layer are updated according to the matched pixels;

if the difference between the characteristic value of a pixel in a video frame corresponding to the current time period and the mean of the Gaussian layer is greater than or equal to the preset multiple of the standard deviation of the Gaussian layer, the pixel does not match the Gaussian layer, and the weight of the Gaussian layer is updated;
if the pixels matched with the Gaussian layer do not exist in the video frame corresponding to the current time period, resetting the Gaussian layer;
the updating the mean, variance and weight of the gaussian layer specifically includes:
if a pixel matches the Gaussian layer, acquiring the difference between the characteristic value of the pixel and the mean of the Gaussian layer;

obtaining a new mean and a new variance according to the sum of the differences and the sum of their squares; taking the number of the pixels matched with the Gaussian layer as the new weight;
updating the Gaussian layer according to the new mean value, the new variance and the new weight;
the new mean mu and the new variance sigma2Obtained by the following formula:
Figure FDA0002566103120000021
Figure FDA0002566103120000022
wherein p is the sum of the differences, q is the sum of the squares of the differences, and n is the number of pixels matched with the Gaussian layer.
2. A method for detecting vehicles in intersection traffic videos is characterized by comprising the following steps:
for a current time period for vehicle detection in a traffic video of a target intersection, performing background learning on a video frame corresponding to the current time period to obtain a background model corresponding to the current time period;
the obtaining of the background model corresponding to the current time period comprises: computing the background model corresponding to the current time period once, at the end of the current time period, from the video frames of the current time period;
acquiring a video frame corresponding to a time period after the current time period in the traffic video of the target intersection, performing vehicle detection on the video frame corresponding to the time period after the current time period by using a background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for performing vehicle detection;
the durations of the current time period and of the next time period are each more than twice the preset red-light waiting duration of the target intersection;
the step of performing background learning on the video frame corresponding to the current time period specifically includes:
performing background learning on the video frame corresponding to the current time period by adopting a Gaussian mixture model;
the step of performing background learning on the video frame corresponding to the current time period and acquiring the background model corresponding to the current time period specifically includes:
dividing the current time period into a plurality of sub-time periods; for any Gaussian layer in the background model most recently acquired before any sub-time period, if a pixel in a video frame corresponding to that sub-time period matches the Gaussian layer, acquiring the difference between the characteristic value of that pixel and a preset multiple of the mean of the Gaussian layer, as well as the sum and the sum of squares of the differences;
accumulating the sums of the differences and the sums of the squares over the sub-time periods respectively;
obtaining a new mean and a new variance from the accumulated sum of the differences and the accumulated sum of the squares of the differences, and taking the number of pixels matched with the Gaussian layer over all the sub-time periods as the new weight;
updating the Gaussian layer according to the new mean value, the new variance and the new weight;
wherein the new mean μ and the new variance σ² are obtained by the following formulas:

μ = P / N

σ² = Q / N - μ²

wherein P is the accumulated sum of the differences, Q is the accumulated sum of the squares of the differences, and N is the number of pixels matched with the Gaussian layer over all the sub-time periods.
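For illustration only: a minimal sketch of the sub-period accumulation recited in claim 2, under the same assumptions as the sketch after claim 1; sub_period_stats and finalize_layer are illustrative names, not terms from the patent.

```python
import numpy as np

def sub_period_stats(layer_mean, frames, k_mult=2.5):
    """Accumulate (p, q, n) for one sub-time period against a fixed layer."""
    threshold = k_mult * layer_mean
    p, q, n = 0.0, 0.0, 0
    for frame in frames:
        d = frame[frame < threshold].astype(np.float64) - threshold
        p += d.sum()
        q += (d ** 2).sum()
        n += d.size
    return p, q, n

def finalize_layer(stats_per_subperiod):
    """Combine the per-sub-period sums once, at the end of the time period."""
    P = sum(s[0] for s in stats_per_subperiod)  # accumulated sum of differences
    Q = sum(s[1] for s in stats_per_subperiod)  # accumulated sum of squares
    N = sum(s[2] for s in stats_per_subperiod)  # matched pixels, all sub-periods
    if N == 0:
        return None  # no match in any sub-period: the layer would be reset
    mu = P / N
    sigma2 = Q / N - mu ** 2
    return mu, sigma2, N  # new mean, new variance, new weight
```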
3. The method of claim 2, wherein one or more of the sub-time periods overlap between the current time period and a time period prior to the current time period; one or more of the sub-time periods overlap between the current time period and a time period subsequent to the current time period.
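For illustration only: one way to realize claim 3's overlap, assuming sub-periods are addressed by frame indices; overlapping_subperiods and its parameters are illustrative assumptions.

```python
def overlapping_subperiods(start, end, sub_len, overlap):
    """Split the period [start, end) into sub-periods of sub_len frames.
    The first sub-period begins `overlap` frames before `start` and the last
    may run past `end`, so neighbouring time periods share sub-periods."""
    step = sub_len - overlap
    t = start - overlap
    periods = []
    while t < end:
        periods.append((t, t + sub_len))
        t += step
    return periods

# Example: a 300-frame period with 60-frame sub-periods overlapping by 15
# yields (-15, 45), (30, 90), ..., (255, 315): the first and last sub-periods
# extend into the previous and next time periods respectively.
```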
4. The method according to claim 3, wherein the step of performing vehicle detection on the video frame corresponding to the next time period by using the background model corresponding to the current time period specifically comprises:
using the background model corresponding to the current time period to perform vehicle detection on those video frames of the next time period on which vehicle detection has not yet been performed.
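For illustration only: the claims leave the per-frame detection step itself open, so this sketch uses plain background subtraction against a learned Gaussian background, which is one common realization; detect_vehicles and its parameters are assumptions, not the patented method.

```python
import numpy as np

def detect_vehicles(frame, bg_mean, bg_var, num_sigmas=2.5):
    """Flag a pixel as foreground (candidate vehicle) when it deviates from
    the background mean by more than num_sigmas standard deviations."""
    diff = np.abs(frame.astype(np.float64) - bg_mean)
    return diff > num_sigmas * np.sqrt(bg_var)

# Per claim 4, the current period's model is applied only to those frames of
# the next period that have not yet undergone vehicle detection, e.g.:
# masks = [detect_vehicles(f, mu, var) for f in pending_frames]
```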
5. A system for detecting vehicles in intersection traffic video, comprising:
the acquisition unit is used for performing background learning on the video frames corresponding to a current time period for vehicle detection in a traffic video of a target intersection and acquiring a background model corresponding to the current time period, the background model corresponding to the current time period being computed in a single pass, at the end of the current time period, from the video frames of that time period;
the detection unit is used for acquiring the video frames corresponding to the time period after the current time period in the traffic video of the target intersection, performing vehicle detection on those video frames by using the background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for vehicle detection;
wherein the duration of the current time period and the duration of the next time period are each greater than twice the preset red-light waiting duration of the target intersection;
wherein the acquisition unit is specifically configured to perform background learning on the video frames corresponding to the current time period by using a Gaussian mixture model;
the acquisition unit is further specifically configured so that, for any Gaussian layer in the background model corresponding to the time period preceding the current time period: if the characteristic value of a pixel in a video frame corresponding to the current time period is smaller than the preset multiple of the mean of the Gaussian layer, that pixel matches the Gaussian layer, and the mean, variance and weight of the Gaussian layer are updated according to the matched pixels;
if the characteristic value of a pixel in a video frame corresponding to the current time period is greater than or equal to the preset multiple of the mean of the Gaussian layer, that pixel does not match the Gaussian layer, and the weight of the Gaussian layer is updated;
if no pixel in the video frames corresponding to the current time period matches the Gaussian layer, the Gaussian layer is reset;
the acquisition unit is further configured to: if a pixel matches the Gaussian layer, acquire the difference between the characteristic value of that pixel and the preset multiple of the mean of the Gaussian layer;
obtain a new mean and a new variance from the sum of the differences and the sum of the squares of the differences, taking the number of pixels matched with the Gaussian layer as the new weight;
and update the Gaussian layer according to the new mean, the new variance and the new weight;
wherein the new mean μ and the new variance σ² are obtained by the following formulas:

μ = p / n

σ² = q / n - μ²

wherein p is the sum of the differences, q is the sum of the squares of the differences, and n is the number of pixels matched with the Gaussian layer.
6. A system for detecting vehicles in intersection traffic video, comprising:
the acquisition unit is used for performing background learning on the video frames corresponding to a current time period for vehicle detection in a traffic video of a target intersection and acquiring a background model corresponding to the current time period, the background model corresponding to the current time period being computed in a single pass, at the end of the current time period, from the video frames of that time period;
the detection unit is used for acquiring the video frames corresponding to the time period after the current time period in the traffic video of the target intersection, performing vehicle detection on those video frames by using the background model corresponding to the current time period, and taking the time period after the current time period as the next current time period for vehicle detection;
wherein the duration of the current time period and the duration of the next time period are each greater than twice the preset red-light waiting duration of the target intersection;
wherein the acquisition unit is specifically configured to perform background learning on the video frames corresponding to the current time period by using a Gaussian mixture model;
wherein the acquisition unit is specifically configured to: divide the current time period into a plurality of sub-time periods; for any Gaussian layer in the background model most recently acquired before any sub-time period, if a pixel in a video frame corresponding to that sub-time period matches the Gaussian layer, acquire the difference between the characteristic value of that pixel and a preset multiple of the mean of the Gaussian layer, as well as the sum and the sum of squares of the differences; accumulate the sums of the differences and the sums of the squares over the sub-time periods respectively; obtain a new mean and a new variance from the accumulated sum of the differences and the accumulated sum of the squares of the differences; take the number of pixels matched with the Gaussian layer over all the sub-time periods as the new weight; and update the Gaussian layer according to the new mean, the new variance and the new weight;
wherein the new mean μ and the new variance σ² are obtained by the following formulas:

μ = P / N

σ² = Q / N - μ²

wherein P is the accumulated sum of the differences, Q is the accumulated sum of the squares of the differences, and N is the number of pixels matched with the Gaussian layer over all the sub-time periods.
CN201810455711.6A 2018-05-14 2018-05-14 Method and system for detecting vehicles in intersection traffic video Active CN108648463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810455711.6A CN108648463B (en) 2018-05-14 2018-05-14 Method and system for detecting vehicles in intersection traffic video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810455711.6A CN108648463B (en) 2018-05-14 2018-05-14 Method and system for detecting vehicles in intersection traffic video

Publications (2)

Publication Number Publication Date
CN108648463A CN108648463A (en) 2018-10-12
CN108648463B true CN108648463B (en) 2020-10-27

Family

ID=63755151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810455711.6A Active CN108648463B (en) 2018-05-14 2018-05-14 Method and system for detecting vehicles in intersection traffic video

Country Status (1)

Country Link
CN (1) CN108648463B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383455A (en) * 2020-03-11 2020-07-07 上海眼控科技股份有限公司 Traffic intersection object flow statistical method, device, computer equipment and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4666012B2 (en) * 2008-06-20 2011-04-06 ソニー株式会社 Image processing apparatus, image processing method, and program
CN103177456A (en) * 2013-03-29 2013-06-26 上海理工大学 Method for detecting moving target of video image
CN105976612B (en) * 2016-04-27 2017-07-07 东南大学 Vehicle checking method in urban transportation scene based on robust mixed Gauss model
CN106504273B (en) * 2016-10-28 2020-05-15 天津大学 Improved method based on GMM moving object detection
CN106780548A (en) * 2016-11-16 2017-05-31 南宁市浩发科技有限公司 moving vehicle detection method based on traffic video
CN107169992A (en) * 2017-05-11 2017-09-15 南宁市正祥科技有限公司 A kind of traffic video moving target detecting method

Also Published As

Publication number Publication date
CN108648463A (en) 2018-10-12

Similar Documents

Publication Publication Date Title
US11176381B2 (en) Video object segmentation by reference-guided mask propagation
JP7024115B2 (en) Intelligent drive control methods and devices based on lane markings, as well as electronic devices
CN111460926B (en) Video pedestrian detection method fusing multi-target tracking clues
CN112991389B (en) Target tracking method and device and mobile robot
CN108805016B (en) Head and shoulder area detection method and device
CN111209770A (en) Lane line identification method and device
CN107563299B (en) Pedestrian detection method using RecNN to fuse context information
CN110766061B (en) Road scene matching method and device
CN111046973B (en) Multitasking detection method and device and storage medium
CN112651274B (en) Road obstacle detection device, road obstacle detection method, and recording medium
CN109657077A (en) Model training method, lane line generation method, equipment and storage medium
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN113034634A (en) Adaptive imaging method, system and computer medium based on pulse signal
CN112396042A (en) Real-time updated target detection method and system, and computer-readable storage medium
CN108648463B (en) Method and system for detecting vehicles in intersection traffic video
CN104700384B (en) Display systems and methods of exhibiting based on augmented reality
JP7163718B2 (en) INTERFERENCE AREA DETECTION DEVICE AND METHOD, AND ELECTRONIC DEVICE
CN112990102B (en) Improved Centernet complex environment target detection method
CN114219901B (en) Three-dimensional chassis projection method based on projection consistency and twin Transformer
CN111126170A (en) Video dynamic object detection method based on target detection and tracking
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN114998801A (en) Forest fire smoke video detection method based on contrast self-supervision learning network
CN112597825A (en) Driving scene segmentation method and device, electronic equipment and storage medium
CN114118188A (en) Processing system, method and storage medium for moving objects in an image to be detected
CN111860261A (en) Passenger flow value statistical method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231116

Address after: No. 57-5 Development Avenue, No. 6015, Yichang Area, China (Hubei) Free Trade Zone, Yichang City, Hubei Province, 443005

Patentee after: Hubei Jiugan Technology Co.,Ltd.

Address before: 443002, China Three Gorges University, 8, University Road, Hubei, Yichang

Patentee before: CHINA THREE GORGES University
