CN111310565A - Pedestrian detection method fusing image static characteristics and long-time and short-time motion characteristics - Google Patents
- Publication number
- CN111310565A (application CN202010044364.5A)
- Authority
- CN
- China
- Prior art keywords
- channel
- image
- motion
- long
- short
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 51
- 230000003068 static effect Effects 0.000 title claims abstract description 20
- 230000003287 optical effect Effects 0.000 claims abstract description 44
- 238000000034 method Methods 0.000 claims abstract description 24
- 230000002776 aggregation Effects 0.000 claims abstract description 8
- 238000004220 aggregation Methods 0.000 claims abstract description 8
- 238000009825 accumulation Methods 0.000 claims abstract description 3
- 230000005764 inhibitory process Effects 0.000 claims abstract description 3
- 238000004364 calculation method Methods 0.000 claims description 5
- 230000008569 process Effects 0.000 claims description 5
- 230000009466 transformation Effects 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000012163 sequencing technique Methods 0.000 claims description 3
- 238000009499 grossing Methods 0.000 claims description 2
- 230000002401 inhibitory effect Effects 0.000 claims description 2
- 230000001131 transforming effect Effects 0.000 claims description 2
- 238000010586 diagram Methods 0.000 description 2
- 206010039203 Road traffic accident Diseases 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000036544 posture Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24317—Piecewise classification, i.e. whereby each classification requires several discriminant rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a pedestrian detection method fusing image static characteristics and long-and-short-term motion characteristics, comprising the following steps: 1) extracting LUV color features of an input image; 2) extracting gradient magnitude features of the input image; 3) extracting histogram-of-oriented-gradients features of the input image; 4) acquiring the optical flow between consecutive frames of the input image based on the Lucas-Kanade method; 5) aligning consecutive frames with a translation transform based on optical flow accumulation; 6) extracting long-time and short-time motion features; 7) aggregating the channel features; 8) extracting pedestrian candidate boxes based on Adaboost and a sliding-window method; 9) extracting pedestrian windows based on non-maximum suppression to complete pedestrian detection. Compared with the prior art, the invention improves the accuracy of pedestrian detection by combining static features with motion features.
Description
Technical Field
The invention relates to the field of computer vision and pattern recognition, in particular to a pedestrian detection method fusing image static characteristics and long-and-short-term motion characteristics.
Background
Pedestrians are among the most important and most direct participants in road traffic, and also among the most vulnerable. The road traffic environment in China is complex, with pedestrians and vehicles frequently mixed, a situation that easily leads to traffic accidents and challenges the safety of pedestrians on the road. With the continuous development and innovation of automotive electronic technology, the detection of pedestrians in front of the vehicle has therefore attracted the attention of automobile manufacturers and related research institutions.
With the development of computer vision, vision-based pedestrian detection is increasingly applied in vehicle driving-assistance systems and in autonomous-driving research for intelligent vehicles. Pedestrian detection research still faces many difficulties: pedestrians vary in clothing and posture, and backgrounds are complex and changeable, so a single pedestrian feature rarely achieves a good detection result, making detection that fuses multiple features necessary. At the same time, pedestrians exhibit a certain amount of motion between the video frames acquired by a vehicle-mounted camera, so the optical flow between consecutive frames can be used to incorporate pedestrian motion features. To further improve the recall rate of pedestrian detection and reduce the false-detection rate, a method that combines static image channel features with long-time and short-time motion features is needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a pedestrian detection method fusing image static characteristics and long and short time motion characteristics.
The purpose of the invention can be realized by the following technical scheme:
a pedestrian detection method fusing image static characteristics and long-and-short-term motion characteristics comprises the following steps:
1) extracting LUV color features of an input image:
converting the input RGB image into an LUV image, separating the chrominance components U and V and the luminance component L of the LUV image, and storing them separately as an L channel, a U channel and a V channel;
2) extracting gradient amplitude characteristics of an input image:
performing convolution on an input image in the X direction and the Y direction respectively, acquiring the gradient amplitude of each pixel in the image based on the gradient values of the pixel in the X direction and the Y direction, and storing the gradient amplitude of each point in the image as a gradient amplitude channel;
3) extracting directional gradient histogram features of an input image;
4) acquiring the optical flow between consecutive frames of the input image based on the Lucas-Kanade method;
5) aligning successive frames with a translation transform based on optical flow accumulation;
6) extracting long-time and short-time motion characteristics;
7) channel feature aggregation: performing feature aggregation on an L channel, a U channel, a V channel, a gradient amplitude channel, a directional gradient histogram channel and a long and short time motion feature channel;
8) extracting pedestrian candidate boxes based on Adaboost and a sliding-window method;
9) extracting pedestrian windows based on non-maximum suppression to complete pedestrian detection.
The step 3) is specifically as follows:
acquiring the gradient direction of each pixel in the image, equally dividing the 0-180 degree range of gradient directions into 6 intervals corresponding to six channels, determining the angle interval to which each pixel belongs, and using the gradient magnitude as the histogram weight to obtain the six-channel directional gradient histogram channel features;
the step 4) is specifically as follows:
acquiring the optical flow between consecutive frames with the Lucas-Kanade method, based on the assumptions that the pixel value of a given point is unchanged between adjacent frames, that object motion between adjacent frames is very small, and that all pixels in the neighborhood of a point move consistently.
The step 5) is specifically as follows:
according to an optical flow field between every two frames, the accumulated optical flows between the current frame and the previous 1, 2 and 3 frames of the current frame are respectively obtained, and the obtaining method comprises the following steps:
successively superposing adjacent optical flow fields in the x and y directions and then smoothing, obtaining the accumulated optical flow from the 1st, 2nd and 3rd previous frames to the current frame; the motion of each pixel in an accumulated optical flow field is then used as the translation element of a homography matrix, and a translation operation in the affine transformation of the image yields the past frame aligned with the current frame.
The step 6) is specifically as follows:
the method comprises the steps of extracting the motion characteristics of pedestrians by adopting a multi-frame time difference characteristic calculation method, when detection is carried out on the basis of a current frame, forwardly reading a long moment to eliminate camera motion, extracting a motion characteristic at one time and recording the motion characteristic as a long-time motion characteristic, forwardly reading a short moment to eliminate camera motion, extracting a motion characteristic at one time and recording the motion characteristic as a short-time motion characteristic.
In the step 7), a non-overlapping block with a size of 2 × 2 pixels is adopted to perform feature aggregation on the L channel, the U channel, the V channel, the gradient amplitude channel, the directional gradient histogram channel, and the long and short time motion feature channel, and the sum of pixel values in the block is used as the feature of the block.
The step 8) is specifically as follows:
the windows are judged sequentially by a plurality of Adaboost strong classifiers; only a window judged positive by every strong classifier is finally judged a pedestrian, and detection of a window ends as soon as any strong classifier judges it a non-pedestrian.
The step 9) specifically comprises the following steps:
91) sorting the detection windows by confidence and selecting the bounding box with the highest score;
92) traversing all remaining detection windows, computing the overlap-area ratio of each with the current highest-scoring bounding box, and suppressing, i.e. deleting, any detection window whose overlap ratio exceeds a threshold;
93) continuing to select the highest-scoring bounding box from those not yet suppressed, and repeating step 92) until all bounding boxes have been processed.
In said step 92), the threshold of the overlap-area ratio is set to 0.5.
In the step 6), the longer time is specifically 8 frames before the current frame, and the shorter time is specifically 4 frames before the current frame.
Compared with the prior art, the invention has the following advantages:
the features adopted by the invention comprise static features (LUV color channel feature + gradient amplitude channel feature + direction gradient histogram channel feature) and dynamic features (short-time motion feature + long-time motion feature), because the static features LUV color channel feature represents the texture information of the pedestrian, the gradient amplitude channel feature and the direction gradient histogram channel feature represent the contour information of the pedestrian, and the short-time motion feature and the long-time motion feature in the dynamic features comprise the motion information of the pedestrian with different motion speeds, the invention effectively improves the accuracy of pedestrian identification by combining the static features and the motion features.
Drawings
FIG. 1 is an overall process flow diagram of the present invention.
Fig. 2 is an original pedestrian picture.
Fig. 3 shows the extracted LUV color channel characteristics, where fig. 3a shows a color channel L, fig. 3b shows a color channel U, and fig. 3c shows a color channel V.
FIG. 4 is a gradient magnitude channel.
FIG. 5 shows the directional gradient histogram channel features, where FIG. 5a is the 0-30 degree direction interval, FIG. 5b the 30-60 degree interval, FIG. 5c the 60-90 degree interval, FIG. 5d the 90-120 degree interval, FIG. 5e the 120-150 degree interval, and FIG. 5f the 150-180 degree interval.
Fig. 6 shows consecutive frames of images under camera motion, where fig. 6a-6d show the first to fourth frames.
Fig. 7 is an accumulated optical flow field of frames 1-3 and a current 4 th frame, where fig. 7a is an accumulated optical flow field of frames 1-4 in the x direction, fig. 7b is an accumulated optical flow field of frames 2-4 in the x direction, fig. 7c is an accumulated optical flow field of frames 3-4 in the x direction, fig. 7d is an accumulated optical flow field of frames 1-4 in the y direction, fig. 7e is an accumulated optical flow field of frames 2-4 in the y direction, and fig. 7f is an accumulated optical flow field of frames 3-4 in the y direction.
FIG. 8 shows the motion features extracted by temporal-difference feature extraction, where FIG. 8a is the motion feature corresponding to I_{t-8,t}, FIG. 8b to I_{t-4,t}, FIG. 8c to I_t, FIG. 8d to I_t - I_{t-4,t}, and FIG. 8e to I_t - I_{t-8,t}.
Fig. 9 is a schematic diagram of a cascade classifier detection process.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
Step 1: taking the original pedestrian picture shown in fig. 2 as an example, the input RGB image is first converted into an LUV image; the chrominance components U (green-red) and V (blue-yellow) and the luminance component L are separated and stored as the L, U and V channels. The extracted features are shown in fig. 3;
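As an illustration (not part of the patent), the LUV extraction of step 1 can be sketched with a plain NumPy conversion through CIE XYZ. The exact scaling of the L, U and V channels varies between libraries (OpenCV, for example, rescales them to 0-255), so the raw CIE ranges are kept here:

```python
import numpy as np

def rgb_to_luv(img):
    """Convert an HxWx3 uint8 sRGB image to CIE LUV channels.

    Returns (L, U, V) as float arrays in raw CIE ranges; scaling
    conventions differ between libraries.
    """
    rgb = img.astype(np.float64) / 255.0
    # sRGB gamma expansion to linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65 white point)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    XYZ = lin @ M.T
    X, Y, Z = XYZ[..., 0], XYZ[..., 1], XYZ[..., 2]
    # lightness L* from relative luminance Y
    L = np.where(Y > (6 / 29) ** 3, 116.0 * np.cbrt(Y) - 16.0, (29 / 3) ** 3 * Y)
    # u', v' chromaticity (small epsilon guards the black pixel case)
    denom = X + 15.0 * Y + 3.0 * Z + 1e-12
    up, vp = 4.0 * X / denom, 9.0 * Y / denom
    un, vn = 0.1978, 0.4683  # D65 reference white chromaticity
    U = 13.0 * L * (up - un)
    V = 13.0 * L * (vp - vn)
    return L, U, V
```

A pure white pixel maps to L close to 100 with U and V near zero, which is a quick sanity check for the conversion.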
step 2: the input image shown in fig. 2 is convolved in the X direction and the Y direction, respectively. For each pixel in the image, the gradient magnitude of the pixel is calculated based on the gradient values of the pixel in the X-direction and the Y-direction. Storing the gradient amplitude of each point in the image as a gradient amplitude channel, wherein the calculated gradient amplitude is shown in FIG. 4;
Step 3: for each pixel, the gradient direction is calculated from the X- and Y-direction gradients obtained in step 2, and the 0-180 degree range of gradient directions is equally divided into 6 intervals. Histogram features are computed for the corresponding six channels: the angle interval to which each pixel belongs is determined, and the gradient magnitude is used as the histogram weight, giving the six-channel directional gradient histogram features shown in fig. 5;
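Step 3 can be illustrated as follows; the helper assumes unsigned gradient directions folded into [0, 180) and six equal 30-degree bins, with each pixel voting its gradient magnitude into its bin's channel:

```python
import numpy as np

def hog_channels(gx, gy, n_bins=6):
    """Six orientation channels over 0-180 degrees.

    Each pixel contributes its gradient magnitude to the channel whose
    angle interval contains its (unsigned) gradient direction.
    """
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0        # fold into [0, 180)
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    channels = np.zeros(gx.shape + (n_bins,))
    for b in range(n_bins):
        channels[..., b] = np.where(bins == b, mag, 0.0)
    return channels
```

A purely horizontal gradient falls in the first (0-30 degree) channel and a purely vertical one in the fourth (90-120 degree) channel.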
Step 4: based on three assumptions — the pixel value of a given point is unchanged between adjacent frames, object motion between adjacent frames is small, and all pixels in the neighborhood of a point move consistently — the Lucas-Kanade method computes the optical flow between consecutive frames by solving, for each pixel,
Av = b
where the rows of A stack the spatial partial derivatives I_x and I_y of the image over the pixel's neighborhood, b stacks the corresponding negated temporal derivatives -I_t (I_x, I_y and I_t, the partial derivatives of the image at (x, y, t), are treated as known quantities), and v = (V_x, V_y) is the velocity of the pixel in the x and y directions, i.e. its optical flow.
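A minimal dense Lucas-Kanade solver along these lines is sketched below; the 7 × 7 window and the derivative kernels are assumptions, since the patent only names the method:

```python
import numpy as np

def lucas_kanade(I1, I2, win=7):
    """Dense Lucas-Kanade flow: solve A v = b per pixel over a local window.

    A stacks [Ix, Iy] over the window, b stacks -It; the normal equations
    below are the 2x2 least-squares form of A v = b. Window size and
    derivative kernels are illustrative assumptions.
    """
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    r = win // 2
    h, w = I1.shape
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.array([[ix @ ix, ix @ iy], [ix @ iy, iy @ iy]])
            b = -np.array([ix @ it, iy @ it])
            if np.linalg.det(A) > 1e-6:  # skip untextured (rank-deficient) windows
                flow[y, x] = np.linalg.solve(A, b)
    return flow
```

For a smooth blob shifted by one pixel in x, the recovered flow at the blob centre should be close to (1, 0), consistent with the small-motion assumption.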
Step 5: taking the consecutive images shot from a moving vehicle shown in fig. 6 as an example, the accumulated optical flows between the 1st, 2nd and 3rd frames and the current (4th) frame are computed from the pairwise optical flow fields: adjacent flow fields are successively superposed in the x and y directions, and each accumulated field is then lightly smoothed to reduce the influence of abrupt flow changes. The resulting accumulated flows from frames 1, 2 and 3 to the current frame are shown in fig. 7.
Step 6: the accumulated optical flow field is obtained through calculation, namely, the moving speed of each pixel in the accumulated optical flow field is used as an element of a homography matrix through translation operation in affine transformation of an image, and a result after the pixel is aligned with the current frame is obtained through transformation;
and 7: taking the pedestrian in fig. 6 as an example, when the detection is performed based on the current frame, the camera motion is eliminated by reading forward a longer time (for example, 8 frames), and the motion feature is extracted once and recorded as the long-term motion feature, as shown in fig. 8 d;
and 8: taking the pedestrian in fig. 6 as an example, when the detection is performed based on the current frame, the camera motion is eliminated by reading a short time (for example, 4 frames) forward, and the motion feature is extracted once and recorded as the short-time motion feature, as shown in fig. 8 e;
and step 9: for the obtained LUV channel, gradient amplitude channel, direction gradient histogram channel and long-time and short-time movement characteristic channel, carrying out characteristic aggregation by adopting non-overlapping block (2 x 2 pixel size), and similarly calculating the sum of pixel values in the block as the characteristic of the block;
Step 10: as shown in fig. 9, each window of the input image to be detected is judged sequentially by the Adaboost strong classifiers; only windows judged positive by all strong classifiers are finally judged pedestrians. As soon as any strong classifier judges a window a non-pedestrian, detection of that window ends, which improves detection efficiency;
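The cascade logic of step 10 can be sketched as below; each stage stands in for one trained Adaboost strong classifier, represented here as a hypothetical (score function, threshold) pair since the trained classifiers themselves are not given in the patent:

```python
def cascade_predict(window_features, stages):
    """Evaluate one window through a classifier cascade.

    `stages` is a list of (score_fn, threshold) pairs, each standing in
    for an Adaboost strong classifier. The window is rejected at the
    first stage whose score falls below its threshold; only a window
    passing every stage is labelled pedestrian.
    """
    for score_fn, threshold in stages:
        if score_fn(window_features) < threshold:
            return False  # early exit: cheap rejection of non-pedestrian windows
    return True
```

The early exit is what makes the cascade efficient: most sliding windows are rejected by the first few cheap stages.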
Step 11: the detection windows are sorted by confidence, and the bounding box with the highest score is selected.
Step 12: all remaining detection windows are traversed and the ratio of the overlap area with the current highest bounding box is calculated IoU, and exceeding a threshold (e.g., 0.5) suppresses the bounding box, i.e., deletes the detection window.
Step 13: from the bounding boxes that have not been suppressed, the highest scoring bounding box continues to be selected, and step 12 is repeated until all bounding boxes have been processed.
Claims (10)
1. A pedestrian detection method fusing image static characteristics and long-and-short-term motion characteristics is characterized by comprising the following steps:
1) extracting LUV color features of an input image:
converting the input RGB image into an LUV image, separating the chrominance components U and V and the luminance component L of the LUV image, and storing them separately as an L channel, a U channel and a V channel;
2) extracting gradient amplitude characteristics of an input image:
performing convolution on an input image in the X direction and the Y direction respectively, acquiring the gradient amplitude of each pixel in the image based on the gradient values of the pixel in the X direction and the Y direction, and storing the gradient amplitude of each point in the image as a gradient amplitude channel;
3) extracting directional gradient histogram features of an input image;
4) acquiring the optical flow between consecutive frames of the input image based on the Lucas-Kanade method;
5) aligning successive frames with a translation transform based on optical flow accumulation;
6) extracting long-time and short-time motion characteristics;
7) channel feature aggregation: performing feature aggregation on an L channel, a U channel, a V channel, a gradient amplitude channel, a directional gradient histogram channel and a long and short time motion feature channel;
8) extracting pedestrian candidate boxes based on Adaboost and a sliding-window method;
9) extracting pedestrian windows based on non-maximum suppression to complete pedestrian detection.
2. The pedestrian detection method fusing image static features and long-and-short-term motion features according to claim 1, wherein the step 3) is specifically as follows:
The gradient direction of each pixel in the image is acquired, the 0-180 degree range of gradient directions is equally divided into 6 intervals corresponding to six channels, the angle interval to which each pixel belongs is determined, and the gradient magnitude is used as the histogram weight to obtain the six-channel directional gradient histogram channel features.
3. The pedestrian detection method fusing image static features and long-and-short-term motion features according to claim 1, wherein the step 4) specifically comprises:
acquiring the optical flow between consecutive frames with the Lucas-Kanade method, based on the assumptions that the pixel value of a given point is unchanged between adjacent frames, that object motion between adjacent frames is very small, and that all pixels in the neighborhood of a point move consistently.
4. The pedestrian detection method fusing image static features and long-and-short-term motion features according to claim 1, wherein the step 5) specifically comprises:
according to an optical flow field between every two frames, the accumulated optical flows between the current frame and the previous 1, 2 and 3 frames of the current frame are respectively obtained, and the obtaining method comprises the following steps:
successively superposing adjacent optical flow fields in the x and y directions and then smoothing, obtaining the accumulated optical flow from the 1st, 2nd and 3rd previous frames to the current frame; the motion of each pixel in an accumulated optical flow field is then used as the translation element of a homography matrix, and a translation operation in the affine transformation of the image yields the past frame aligned with the current frame.
5. The pedestrian detection method fusing image static features and long-and-short-term motion features according to claim 1, wherein the step 6) specifically comprises:
a multi-frame temporal-difference feature calculation method is used to extract pedestrian motion features: when detecting on the current frame, a frame a longer time back is read, camera motion is eliminated, and a motion feature is extracted and recorded as the long-time motion feature; a frame a shorter time back is likewise read, camera motion is eliminated, and a motion feature is extracted and recorded as the short-time motion feature.
6. The pedestrian detection method fusing the static features and long and short time motion features of the images according to claim 1, wherein in the step 7), a non-overlapping block with a size of 2 × 2 pixels is used for performing feature aggregation on the L channel, the U channel, the V channel, the gradient magnitude channel, the direction gradient histogram channel and the long and short time motion feature channel, and the sum of pixel values in the block is used as the feature of the block.
7. The pedestrian detection method fusing image static features and long-and-short-term motion features according to claim 1, wherein the step 8) specifically comprises:
the windows are judged sequentially by a plurality of Adaboost strong classifiers; only a window judged positive by every strong classifier is finally judged a pedestrian, and detection of a window ends as soon as any strong classifier judges it a non-pedestrian.
8. The pedestrian detection method fusing the static features and the long and short time motion features of the image according to claim 1, wherein the step 9) specifically comprises the following steps:
91) sorting the detection windows by confidence and selecting the bounding box with the highest score;
92) traversing all remaining detection windows, computing the overlap-area ratio of each with the current highest-scoring bounding box, and suppressing, i.e. deleting, any detection window whose overlap ratio exceeds a threshold;
93) continuing to select the highest-scoring bounding box from those not yet suppressed, and repeating step 92) until all bounding boxes have been processed.
9. The pedestrian detection method fusing the static image features and the long-and-short-term motion features according to claim 8, wherein in the step 92), the threshold value of the overlapping area ratio is set to 0.5.
10. The method as claimed in claim 5, wherein in step 6), the longer time is specifically 8 frames before the current frame, and the shorter time is specifically 4 frames before the current frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010044364.5A CN111310565A (en) | 2020-01-15 | 2020-01-15 | Pedestrian detection method fusing image static characteristics and long-time and short-time motion characteristics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010044364.5A CN111310565A (en) | 2020-01-15 | 2020-01-15 | Pedestrian detection method fusing image static characteristics and long-time and short-time motion characteristics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111310565A true CN111310565A (en) | 2020-06-19 |
Family
ID=71147053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010044364.5A Pending CN111310565A (en) | 2020-01-15 | 2020-01-15 | Pedestrian detection method fusing image static characteristics and long-time and short-time motion characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111310565A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832495A (en) * | 2020-07-17 | 2020-10-27 | 中通服咨询设计研究院有限公司 | Method for detecting vehicle accident in video |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463191A (en) * | 2014-10-30 | 2015-03-25 | 华南理工大学 | Robot visual processing method based on attention mechanism |
CN106778478A (en) * | 2016-11-21 | 2017-05-31 | 中国科学院信息工程研究所 | A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character |
CN106874949A (en) * | 2017-02-10 | 2017-06-20 | 华中科技大学 | A kind of moving platform moving target detecting method and system based on infrared image |
CN107622258A (en) * | 2017-10-16 | 2018-01-23 | 中南大学 | A kind of rapid pedestrian detection method of combination static state low-level image feature and movable information |
CN109164450A (en) * | 2018-09-12 | 2019-01-08 | 天津大学 | A kind of downburst prediction technique based on Doppler Radar Data |
-
2020
- 2020-01-15 CN CN202010044364.5A patent/CN111310565A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463191A (en) * | 2014-10-30 | 2015-03-25 | 华南理工大学 | Robot visual processing method based on attention mechanism |
CN106778478A (en) * | 2016-11-21 | 2017-05-31 | 中国科学院信息工程研究所 | A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character |
CN106874949A (en) * | 2017-02-10 | 2017-06-20 | 华中科技大学 | A kind of moving platform moving target detecting method and system based on infrared image |
CN107622258A (en) * | 2017-10-16 | 2018-01-23 | 中南大学 | A kind of rapid pedestrian detection method of combination static state low-level image feature and movable information |
CN109164450A (en) * | 2018-09-12 | 2019-01-08 | 天津大学 | A kind of downburst prediction technique based on Doppler Radar Data |
Non-Patent Citations (2)
Title |
---|
DENNIS PARK: "Exploring Weak Stabilization for Motion Feature Extraction", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
FU Xinchuan: "Research on Key Technologies of Pedestrian Detection in Images", Wanfang Database *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832495A (en) * | 2020-07-17 | 2020-10-27 | 中通服咨询设计研究院有限公司 | Method for detecting vehicle accident in video |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Koresh et al. | Computer vision based traffic sign sensing for smart transport | |
Wu et al. | Lane-mark extraction for automobiles under complex conditions | |
Mithun et al. | Detection and classification of vehicles from video using multiple time-spatial images | |
CN105809184B (en) | Method for real-time vehicle identification and tracking and parking space occupation judgment suitable for gas station | |
CN109145798B (en) | Driving scene target identification and travelable region segmentation integration method | |
CN105005766A (en) | Vehicle body color identification method | |
CN103324958B (en) | Based on the license plate locating method of sciagraphy and SVM under a kind of complex background | |
Siogkas et al. | Random-walker monocular road detection in adverse conditions using automated spatiotemporal seed selection | |
KR20200119369A (en) | Apparatus and method for detecting object | |
Rabiu | Vehicle detection and classification for cluttered urban intersection | |
Dubuisson et al. | Object contour extraction using color and motion | |
KR100965800B1 (en) | method for vehicle image detection and speed calculation | |
Kardkovács et al. | Real-time traffic sign recognition system | |
CN111310565A (en) | Pedestrian detection method fusing image static characteristics and long-time and short-time motion characteristics | |
CN108133231B (en) | Scale-adaptive real-time vehicle detection method | |
Rahaman et al. | Lane detection for autonomous vehicle management: PHT approach | |
Yung et al. | Recognition of vehicle registration mark on moving vehicles in an outdoor environment | |
Uchiyama et al. | Removal of moving objects from a street-view image by fusing multiple image sequences | |
Priya et al. | Intelligent parking system | |
Wennan et al. | Lane detection in some complex conditions | |
Boliwala et al. | Automatic number plate detection for varying illumination conditions | |
Khalid et al. | A new vehicle detection method | |
CN110765877B (en) | Pedestrian detection method and system based on thermal imager and binocular camera | |
CN113705577A (en) | License plate recognition method based on deep learning | |
Karthiprem et al. | Recognizing the moving vehicle while driving on Indian roads |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200619 |