CN116363612A - Pedestrian tracking and street crossing intention prediction method based on image recognition - Google Patents


Info

Publication number
CN116363612A
CN116363612A (application CN202310186761.XA)
Authority
CN
China
Prior art keywords
pedestrian
crossing
intention
road
gesture
Prior art date
Legal status
Pending
Application number
CN202310186761.XA
Other languages
Chinese (zh)
Inventor
谢博亚
万千
彭国庆
Current Assignee
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202310186761.XA priority Critical patent/CN116363612A/en
Publication of CN116363612A publication Critical patent/CN116363612A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention relates to the field of traffic safety, and in particular to a pedestrian tracking and street-crossing intention prediction method based on image recognition. The system comprises a camera unit, an algorithm unit and a prompt unit. The camera unit comprises at least two groups of high-definition cameras that capture video of pedestrians crossing the road from coupled viewing angles. The algorithm unit comprises a coupled-view pedestrian flow detection module, a coupled-view pedestrian flow tracking module and a pedestrian street-crossing intention judgment module: the detection module extracts the 2D pose of each pedestrian from the multi-angle views captured by the camera unit, and the tracking module then generates a 3D pose model, tracks the pedestrian, and passes the 3D pose to the intention judgment module, so that the pedestrian's street-crossing intention is judged accurately, quickly and robustly. The prompt unit uses a vehicle-mounted electronic display screen with video and sound effects, or vehicle-mounted ETC sound effects, to prompt the driver to yield to pedestrians.

Description

Pedestrian tracking and street crossing intention prediction method based on image recognition
Technical Field
The invention relates to the field of traffic safety, in particular to a pedestrian tracking and crossing intention prediction method based on image recognition.
Background
With the rapid pace of urbanization and the rapid growth of motor vehicle ownership in China, collisions between pedestrians and vehicles have become increasingly serious, and pedestrians are now the second-largest group of traffic-accident casualties. Effectively safeguarding pedestrian traffic safety, reducing the number and severity of pedestrian traffic accidents, and improving road traffic efficiency are therefore key tasks of urban road traffic management.
At present, because the road network density of China's urban road traffic systems is relatively low while traffic volumes are large and pedestrian crossings are frequent, the traditional push-button pedestrian crossing control can no longer meet demand. Either the push-button control interferes heavily with motor vehicle traffic, greatly reducing road traffic efficiency and even causing congestion; or, because the crossing signal does not respond promptly, pedestrians lose patience or assume the button is broken, so red-light running by pedestrians is common and pedestrian-vehicle collisions become more prominent. Push-button control therefore cannot effectively guarantee pedestrian crossing safety, and there is a need for a system that can track pedestrian flow and predict street-crossing intention with high accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a coupled-view pedestrian flow tracking and street-crossing intention prediction method based on image recognition.
The system mainly comprises three units connected in sequence: a camera unit, an algorithm unit and a prompt unit;
the camera unit comprises at least two groups of high-definition cameras mounted on smart lamp posts on both sides of the crosswalk, which capture video of pedestrians crossing from multiple viewing angles;
the algorithm unit comprises a coupled-view pedestrian flow detection module, a coupled-view pedestrian flow tracking module and a pedestrian street-crossing intention judgment module, wherein the detection module extracts each pedestrian's 2D pose from the multi-angle views captured by the camera unit, and the 3D pose tracking module then generates a 3D pose model under the coupled views, tracks the pedestrian, and passes the 3D pose to the intention judgment module, which judges the pedestrian's street-crossing intention;
the prompt unit comprises a vehicle-mounted display that shows the 3D pose of the crossing pedestrian in real time, or a vehicle-mounted ETC unit, either of which prompts the motor vehicle driver to yield to pedestrians who intend to cross.
Further, the camera unit captures pedestrian crossing video through high-definition cameras on smart lamp posts under the following conditions:
the cameras are mounted at a height of 10 meters on the smart lamp posts on both sides of the crosswalk; the combined camera groups on the lamp posts fully cover all vehicles and pedestrians within 300 meters of the crosswalk, and the cameras are aimed at the crosswalk.
Further, the coupled-view pedestrian flow detection module identifies the movement patterns of multiple pedestrians on the road and generates 2D feature boxes on the original video.
The method comprises the following steps:
collecting pedestrian crossing data at multiple crosswalks over multiple time periods and annotating it manually;
classifying and collating the collected data and feeding it into YOLOv5 for training;
adjusting the network parameters according to the training error reported during training, and repeating training until the network's loss function reaches its minimum.
Further, the coupled-view pedestrian 3D pose tracking module uses a multi-way matching algorithm to cluster the 2D poses detected in the multi-view pedestrian motion footage captured by the camera groups, associates the 2D poses of the same pedestrian across different views, enforces cycle consistency of the detected 2D poses across views, and thereby accurately infers the 3D poses of multiple pedestrians, whose 3D poses are then tracked.
Further, the pedestrian street-crossing intention judgment module predicts the crossing intention of multiple pedestrians: a Bayesian network model for pedestrian crossing prediction is established, factors affecting pedestrian crossing are taken as node variables in the model, a conditional probability table is built for each node variable, and the crossing intention is predicted by inference to obtain the pedestrian crossing prediction result.
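A minimal sketch of the Bayesian inference step may help. The factor names and all probability values below are illustrative assumptions, not values from the patent, and the network is reduced to a naive-Bayes structure (conditionally independent factors given the intent) for brevity.

```python
# Hedged sketch: infer P(crossing intent | observed factors) from
# conditional probability tables. All numbers are made-up placeholders.

P_CROSS = 0.3  # assumed prior probability that a pedestrian intends to cross

# P(observation = True | intent) for each hypothetical factor node.
CPT = {
    "facing_road":    {True: 0.9, False: 0.4},
    "near_curb":      {True: 0.8, False: 0.3},
    "standing_still": {True: 0.2, False: 0.6},
}

def crossing_posterior(observations):
    """P(intent | observations), assuming factors independent given intent."""
    p_yes, p_no = P_CROSS, 1.0 - P_CROSS
    for factor, value in observations.items():
        p_obs_given_yes = CPT[factor][value]
        # Simplifying assumption: the likelihood flips when there is no intent.
        p_obs_given_no = 1.0 - CPT[factor][value]
        p_yes *= p_obs_given_yes
        p_no *= p_obs_given_no
    return p_yes / (p_yes + p_no)

p = crossing_posterior({"facing_road": True, "near_curb": True,
                        "standing_still": False})
print(round(p, 3))  # 0.959: strong evidence of crossing intent
```

A full Bayesian network would also encode dependencies between the factor nodes themselves; the CPT-lookup-and-normalize pattern is the same.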
Further, the prompt unit comprises a vehicle-mounted electronic display screen or vehicle-mounted ETC unit for presenting prompts. The algorithm unit computes whether a pedestrian has street-crossing intention; if so, the prompt unit sends a message to the vehicle-mounted electronic display screen and the vehicle-mounted ETC to prompt the vehicle to decelerate or stop and yield.
The invention has the following advantages: the pedestrian 3D pose tracking module generates the 3D pose model, tracks the pedestrian and passes the 3D pose to the street-crossing intention judgment module, so that the pedestrian's street-crossing intention is judged accurately, quickly and robustly; the prompt unit uses a vehicle-mounted electronic display screen with video and sound effects, or vehicle-mounted ETC sound effects, to prompt the driver to yield to pedestrians.
Drawings
FIG. 1 is a schematic diagram of the coupled-view pedestrian flow tracking and street-crossing intention prediction method based on image recognition;
FIG. 2 is a flow chart of the coupled-view pedestrian flow tracking and street-crossing intention prediction method based on image recognition;
FIG. 3 is a flow chart of a multi-view multi-person 3D pose estimation module provided by the present invention;
FIG. 4 is a schematic diagram of a feature pyramid network of the present invention.
Detailed Description
The technical solution in the embodiments of the invention is described clearly and completely below with reference to the drawings and the embodiments. It is apparent that the described embodiments are only some, not all, embodiments of the invention. The components of the embodiments generally described herein may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the claimed scope of the invention, but is merely representative of selected embodiments. All other embodiments obtained by a person skilled in the art without inventive effort, based on the embodiments of the present invention, fall within the scope of the present invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Also in the description of the present invention, the terms "first," "second," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
The image-recognition-based coupled-view pedestrian flow tracking and street-crossing intention prediction method is deployed on smart lamp posts on both sides of the road; cameras mounted on the lamp posts are connected to a server by wire.
Multiple smart lamp posts are used together in the same scene, and each lamp post may carry multiple cameras for handling different scenes. The camera device is provided with at least one communication interface, or is connected to the Internet of Vehicles, to establish a communication link between the camera device and the server.
The processor may be an integrated circuit chip with signal processing capability, and the multi-pedestrian crossing intention detection system and method may be implemented by hardware integrated logic circuits or software instructions in the processor. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The camera is used for shooting video and sending the video to the processor for processing or sending the video to the memory for storage.
As shown in fig. 1, the system mainly comprises three units connected in sequence: a camera unit, an algorithm unit and a prompt unit;
the camera unit comprises at least two groups of high-definition cameras mounted on smart lamp posts on both sides of the crosswalk, which capture video of pedestrians crossing from multiple viewing angles;
the algorithm unit comprises a coupled-view pedestrian flow detection module, a coupled-view pedestrian flow tracking module and a pedestrian street-crossing intention judgment module, wherein the detection module extracts each pedestrian's 2D pose from the multi-angle views captured by the camera unit, and the 3D pose tracking module then generates a 3D pose model under the coupled views, tracks the pedestrian, and passes the 3D pose to the intention judgment module, which judges the pedestrian's street-crossing intention;
the prompt unit comprises a vehicle-mounted display that shows the 3D pose of the crossing pedestrian in real time, or a vehicle-mounted ETC unit, either of which prompts the motor vehicle driver to yield to pedestrians who intend to cross.
Examples
Referring to fig. 2, a flow chart of the coupled-view pedestrian flow tracking and street-crossing intention prediction method based on image recognition provided by the embodiment of the invention, the multi-pedestrian crossing intention detection system and method comprise the following steps:
In step S1, a plurality of camera devices capture video and send it to the processor for processing, or to the memory for storage pending processing.
In step S2, the multi-view multi-person 3D pose estimation module in the algorithm unit receives the video input from the camera devices.
Referring to fig. 3, a flow chart of the multi-view multi-person 3D pose estimation module provided by the embodiment of the present invention, the module adopts a top-down approach and comprises the following steps:
In sub-step S21, after the video from the camera device is received, a bounding box is generated for each pedestrian appearing in the image, together with the positions of the key human-body nodes (i.e., the main joint positions of the body).
Referring to fig. 4, the double solid-line box at the middle left is the original picture. A bottom-up forward pass is first performed using a Feature Pyramid Network. In the forward pass the feature map changes size after some layers and keeps its size through others; consecutive layers that preserve the feature-map size are grouped into one stage (the solid-line boxes of different sizes on the left of fig. 4), and the output of the last layer of each stage is extracted to form the left part of fig. 4. The right part of fig. 4 is the top-down pathway, realized by upsampling. A lateral connection fuses (merges) the upsampled result with the feature map of the same size from the bottom-up pass, and the fused result is then convolved with a 3x3 convolution kernel to suppress the aliasing effect introduced by upsampling. The enlarged regions in fig. 4 show the lateral connections, where the main role of the 1x1 convolution kernel is to reduce the number of channels, and hence the number of feature maps, without changing the feature-map size.
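The upsample-merge-smooth fusion step can be illustrated with a toy 1-D example. This is a deliberate simplification: real FPNs operate on 2-D feature maps with learned 1x1 and 3x3 convolutions, while here upsampling is nearest-neighbour and "smoothing" is a fixed 3-tap average.

```python
# Toy 1-D sketch of one FPN lateral connection: upsample the top-down
# feature, add the same-size lateral feature, then smooth the sum.

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling: each value is repeated twice."""
    return [v for v in feat for _ in (0, 1)]

def lateral_merge(top_down, lateral):
    """Element-wise sum of the upsampled top-down path and the lateral path."""
    up = upsample2x(top_down)
    return [a + b for a, b in zip(up, lateral)]

def smooth3(feat):
    """Fixed 3-tap average, standing in for the 3x3 anti-aliasing convolution."""
    padded = [feat[0]] + feat + [feat[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(feat))]

coarse = [1.0, 3.0]             # small, semantically strong top-down map
fine = [0.5, 0.5, 0.5, 0.5]     # larger map from the bottom-up stage
merged = smooth3(lateral_merge(coarse, fine))
print(merged)
```

The output has the resolution of the fine map but carries information from the coarse one, which is exactly the point of the top-down pathway.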
In sub-step S22, the presence of a pedestrian is confirmed from the bounding boxes generated over the previous 3 frames, and the pedestrian is then tracked in the subsequent video.
The tracking module adopts a StrongSORT network, which improves pedestrian tracking in two respects: the appearance branch and the motion branch. The appearance branch uses a BoT network with ResNeSt50 as its backbone, pre-trained on the DukeMTMC-reID dataset to extract more discriminative features. The appearance state e_i^t of the i-th tracklet at frame t is updated with Equation 1 using the exponential moving average (EMA) method.
e_i^t = α · e_i^(t−1) + (1 − α) · f_i^t      (Equation 1)
where f_i^t is the appearance embedding of the current match and α = 0.9 is the momentum term.
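A plain-Python transcription of the EMA update may make the recursion concrete (the embedding dimension and values are illustrative):

```python
# Element-wise EMA appearance update: e_i^t = a*e_i^(t-1) + (1-a)*f_i^t.

ALPHA = 0.9  # momentum term from the description

def ema_update(prev_state, new_embedding, alpha=ALPHA):
    """Blend the old appearance state with the new embedding, element-wise."""
    return [alpha * e + (1.0 - alpha) * f
            for e, f in zip(prev_state, new_embedding)]

e_prev = [1.0, 0.0]   # appearance state of tracklet i at frame t-1
f_t = [0.0, 1.0]      # appearance embedding of the current match
e_t = ema_update(e_prev, f_t)
print(e_t)  # approximately [0.9, 0.1]: the old state dominates
```

With α = 0.9, the state changes slowly, which smooths out per-frame detection noise in the appearance features.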
The motion branch employs ECC for camera motion compensation. Meanwhile, an NSA Kalman filtering algorithm is adopted, whose noise-scaling formula is as follows:
R̃_k = (1 − c_k) · R_k
where R_k is the preset constant measurement-noise covariance and c_k is the detection confidence score in state k.
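The scaling rule is simple enough to transcribe directly (the covariance value below is an illustrative scalar; in the filter it is a matrix):

```python
# NSA noise scaling: high-confidence detections are trusted more,
# so their measurement noise is scaled down.

def nsa_noise(r_const, confidence):
    """R~_k = (1 - c_k) * R_k for a scalar measurement-noise covariance."""
    return (1.0 - confidence) * r_const

R = 4.0  # assumed preset constant measurement-noise covariance
high_conf = nsa_noise(R, 0.9)   # confident detection -> small noise
low_conf = nsa_noise(R, 0.1)    # uncertain detection -> large noise
print(high_conf < low_conf)
```

Inside the Kalman update, the smaller effective noise pulls the state estimate more strongly toward confident detections.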
For the appearance-based judgment, whether the pedestrian is crossing is determined from the match between the pedestrian's behavioural features and the corresponding key human-body node positions; for the motion-based judgment, whether the pedestrian is crossing is determined from the geometric relationship between the pedestrian's trajectory over the previous 30 frames and the road.
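The motion-side check can be sketched as follows. The road boundary is modelled as a vertical line x = ROAD_EDGE_X and the advance threshold is a made-up value; the patent fixes neither a coordinate convention nor thresholds, so both are assumptions for illustration.

```python
# Hedged sketch: judge crossing from the geometry of the last 30
# trajectory points relative to an assumed road-edge line.

ROAD_EDGE_X = 10.0  # hypothetical x-coordinate of the road boundary

def is_crossing(trajectory, edge_x=ROAD_EDGE_X, min_advance=1.0):
    """True if the last 30 (x, y) positions close in on or pass the edge."""
    recent = trajectory[-30:]
    if len(recent) < 2:
        return False
    start_gap = abs(edge_x - recent[0][0])
    end_gap = edge_x - recent[-1][0]
    advanced = start_gap - abs(end_gap) > min_advance  # moved toward the edge
    entered_road = end_gap <= 0                        # stepped past the edge
    return advanced or entered_road

walking_in = [(float(x), 0.0) for x in range(0, 12)]          # 0 -> 11, past edge
loitering = [(5.0 + (i % 2) * 0.1, 0.0) for i in range(30)]   # shuffling in place
print(is_crossing(walking_in), is_crossing(loitering))
```

A deployed system would use the actual lane geometry from the camera calibration rather than a single line, but the trajectory-versus-boundary test is the same idea.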
Pedestrian crossing behaviour falls mainly into four cases. First, normal: the pedestrian keeps a uniform pace and advances steadily throughout. Second, stop-or-hesitate: the pedestrian sees many vehicles while crossing and halts or hesitates. Third, mid-crossing acceleration: the pedestrian walks to the road centreline, sees vehicles approaching fast, and quickens the pace to finish crossing. Fourth, rush-then-slow: the pedestrian hurries or runs at first and, finding no oncoming vehicles after reaching the centreline, slows to a steady walk. A database is built for these different feature cases and used to train the model, which finally outputs whether each pedestrian is crossing the street.
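A rule-based stand-in for the four-case classification might look like the sketch below, which splits a per-frame speed sequence at the road centreline. The thresholds and category names are assumptions for illustration; the patent trains a model on a labelled database rather than using fixed rules.

```python
# Illustrative four-way classifier over per-frame pedestrian speeds (m/s),
# split into the segments before and after the road centreline.

def classify_crossing(speeds_before_mid, speeds_after_mid, steady_tol=0.2):
    avg = lambda s: sum(s) / len(s)
    v1, v2 = avg(speeds_before_mid), avg(speeds_after_mid)
    if min(speeds_before_mid + speeds_after_mid) < 0.1:
        return "stop-or-hesitate"          # case 2: halts when vehicles appear
    if abs(v2 - v1) <= steady_tol * max(v1, v2):
        return "normal"                    # case 1: uniform, steady pace
    if v2 > v1:
        return "accelerate-after-midline"  # case 3: speeds up to finish
    return "rush-then-slow"                # case 4: runs first, relaxes later

print(classify_crossing([1.2, 1.3, 1.2], [1.25, 1.2, 1.3]))  # normal
print(classify_crossing([1.2, 1.1, 1.2], [2.4, 2.6, 2.5]))   # accelerates
```

The learned model would consume richer features (pose keypoints, head orientation) than raw speed, but the four target categories are those listed above.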
In step S4, the obtained judgment result is fed back to the vehicle-mounted display screen or vehicle-mounted ETC as an image or voice prompt.
The invention and its embodiments have been described above without limitation; what is shown in the drawings is only one embodiment of the invention, and the invention is not limited thereto. In summary, those skilled in the art, having the benefit of this disclosure, will appreciate that the invention can be practised without the specific details disclosed herein.

Claims (6)

1. A pedestrian tracking and street-crossing intention prediction method based on image recognition, characterized in that: the system mainly comprises three units connected in sequence, namely a camera unit, an algorithm unit and a prompt unit;
the camera unit comprises at least two groups of high-definition cameras mounted on smart lamp posts on both sides of the crosswalk, which capture video of pedestrians crossing from multiple viewing angles;
the algorithm unit comprises a coupled-view pedestrian flow detection module, a coupled-view pedestrian flow tracking module and a pedestrian street-crossing intention judgment module, wherein the detection module extracts each pedestrian's 2D pose from the multi-angle views captured by the camera unit, and the 3D pose tracking module then generates a 3D pose model under the coupled views, tracks the pedestrian, and passes the 3D pose to the intention judgment module, which judges the pedestrian's street-crossing intention;
the prompt unit comprises a vehicle-mounted display that shows the 3D pose of the crossing pedestrian in real time, or a vehicle-mounted ETC unit, either of which prompts the motor vehicle driver to yield to pedestrians who intend to cross.
2. The pedestrian tracking and street-crossing intention prediction method based on image recognition according to claim 1, characterized in that the camera unit captures pedestrian crossing video through high-definition cameras mounted on smart lamp posts under the following conditions:
the cameras are mounted at a height of 10 meters on the smart lamp posts on both sides of the crosswalk; the combined camera groups on the lamp posts fully cover all vehicles and pedestrians within 300 meters of the crosswalk, and the cameras are aimed at the crosswalk.
3. The pedestrian tracking and street-crossing intention prediction method based on image recognition according to claim 1, characterized in that: the coupled-view pedestrian flow detection module identifies the movement patterns of multiple pedestrians on the road and generates 2D feature boxes on the original video;
the method comprises the following steps:
collecting pedestrian crossing data at multiple crosswalks over multiple time periods and annotating it manually;
classifying and collating the collected data and feeding it into YOLOv5 for training;
adjusting the network parameters according to the training error reported during training, and repeating training until the network's loss function reaches its minimum.
4. The pedestrian tracking and street-crossing intention prediction method based on image recognition according to claim 1, characterized in that: the coupled-view pedestrian 3D pose tracking module uses a multi-way matching algorithm to cluster the 2D poses detected in the multi-view pedestrian motion footage captured by the camera groups, associates the 2D poses of the same pedestrian across different views, enforces cycle consistency of the detected 2D poses across views, accurately infers the 3D poses of multiple pedestrians, and tracks the pedestrians' 3D poses.
5. The pedestrian tracking and street-crossing intention prediction method based on image recognition according to claim 1, characterized in that: the pedestrian street-crossing intention judgment module predicts the crossing intention of multiple pedestrians by establishing a Bayesian network model for pedestrian crossing prediction, taking factors affecting pedestrian crossing as node variables in the model, building a conditional probability table for each node variable, and predicting the crossing intention by inference to obtain the pedestrian crossing prediction result.
6. The pedestrian tracking and street-crossing intention prediction method based on image recognition according to claim 1, characterized in that: the prompt unit comprises a vehicle-mounted electronic display screen or vehicle-mounted ETC unit for presenting prompts; the algorithm unit computes whether a pedestrian has street-crossing intention, and if so, the prompt unit sends a message to the vehicle-mounted electronic display screen and the vehicle-mounted ETC to prompt the vehicle to decelerate or stop and yield.
CN202310186761.XA 2023-03-01 2023-03-01 Pedestrian tracking and street crossing intention prediction method based on image recognition Pending CN116363612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310186761.XA CN116363612A (en) 2023-03-01 2023-03-01 Pedestrian tracking and street crossing intention prediction method based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310186761.XA CN116363612A (en) 2023-03-01 2023-03-01 Pedestrian tracking and street crossing intention prediction method based on image recognition

Publications (1)

Publication Number Publication Date
CN116363612A true CN116363612A (en) 2023-06-30

Family

ID=86926822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310186761.XA Pending CN116363612A (en) 2023-03-01 2023-03-01 Pedestrian tracking and street crossing intention prediction method based on image recognition

Country Status (1)

Country Link
CN (1) CN116363612A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863728A (en) * 2023-07-21 2023-10-10 重庆交通大学 Signal timing method and system based on pedestrian pace classification


Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
WO2022022721A1 (en) Path prediction method and apparatus, device, storage medium, and program
CN110348445B (en) Instance segmentation method fusing void convolution and edge information
TWI677826B (en) License plate recognition system and method
KR102197946B1 (en) object recognition and counting method using deep learning artificial intelligence technology
CN109829400B (en) Rapid vehicle detection method
Lyu et al. Small object recognition algorithm of grain pests based on SSD feature fusion
CN106919939B (en) A kind of traffic signboard tracks and identifies method and system
US20230142676A1 (en) Trajectory prediction method and apparatus, device, storage medium and program
CN116363612A (en) Pedestrian tracking and street crossing intention prediction method based on image recognition
Yebes et al. Learning to automatically catch potholes in worldwide road scene images
Park et al. Vision-based surveillance system for monitoring traffic conditions
CN114973199A (en) Rail transit train obstacle detection method based on convolutional neural network
Gupta et al. Real-time traffic control and monitoring
Bourja et al. Real time vehicle detection, tracking, and inter-vehicle distance estimation based on stereovision and deep learning using YOLOv3
Ong et al. A Cow Crossing Detection Alert System.
Shafie et al. Smart video surveillance system for vehicle detection and traffic flow control
Liu et al. A review of traffic visual tracking technology
Oh et al. Development of an integrated system based vehicle tracking algorithm with shadow removal and occlusion handling methods
CN113869239A (en) Traffic signal lamp countdown identification system and construction method and application method thereof
CN113850111A (en) Road condition identification method and system based on semantic segmentation and neural network technology
Wang et al. Illuminating vehicles with motion priors for surveillance vehicle detection
Song et al. Method of Vehicle Behavior Analysis for Real-Time Video Streaming Based on Mobilenet-YOLOV4 and ERFNET
CN114022803B (en) Multi-target tracking method and device, storage medium and electronic equipment
CN112084928B (en) Road traffic accident detection method based on visual attention mechanism and ConvLSTM network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination