CN114530058A - Collision early warning method, device and system - Google Patents

Collision early warning method, device and system

Info

Publication number
CN114530058A
Authority
CN
China
Prior art keywords
moving
collision
motion
video stream
stream data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210207724.8A
Other languages
Chinese (zh)
Inventor
白勍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Original Assignee
Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd filed Critical Evergrande Hengchi New Energy Automobile Research Institute Shanghai Co Ltd
Priority to CN202210207724.8A
Publication of CN114530058A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a collision early warning method, device and system for solving the problem that a moving target has difficulty accurately detecting its surrounding environment and may therefore collide. The scheme comprises the following steps: acquiring video stream data collected in real time by at least one camera in a preset area; identifying at least two moving targets in the preset area according to the video stream data; respectively predicting the motion tracks of the at least two moving targets according to the video stream data; judging whether the motion tracks of the at least two moving targets are crossed within the same time period; and sending a collision early warning to at least one moving target whose motion tracks are crossed. According to the scheme, moving targets that may collide are determined from the video stream data collected by the camera in the preset area, and moving targets in the preset area can be detected accurately, so that the collision risk is effectively predicted and collision is avoided by sending early warnings to the moving targets in time.

Description

Collision early warning method, device and system
Technical Field
The invention relates to the field of safety control, in particular to a collision early warning method, device and system.
Background
In the field of automatic driving, vehicles often execute planning decisions according to the detected surrounding environment so as to realize automatic control of safe driving. However, under complex road conditions with many obstacles and pedestrians, the vehicle's ability to detect its surroundings is limited; sensing devices such as on-board cameras struggle to detect the surrounding environment accurately and comprehensively, so planning decisions may be wrong and traffic accidents may occur.
How to issue effective safety early warnings to a moving target so as to improve its safety is the technical problem to be solved by this application.
Disclosure of Invention
The embodiment of the application aims to provide a collision early warning method, device and system for solving the problem that a moving target has difficulty accurately detecting its surrounding environment and may therefore collide.
In a first aspect, a collision warning method is provided, including:
acquiring video stream data acquired by at least one camera in real time in a preset area;
identifying at least two moving targets in the preset area according to the video stream data;
respectively predicting the motion tracks of the at least two motion targets according to the video stream data;
judging whether the motion tracks of the at least two moving targets are crossed within the same time period;
and sending collision early warning to at least one moving target with crossed motion tracks.
In a second aspect, a collision warning apparatus is provided, including:
the acquisition module acquires video stream data acquired by at least one camera in real time in a preset area;
the identification module is used for identifying at least two moving targets in the preset area according to the video stream data;
the prediction module is used for predicting the motion tracks of the at least two motion targets according to the video stream data;
the judging module is used for judging whether the motion tracks of the at least two moving targets in the same time period are crossed;
and the early warning module is used for sending collision early warning to at least one moving target with crossed motion tracks.
In a third aspect, a collision warning system is provided, including:
the collision warning apparatus according to the second aspect;
the electronic equipment is in communication connection with the collision early warning device and is used for receiving collision early warning sent by the collision early warning device;
and the at least one camera is in communication connection with the collision early warning device and is used for acquiring video stream data in real time and sending the video stream data to the collision early warning device.
In a fourth aspect, an electronic device is provided, the electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fifth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the method as in the first aspect.
In the embodiment of the application, video stream data collected in real time by at least one camera in a preset area is acquired; at least two moving targets in the preset area are identified according to the video stream data; the motion tracks of the at least two moving targets are respectively predicted according to the video stream data; whether the motion tracks of the at least two moving targets are crossed within the same time period is judged; and a collision early warning is sent to at least one moving target whose motion tracks are crossed. According to the scheme, moving targets that may collide are determined from the video stream data collected by the camera in the preset area, avoiding the problem that a moving target may collide because the sensors it carries have insufficient environment sensing capability. By recognizing the video stream data collected by the camera, the accuracy of the determined moving targets and the predicted motion tracks can be improved, and the problem that potential collision risks cannot be identified because of wall occlusion, intersection blind areas and other terrain obstacles is avoided, so that collision risks can be effectively predicted and collision is avoided by sending early warnings to the moving targets in time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1a is a scene schematic diagram of a parking lot according to an embodiment of the present invention.
Fig. 1b is a schematic flow chart of a collision warning method according to an embodiment of the present invention.
Fig. 1c is a schematic view of an application scenario of a collision warning method according to an embodiment of the present invention.
Fig. 2 is a second flowchart of a collision warning method according to an embodiment of the present invention.
Fig. 3 is a third flowchart illustrating a collision warning method according to an embodiment of the present invention.
Fig. 4 is a fourth flowchart illustrating a collision warning method according to an embodiment of the present invention.
Fig. 5 is a fifth flowchart illustrating a collision warning method according to an embodiment of the present invention.
Fig. 6 is a sixth flowchart illustrating a collision warning method according to an embodiment of the present invention.
Fig. 7 is a seventh schematic flow chart of a collision warning method according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a collision warning apparatus according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a collision warning system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The reference numbers in the present application are only used for distinguishing the steps in the scheme and are not used for limiting the execution sequence of the steps, and the specific execution sequence is described in the specification.
The automatic driving technology is an automatic control technology that can be applied to vehicles. An automatic driving automobile (self-driving automobile) that applies the automatic driving technology is also called an unmanned automobile, a computer-driven automobile or a wheeled mobile robot, and is an intelligent automobile that realizes unmanned driving through a computer system.
The above-mentioned automatic driving car is often equipped with sensors such as a camera and a radar for detecting the environment in which the vehicle is located. According to the collected environmental information, the vehicle-mounted terminal plans a driving route to realize functions such as obstacle avoidance and vehicle speed control, thereby realizing automatic safe driving.
Autonomous Valet Parking (AVP) is a research and development direction in which mass production may first be realized at the current low-speed L4 level of automatic driving, mostly in scenes of remote one-key parking and remote one-key vehicle summoning for the last kilometer in outdoor and indoor parking lots. However, many problems remain in practical applications. For example, many parking lots of supermarkets and shopping malls today have varied building structures and layouts; indoor parking lots in particular are generally divided into multiple floors, have many turning ramps, load-bearing walls and upright columns, and are highly enclosed. In outdoor parking lots, parking spaces are usually dense and vehicles park closely in the empty spaces. Because the actual road conditions of indoor and outdoor parking lots are complex, a large number of obstacles exist, and the environmental information that the vehicle-mounted sensors can detect is limited, a potential collision object blocked by an obstacle is difficult to detect accurately, and the vehicle-mounted terminal may then execute wrong instructions and cause a collision.
In scenes where traffic road conditions are relatively standard, people generally obey traffic rules, and pedestrian and vehicle density is low, a single-vehicle intelligence scheme is generally adopted to implement AVP. Namely: the automatic driving domain controller of the unmanned vehicle fully automatically performs the whole process of environment perception, data fusion, high-precision positioning, prediction, planning decision and control. The disadvantage of this technique is that the vehicle's own sensors have a limited ability to sense the surrounding environment (for example limited scanning distance, occlusion, and the like, and they cannot effectively identify traffic obstacles). The AVP parking lot environment is relatively enclosed, and the vehicle cannot sense and identify other pedestrians, vehicles, moving objects and the like in time using its own sensors (for example when the vehicle turns on an up-and-down ramp of the parking lot, or in a "ghost probe" situation at a turning intersection of the parking lot where a pedestrian or vehicle suddenly emerges from behind an obstruction), so the existing single-vehicle intelligence technology causes potential safety hazards after the vehicle's AVP function is started in the parking lot.
The following description is made in conjunction with a practical application scenario. Fig. 1a is a schematic view of a parking lot, which includes a parking lot wall, a vehicle A and a vehicle B separated by the wall, and a drivable area marked by lane lines.
Assuming that vehicle A and vehicle B both have an automatic driving function, the surroundings can be sensed by ADAS (Advanced Driver Assistance System) sensors and the driving route is then planned automatically. The ADAS sensors of single-vehicle intelligence have a limited ability to perceive the surrounding environment. In a somewhat enclosed parking-lot AVP scene, because of signal occlusion the vehicle cannot sense and identify other pedestrians, vehicles, moving objects and the like in time (for example when the vehicle drives on an up-and-down ramp in the parking lot, or in a "ghost probe" situation at a turning intersection of the parking lot); the vehicle only senses the surrounding environment after passing the obstruction, then makes a decision and controls the execution of actions, which is easily too late and causes accidents. Therefore, the existing single-vehicle intelligence technology causes potential safety hazards after the vehicle's AVP function is started in the parking lot.
Based on the scenario shown in fig. 1a, the intended driving routes of vehicle A and vehicle B are shown: vehicle A turns left and vehicle B turns right, but the parking lot wall separates vehicle A from vehicle B, and each vehicle's own sensors cannot identify the other side. Only when the two vehicles have driven onto the route can each perceive the other, but by then there is no longer enough time to perform avoidance, and a collision is very likely to occur.
In order to solve the problems existing in the prior art, the embodiment of the application provides a collision early warning method. The method can be used for carrying out collision early warning on vehicles in the preset area, or can also be used for carrying out collision early warning on electronic equipment such as robots in the preset area. For example, the robot in the preset area may be a shopping guide robot that provides shopping guide services for customers in a shopping mall, or may be a cleaning robot having a cleaning function.
The execution subject of the scheme provided by the embodiment of the application can be a server or other electronic equipment with a data processing function. For convenience of description, the solution is described below with the edge cloud device as the execution subject.
As shown in fig. 1b, the method provided in the embodiment of the present application includes:
s11: and acquiring video stream data acquired by at least one camera in real time in a preset area.
Fig. 1c shows a schematic application scenario diagram of the embodiment of the present application. Based on the scene shown in fig. 1a, the present solution may arrange a camera in the parking lot to collect the pictures in the parking lot in real time, and send the video stream data to the edge cloud device in a wired or wireless manner. In practical application, the edge cloud device may be disposed inside or outside the parking lot.
The edge cloud device in this embodiment may be disposed in a position with good communication, and may receive video stream data uploaded by one or more cameras. For example, the edge cloud equipment arranged in the set position can receive video stream data collected by cameras in a plurality of parking lots within a preset range in real time, and provide a vehicle safety early warning function for the parking lots.
The position of the camera can be set according to actual requirements. For example, a camera arranged at a road intersection, such as a parking lot intersection, is used to collect images of the intersection region in real time, so as to address the problem that collisions are likely to occur in corner blind areas. The camera may also be disposed in an area with many or large obstacles, for example the position shown in fig. 1c; the angle at which the camera captures video is shown as the dashed trapezoid in fig. 1c. The camera may also be installed in an outdoor scene, for example on a curve with a short sight distance, on a mountain road with an extremely small curve radius, at an intersection with poor visibility (particularly an intersection without a signal), at a railroad crossing, or the like.
Taking a parking lot as an example, the cameras arranged in the parking lot are used to collect traffic environment video information within a certain range of the parking lot in real time. In general, multiple cameras need to be deployed in the parking lot; by deploying cameras at different positions, the area they cover can completely cover the whole parking lot, so that no blind areas remain in the parking lot.
The camera collects video information of the parking lot traffic environment in real time, and the Real Time Streaming Protocol (RTSP) is then used to push the compressed real-time video stream to a video streaming media service of the edge cloud over local area Ethernet. The video streaming media service may specifically be configured as a high-performance server cluster with a Linux operating system installed and a video streaming network protocol stack packaged. The video streaming media service forwards the captured video stream. Optionally, if necessary, video splicing may be performed first and the spliced video then forwarded. On one hand the stream can be forwarded to the video streaming media recording service, and on the other hand the multi-channel video segments can be forwarded to the video stream decoding service, so that the collected video stream data is fully utilized.
The video streaming media recording service can be configured as a high-performance server cluster with a Linux operating system installed, supporting various video storage formats and protocol conversion. The video files are stored in distributed object storage, such as S3/OSS/COS, and can be used for storing and playing back the historical traffic environment of the parking lot.
The video stream decoding service can be configured as a high-performance server cluster with parallel acceleration, a Linux operating system installed, and video coding and decoding software such as a video stream big data processing platform and FFmpeg installed on the upper layer. It can be used to convert video streams into a variety of playback formats. The processing of the video stream by the decoding service may include, for example, entropy decoding, inverse quantization (IQ), IDCT (Inverse Discrete Cosine Transform), frame prediction, filtering, DPB (Decoded Picture Buffer) management, and the like.
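A minimal sketch of how a decoding service might consume one forwarded RTSP stream is shown below; the stream URL, the frame sampling rate and the use of OpenCV as the decode front end are illustrative assumptions and not part of the patent.

```python
import cv2

# Hypothetical RTSP URL exposed by the video streaming media service (an assumption).
STREAM_URL = "rtsp://edge-cloud.example/parking-lot/cam01"

def read_frames(url: str):
    """Decode an RTSP stream frame by frame; OpenCV uses FFmpeg as its backend."""
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError(f"cannot open stream {url}")
    try:
        while True:
            ok, frame = cap.read()   # one decoded BGR frame
            if not ok:
                break                # stream ended or connection dropped
            yield frame
    finally:
        cap.release()

if __name__ == "__main__":
    for i, frame in enumerate(read_frames(STREAM_URL)):
        if i % 30 == 0:              # sample roughly once per second at 30 fps
            print("frame", i, frame.shape)
```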
S12: and identifying at least two moving objects in the preset area according to the video stream data.
In this step, the video stream data may be parsed to determine which objects in the preset area have moved, and the moved objects are determined as moving targets. The moving objects in the video stream data can be identified as moving targets based on a motion recognition function. In practical applications, a moving target may be a vehicle, a pedestrian, a robot, a pet, or the like.
S13: and respectively predicting the motion tracks of the at least two motion targets according to the video stream data.
Because the video stream data is obtained by shooting in real time by the camera, the video stream data can represent the appearance, the position, the historical motion track and other characteristics of the moving target. In this step, the motion trajectory of the moving object in the future period may be predicted from the features in the video stream data. The future time period is specifically a future time period relative to a time period of video stream data acquired by the camera in real time.
For example, the motion trajectory of the moving object in the history period can be used for predicting the motion trajectory in a future period. Specifically, multiple parameters such as the position, the moving direction, the moving speed, the moving acceleration and the like of the moving target in a historical time period can be determined according to the video stream data, and the track prediction of the moving target is realized by combining a Long short-term memory neural network (LSTM) based on the parameters. In addition, the prediction of the motion trail can also be realized through other types of pre-training models.
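As an illustration only, a minimal LSTM trajectory predictor along these lines might look as follows (TensorFlow/Keras); the history length, the feature set and the layer sizes are assumptions rather than values fixed by the patent, and a real deployment would train the model on recorded tracks from the preset area.

```python
import numpy as np
import tensorflow as tf

HIST_STEPS, FUT_STEPS, FEATURES = 20, 10, 6   # assumed history/future lengths and feature count

def build_trajectory_model() -> tf.keras.Model:
    """LSTM mapping a history of (x, y, vx, vy, ax, ay) samples to future (x, y) positions."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(HIST_STEPS, FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(FUT_STEPS * 2),      # (x, y) for each future step
        tf.keras.layers.Reshape((FUT_STEPS, 2)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    model = build_trajectory_model()
    history = np.random.rand(1, HIST_STEPS, FEATURES).astype("float32")  # dummy track
    predicted_track = model.predict(history)       # shape (1, FUT_STEPS, 2)
    print(predicted_track.shape)
```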
In addition, the multiple parameters of the moving object in the historical period can also be acquired by communicating with the moving object. Or the moving target is periodically reported to the database, and then the edge cloud equipment acquires the parameters corresponding to the moving target from the database according to the appearance or the identification characteristics of the moving target in the video stream data.
In this step, the predicted motion trajectory represents the position of the moving object in the future time period, and each point on the motion trajectory corresponds to at least one time instant in the future time period.
Optionally, in this step, the motion trajectory of the moving object may be periodically predicted based on video stream data acquired by the camera in real time. For example, the motion trail of the moving object in the future 5 seconds is predicted based on the latest 10-second video stream data acquired by the camera. The time length corresponding to the video stream data and the time length corresponding to the predicted motion trajectory may be preset according to actual requirements. In addition, the predicted motion trail can be corrected according to the video stream data acquired by the camera in real time.
S14: and judging whether the motion trails of the at least two motion targets in the same time interval are crossed.
In this step, whether the motion tracks of the at least two moving targets have an intersection is determined based on the above steps. If the motion tracks within the same time period are crossed, it indicates that the moving targets corresponding to the crossed tracks are at risk of collision.
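One simple way to test for such a crossing, given two trajectories predicted over the same future time steps, is to check whether the two targets come within a safety distance at any shared step. The sketch below uses a distance threshold rather than exact geometric path intersection, and the threshold value is an assumption.

```python
import numpy as np

def trajectories_cross(track_a: np.ndarray,
                       track_b: np.ndarray,
                       safety_radius: float = 1.5) -> bool:
    """track_a, track_b: arrays of shape (T, 2) holding predicted (x, y) positions
    for the same future time steps.  The tracks are treated as crossed if, at any
    shared step, the targets are closer than safety_radius (metres, assumed)."""
    steps = min(len(track_a), len(track_b))
    dist = np.linalg.norm(track_a[:steps] - track_b[:steps], axis=1)
    return bool((dist < safety_radius).any())

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 11)                                        # 0.5 s prediction steps
    vehicle = np.stack([4.0 * t, np.zeros_like(t)], axis=1)              # driving east
    pedestrian = np.stack([np.full_like(t, 10.0), 5.0 - 2.0 * t], axis=1)  # crossing the lane
    print(trajectories_cross(vehicle, pedestrian))   # True: both near (10, 0) at t = 2.5 s
```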
S15: and sending collision early warning to at least one moving target with crossed motion tracks.
In practical application, collision early warning can be sent to a moving target in various forms according to actual requirements. For example, the early warning information is directly sent to the moving target, for example, the early warning information is directly sent to the automatic driving vehicle to inform the automatic driving vehicle that collision may occur in a future time period, so that the automatic driving vehicle is favorable for avoiding in time.
Or the collision early warning can be sent to the moving target in the forms of sound, light, display pictures and the like. For example, an audible prompt, which may be a voice or warning tone, is played by a sound generating device near the moving object to remind pedestrians or other moving objects to pay attention to safety and avoid collision.
When the moving target is a vehicle, in this step, the edge cloud device may send a collision warning to the target vehicle through a wireless communication mode such as 4G/5G. Alternatively, the collision warning may be information sent in the form of sound, light, or the like. For example, the collision warning indicates that the vehicle emits a warning sound, controls the warning lamp to light, controls the vehicle-mounted display to display prompt information, and the like. The collision early warning can remind a driver, and driving safety is improved.
Alternatively, the collision warning may be warning information sent by the vehicle-mounted terminal. The collision early warning information is used for indicating the vehicle-mounted terminal to execute deceleration or brake so as to slow down or suspend an automatic driving strategy executed by the vehicle-mounted terminal, and therefore collision risks are avoided.
In the embodiment of the application, video stream data collected in real time by at least one camera in a preset area is acquired; at least two moving targets in the preset area are identified according to the video stream data; the motion tracks of the at least two moving targets are respectively predicted according to the video stream data; whether the motion tracks of the at least two moving targets are crossed within the same time period is judged; and a collision early warning is sent to at least one moving target whose motion tracks are crossed. According to the scheme, moving targets that may collide are determined from the video stream data collected by the camera in the preset area, avoiding the problem that a moving target may collide because the sensors it carries have insufficient environment sensing capability. By recognizing the video stream data collected by the camera, the accuracy of the determined moving targets and the predicted motion tracks can be improved, the problem that potential collision risks cannot be identified because of wall occlusion, intersection blind areas and other terrain obstacles is avoided, collision risks can be effectively predicted, and collision is avoided by sending early warnings to the moving targets in time.
When the moving target is a vehicle, the scheme can determine the target vehicle which is likely to generate collision through video stream data collected by the camera, and the problem of unsafe driving caused by insufficient environment perception capability of a vehicle sensor is avoided. By identifying the video stream data collected by the camera, the accuracy of the position relation between the determined vehicle and the corresponding potential collision object can be improved, and the problem that the potential collision object cannot be identified due to wall shielding, intersection blind areas and other terrain obstacles is avoided. According to the scheme, the speed information of the vehicle, the speed information of the potential collision object corresponding to the vehicle, the position relation between the vehicle and the potential collision object and other information can be determined according to the video stream data, so that the collision risk is effectively predicted, and the vehicle running safety is improved by timely sending collision early warning to the target vehicle. The potential collision object may be a vehicle, a pedestrian, a pet, or another object that may collide with the vehicle in a future period.
Based on the solution provided by the foregoing embodiment, optionally, the preset area includes a plurality of sub-areas, as shown in fig. 2, where the step S12 includes:
s21: and respectively determining the environmental data of the at least two moving objects according to the video stream data.
In this step, the moving object may refer to an object moving in the video stream data. Specifically, the moving object can be classified according to the shape of the moving object through an image classification detection service of the edge cloud device, so as to determine that the moving object is a vehicle, a pedestrian or other types of potential collision objects. Subsequently, the detected environmental information (also referred to as a world model) around the vehicle may be sent to a vehicle/pedestrian high-precision positioning service by an image classification detection service of the edge cloud device.
S22: comparing the feature data of a preset map with the environmental data of the at least two moving targets to respectively determine the position information of the at least two moving targets, wherein the position information of the moving targets represents the positions of the moving targets in the preset map.
In this step, the vector map and feature map data can be obtained from the high-precision map engine by the vehicle/pedestrian high-precision positioning service. The vehicle/pedestrian high-precision positioning service then uses a matching algorithm to perform matching calculation between the above world model and the high-precision map data based on vectors or features, thereby determining the specific position of the moving target in the parking lot. The matching algorithm may specifically be implemented with an image similarity algorithm, such as SIFT (Scale-Invariant Feature Transform) feature matching, perceptual hashing, and the like.
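For illustration, a sketch of SIFT-based matching between a camera view and a candidate patch of the feature map is given below; the patent names SIFT feature matching and perceptual hashing only as example algorithms, so this plain-OpenCV realisation and its ratio-test threshold are assumptions. The candidate patch with the most surviving matches would indicate the target's position in the preset map.

```python
import cv2

def count_map_matches(camera_view, map_patch, ratio: float = 0.75) -> int:
    """Count SIFT feature matches between a camera frame around the moving target
    and one candidate patch of the high-precision feature map."""
    sift = cv2.SIFT_create()
    _, desc_view = sift.detectAndCompute(camera_view, None)
    _, desc_map = sift.detectAndCompute(map_patch, None)
    if desc_view is None or desc_map is None:
        return 0
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(desc_view, desc_map, k=2)
    # Lowe's ratio test keeps only unambiguous matches.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```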
S23: and identifying at least two moving targets positioned in the same sub-area of the preset area according to the position information of the at least two moving targets.
The position information of the at least two moving objects may specifically include a position relationship between the at least two moving objects, and the position relationship may be a two-dimensional position relationship based on high-precision map data or a three-dimensional position relationship based on a vector high-precision map.
The sub-regions in this embodiment may refer to partial regions within the preset area, and multiple sub-regions in the preset area may at least partially overlap. In practical applications, a sub-region may specifically be a region of the preset area with blind fields of view and a high collision risk, such as a road intersection or a corner blocked by a wall. At least two moving targets located in the same sub-region may collide because they cannot comprehensively detect the surrounding environment, so focusing on them improves the effectiveness of the collision early warning in this scheme.
With the scheme provided by the embodiment of the application, high-precision positioning of moving targets such as vehicles and pedestrians can be realized based on the video stream data. In relatively enclosed scenes such as underground parking lots, communication quality is affected and traditional positioning functions often struggle to achieve accurate positioning. Moreover, conventional positioning usually has errors and is difficult to realize accurately. Because this scheme is based on video stream data, positioning can be realized by comparing map feature data with environmental data, that is, by positioning against real pictures acquired in real time, which can effectively improve positioning accuracy, further improve the accuracy of determining moving targets that may collide in subsequent steps, and realize effective collision early warning.
Based on the solution provided by the foregoing embodiment, optionally, as shown in fig. 3, the foregoing step S13 includes:
s31: and determining the categories of the at least two moving objects according to the video stream data respectively.
In this step, the moving objects may be classified according to their appearances to determine their categories. Specifically, the moving object may be classified according to the appearance image of the moving object in the video stream data to determine that the moving object is a pedestrian, a vehicle, a robot, a pet, or the like.
Optionally, the category of the moving object may be optimally corrected according to the video stream data. For example, the size of the moving object may be compared with the size of the surrounding environment object according to the video stream data, so as to estimate the actual size of the moving object, and the category to which the moving object belongs may be determined according to the actual size of the moving object. Or, determining the category of the moving object according to the appearance change of the moving object in the video stream data during the moving process.
Optionally, this step may be implemented by the image classification detection service of the edge cloud device. The image classification detection service may be configured as a GPU (Graphics Processing Unit) multi-card accelerated server cluster, on which a CUDA (Compute Unified Device Architecture) library and a deep learning engine such as TensorFlow may be installed, with a trained and verified CNN (convolutional neural network) model deployed on the upper layer. The model's processing for classifying and detecting moving targets such as vehicles and pedestrians in the parking lot traffic environment can include image input, convolution kernel processing, pooling, dropout, fully connected output processing, and the like. The real-time classification detection results can be input to the high-precision positioning service and the collision risk estimation service.
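A minimal CNN of the kind listed above (convolution, pooling, dropout, fully connected output) might be sketched as follows; the input size, layer widths and label set are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 4   # e.g. vehicle / pedestrian / robot / other -- an assumed label set

def build_classifier(input_shape=(128, 128, 3)) -> tf.keras.Model:
    """Small classification CNN: convolution + pooling + dropout + fully connected output."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
```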
In the embodiment of the invention, the purpose of classifying and detecting vehicles and pedestrians is that the immediate planning decision behavior of the moving target toward different types of potential collision objects can differ. For example: when the target vehicle turns right and a pedestrian happens to be walking on the right side, the immediate decision of the target vehicle is likely to be to stop and continue driving after the pedestrian has passed; but if the right side is a vehicle travelling at low speed that wants to make a left turn, the immediate decision of the target vehicle is likely to be to slow down, or to change lanes to the rightmost lane, and so on. Therefore, the category of the moving target determined by this scheme can be used to predict the motion track in subsequent steps and can also be sent to the moving target as collision early warning information to optimize its avoidance strategy.
S32: and respectively predicting the motion tracks of the at least two motion targets according to the categories of the at least two motion targets.
In practical application, the motion rules of different kinds of moving objects are often different. For example, pedestrians have strong flexibility and may change the moving direction at any time. In contrast, the vehicle is less maneuverable and has a limited steering angle. According to the scheme provided by the embodiment of the application, the category of the moving target is determined, and then the moving track of the moving target is predicted according to the category of the moving target, so that the accuracy of the predicted moving track can be effectively improved.
Based on the solution provided by the foregoing embodiment, optionally, as shown in fig. 4, the foregoing step S32 includes:
s41: if the category of the first moving object is a communicable electronic device, the moving parameters of the first moving object are acquired.
The communicable electronic device may be, for example, a vehicle or a robot having a communication function, and the communicable electronic device may transmit and receive information based on the communication function. In this step, the motion parameters may be directly or indirectly acquired from the first moving object based on the communication function of the first moving object.
In the following, description is made assuming that the first moving object is a vehicle. The motion parameter in this step may be information periodically reported by the vehicle, and specifically, the motion parameter may include speed information, where the speed information represents a speed of the vehicle, and the speed is vector information having a direction.
In addition, the motion parameters may carry vehicle identifiers representing vehicle identities, so as to distinguish different vehicles. Further, the speed information may specifically include a speed message, an acceleration message, a steering wheel angle message, and the like of the drive-by-wire chassis, so that the edge cloud can more accurately recognize the driving intention of the vehicle.
The speed information may be real-time information reported by the vehicle through a Telematics BOX (T-BOX) based on a preset period. Specifically, the information may be uploaded over a 4G/5G wireless cellular network, and the transmission protocol may use, for example, MQTT (Message Queuing Telemetry Transport).
Optionally, the edge cloud may deploy a load balancing reverse proxy service, for example implemented with Nginx + Keepalived, to receive the signals reported by vehicles. The load balancing reverse proxy can intelligently forward the signal data initiated by multiple vehicles to the secure access communication gateway according to a routing algorithm, which can be realized with an MQTT-Broker service. The secure access communication gateway is suitable for highly concurrent, secure access of multi-vehicle signals, and then publishes vehicle signal topic (Topic) messages to a distributed message queue, for example through a Kafka service. The distributed message queue has high throughput and can persist a large amount of vehicle signal data.
Furthermore, the Topic (Topic) of the distributed message queue can be subscribed to (subscribed) by a real-time message subscription parsing service, for example, implemented by Spark Streaming or Flink, and specific vehicle signals are parsed according to a vehicle bus transmission protocol, and then written into a vehicle signal caching service.
The vehicle signal caching service may be implemented by, for example, the key-value database Redis, and is configured to store the speed data reported by a vehicle, such as the vehicle's speed, acceleration and steering wheel angle. Subsequently, the speed information corresponding to a vehicle may be looked up in the stored data based on the vehicle identifier, and the driving route of the vehicle may be predicted from the stored data to determine whether the vehicle may collide.
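A sketch of such a cache is shown below, assuming a local Redis deployment, a key scheme of the form vehicle:<id>, and a 10-second expiry for stale signals (all three are assumptions).

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_vehicle_signal(vehicle_id: str, speed: float, accel: float, steering: float) -> None:
    """Store the latest reported speed data under a key derived from the vehicle identifier."""
    key = f"vehicle:{vehicle_id}"
    r.hset(key, mapping={
        "speed": speed,        # m/s
        "accel": accel,        # m/s^2
        "steering": steering,  # steering wheel angle, degrees
    })
    r.expire(key, 10)          # drop stale signals after 10 s

def load_vehicle_signal(vehicle_id: str) -> dict:
    """Look up the cached speed data for the collision risk estimation service."""
    return r.hgetall(f"vehicle:{vehicle_id}")
```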
Optionally, the vehicle signal caching service in the edge cloud device may be configured to input speed information of the vehicle, such as speed, acceleration, steering wheel angle, and the like, to the collision risk estimation service, so as to determine the target vehicle that may collide according to the relative position of the vehicle and the corresponding potential collision object, the speed information of the vehicle, and the speed information of the potential collision object.
Optionally, the collision risk estimation service of the edge cloud device may perform collision risk calculation and analysis according to the input information to determine a target vehicle that may collide. Specifically, the TTCEUIP (Time To Contact Estimation Using Interest Points) algorithm may be applied, including motion model building, calculation of dimensional changes, multi-model tracking (Kalman filtering), multi-model fusion decisions, and the like. Alternatively, a suitable algorithm may be selected to determine the target vehicle based on actual demand.
Optionally, the potential collision objects such as vehicles and pedestrians can be located through a vehicle/pedestrian high-precision location service, and the position information is input to a collision risk estimation service to predict a target vehicle which is likely to collide.
The collision risk estimation service may be a service function provided by an edge cloud device built-in module.
S42: and predicting the motion track of the first moving target according to the motion parameters of the first moving target.
In this step, the motion parameters of the first moving object may be input into a preset model (e.g., LSTM), and the motion trajectory of the first moving object in the future period may be predicted according to the output result of the model. Through the scheme provided by the embodiment of the application, the motion trail prediction in the future time period can be realized according to the motion parameters of the first motion target. Whether the first moving target has the risk of collision or not is deduced based on whether the moving tracks are crossed or not, and then collision early warning is accurately and efficiently carried out.
Based on the solution provided by the foregoing embodiment, optionally, as shown in fig. 5, the foregoing step S32 includes:
s51: if the category to which the second moving object belongs is not a communicable electronic device, performing motion estimation on the second moving object according to the video stream data to determine a motion parameter of the second moving object.
The second moving object is not a communicable electronic device, and for example, the second moving object may be a pedestrian, a pet, a vehicle without a communication function, a robot without a communication function, or the like.
Since the category to which the second moving object belongs is not a communicable electronic apparatus, it is difficult to acquire the moving parameters of the second moving object based on the communication manner. In this scheme, motion estimation is performed on the second moving object according to the video stream data to determine motion parameters of the second moving object.
For example, if the moving object is a pedestrian, the pedestrian may be subjected to camera-vision motion estimation. In practical application, an optical flow method, a block-based motion field, global motion estimation, particle filtering, a color histogram and the like can be adopted to obtain the motion direction and the velocity of the pedestrian so as to determine the velocity information of the pedestrian, and the velocity information is used as the motion parameter of the second motion object.
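As one possible realisation of the optical flow option, the sketch below estimates a pedestrian's image-plane velocity with OpenCV's dense Farneback flow; the bounding-box input and the pixels-per-second output are assumptions, and converting to metres per second would require the camera calibration.

```python
import cv2

def estimate_pedestrian_motion(prev_frame, frame, bbox, dt: float):
    """Mean (vx, vy) in pixels per second inside the pedestrian's bounding box (x, y, w, h),
    computed from dense optical flow between two frames dt seconds apart."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = bbox
    patch = flow[y:y + h, x:x + w]                 # flow vectors inside the box
    vx, vy = patch.reshape(-1, 2).mean(axis=0) / dt
    return float(vx), float(vy)
```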
S52: and predicting the motion track of the second moving target according to the motion parameters of the second moving target.
In this step, the motion parameters of the second moving object may be input into a preset model (e.g., LSTM), and the motion trajectory of the second moving object in the future period may be predicted according to the output result of the model. Through the scheme provided by the embodiment of the application, the motion trail prediction in the future time period can be realized according to the motion parameters of the second motion target. Whether the second moving target has the risk of collision or not is deduced based on whether the moving tracks are crossed or not, and collision early warning is accurately and efficiently carried out.
Based on the solution provided by the foregoing embodiment, as shown in fig. 6, before the foregoing step S15, optionally, the method further includes:
s61: and determining the collision Time of the third moving object with crossed motion tracks according To a Contact Time Estimation algorithm (Time To Contact Estimation Using Interest Points) based on the Interest Points.
Time-To-Collision (TTC) refers to the time from the moving target's current position until a collision occurs. The interest-point-based time-to-contact estimation algorithm in this step may specifically include four steps: first, the size change S of the moving target is estimated using the target's key points; then, a uniform motion model and an accelerated motion model of the moving target are established using S; next, the model parameters are tracked using an EKF (Extended Kalman Filter); finally, a multi-model fusion decision is adopted to calculate the final TTC collision time.
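A heavily simplified stand-in for the size-change part of this estimate is sketched below; it assumes a constant-velocity approach (TTC is approximately dt / (S - 1) for an apparent size ratio S over interval dt) and omits the keypoint tracking, EKF and multi-model fusion steps of the full algorithm.

```python
def ttc_from_size_change(size_prev: float, size_curr: float, dt: float) -> float:
    """Time-to-collision under a constant-velocity model, from the apparent size
    change of the target between two frames dt seconds apart."""
    scale = size_curr / size_prev        # S > 1 means the target is getting closer
    if scale <= 1.0:
        return float("inf")              # not approaching: no collision expected
    return dt / (scale - 1.0)

# Example: the target's keypoint span grows from 80 px to 84 px in 0.1 s -> TTC = 2.0 s
print(ttc_from_size_change(80.0, 84.0, 0.1))
```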
S62: and determining the preset safety decision time length of the third moving target.
The preset safety decision duration may specifically be a preset safety time length for the moving target to execute risk avoidance, and if the length of the collision time is greater than the preset safety decision duration, avoidance may be implemented by the moving target itself. For example, when the moving target is a vehicle, the vehicle can detect a potential collision object through sensors such as a radar and a camera, and avoid the potential collision object through deceleration, steering and the like. The preset safety decision durations of different vehicles are often different, and optionally, the preset safety decision durations can be acquired from the vehicles in advance by the edge cloud device, or the preset safety decision durations can be actively reported by the vehicles when the speed information is reported.
Wherein, the step S15 includes:
and S63, if the length of the collision time is less than the preset safety decision duration of the third moving target, sending a collision early warning to the third moving target.
If the length of the collision time is less than the preset safety decision duration of the moving target, it indicates that even if the moving target detects the potential collision object, the decision cannot be made in time and avoidance is executed. In the step, the collision early warning is sent to the third moving target, so that the collision early warning can be timely and effectively executed on the third moving target, the collision can be avoided by means of deceleration or braking and the like, the moving target does not need to make a decision according to the environment information detected by the moving target, and the avoidance instantaneity can be improved.
Optionally, in this embodiment of the application, the collision warning may specifically be collision warning information including a motion trajectory of a potential collision object. The collision early warning information enables the moving target to adjust the driving route according to the movement track of the potential collision object so as to realize avoidance. Or the collision early warning information may include a track intersection, that is, a position where the moving target may collide is notified, and the moving target may avoid collision through avoidance, deceleration, and other strategies.
According to the scheme provided by the embodiment of the application, collision early warning information is generated based on the motion track, and the motion track of the object which is possibly collided with the moving target can be informed, so that the moving target can adjust the driving strategy to realize avoidance, effective warning is realized, and the collision risk is reduced.
To further explain the scheme, a collision warning method provided by the embodiment of the present application is described below with reference to fig. 7.
As shown in fig. 7, the edge cloud device acquires video stream data collected by the camera, and acquires position information of a plurality of target moving objects from the high-precision positioning module and map scene information from the high-precision map engine. The high-precision positioning module is used for positioning moving targets such as vehicles or pedestrians in the video stream data. The high-precision map engine is used for acquiring map scene information corresponding to an area shot by the camera, and the map scene information can be a plane map or a vector map.
The position information of a plurality of target moving objects is mapped into a high-precision map, and whether the plurality of target moving objects are in the same road junction or not is checked and judged to determine whether collision is possible between the target moving objects or not. The target moving object is the moving object described in the above embodiment.
If not, ending the flow; and if the judgment result is that the objects are in the same intersection, continuously obtaining the category of the target moving object from the image classification detection module. The image classification detection module is used for classifying the target moving object according to the video image of the target moving object, for example, determining that the target moving object is a vehicle, a pedestrian or other objects.
Specifically, it is determined whether the target moving object is a vehicle or a pedestrian. If it is a vehicle, the speed/acceleration/steering wheel angle (course angle) information of the target vehicle is obtained from the vehicle signal cache, where the corresponding speed information can be looked up in the cache based on the vehicle's appearance, color, license plate, or other vehicle identifier.
If it is a pedestrian, computer-vision motion estimation is performed on the pedestrian; an optical flow method, a block-based motion field, global motion estimation, particle filtering, a color histogram and the like may be considered to obtain the pedestrian's speed/acceleration/course angle information as the pedestrian's speed information.
The speed information (which can include speed/acceleration/course angle information, etc.) of a plurality of target moving objects is input to an LSTM long-short term memory neural network to complete the track prediction of the plurality of target moving objects, and the LSTM network outputs the behavior tracks of the plurality of target moving objects.
It is then determined whether the behavior trajectories of the plurality of target moving objects cross. If not, the flow ends; if there is a crossing, the Time To Contact Estimation Using Interest Points (TTCEUIP) algorithm is used to estimate the collision time of the target moving objects. The TTCEUIP algorithm is divided into four steps: estimating the size change S of the target using the target's key points, establishing a uniform motion model and an accelerated motion model of the target using S, tracking the model parameters with an EKF (Extended Kalman Filter), and calculating the final TTC collision time with a multi-model fusion decision.
The value Th of the decision threshold is determined using an iteration method, a maximum histogram entropy threshold segmentation method, a maximum inter-class variance method, or the like. The decision threshold is the preset safety decision duration described in the above embodiment; the preset safety decision duration may be actively reported by the vehicle or acquired from the vehicle by the edge cloud device. Alternatively, the edge cloud device may estimate it from historical collision records.
Judging whether the TTC is smaller than a decision threshold Th, if so, indicating that a collision early warning condition is met, and sending collision early warning information to a collision early warning notification service, thereby realizing the safety early warning of the target vehicle and ending the process; if not, the target vehicle can avoid the potential collision object by itself, and the contact time estimation algorithm based on the interest point can be reused to estimate the collision time of other target moving objects.
The course angle, speed, time-to-collision, collision risk conclusion and the like can be sent to the collision early warning notification service. The collision early warning notification service formats and arranges the message notification, and then publishes a message topic (Topic) to the secure access communication gateway. The target vehicle then subscribes to the collision notification message topic (Topic) on the secure access communication gateway through the load balancing reverse proxy service, using the MQTT protocol over a 4G/5G wireless cellular network. After the vehicle is notified of the collision message, it adjusts its planning decision behavior in time to avoid collision.
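A sketch of this final notification hop is given below, assuming an MQTT broker reachable at the secure access communication gateway and an illustrative topic name (both assumptions), using the paho-mqtt client library.

```python
import json
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

BROKER = "gateway.edge-cloud.local"            # assumed gateway address
TOPIC = "avp/collision-warning/vehicle-A"      # assumed per-vehicle topic name

# Edge cloud side: publish the formatted warning to the vehicle's topic.
warning = {
    "course_angle_deg": 92.0,
    "speed_mps": 3.4,
    "ttc_s": 1.8,
    "risk": "collision_likely",
}
publish.single(TOPIC, json.dumps(warning), hostname=BROKER)

# Vehicle (T-BOX) side: block until one warning message arrives on the subscribed topic.
msg = subscribe.simple(TOPIC, hostname=BROKER)
print("collision warning received:", json.loads(msg.payload))
```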
According to the scheme provided by the embodiment of the application, the vehicles which are likely to collide are determined through the video stream data collected by the cameras within the range, and the problem of unsafe driving caused by insufficient environment perception capability of the vehicle sensors is avoided. By identifying the video stream data collected by the camera, the accuracy of the position relation between the determined vehicle and the corresponding potential collision object can be improved, and the problem that the potential collision object cannot be identified due to wall shielding, intersection blind areas and other terrain obstacles is avoided. According to the scheme, the target vehicle which is likely to generate collision is determined according to the speed information of the vehicle, the speed information of the corresponding potential collision object and the position relation between the vehicle and the potential collision object, the collision risk can be effectively predicted, and the driving safety of the vehicle is improved by timely sending early warning information to the target vehicle.
In order to solve the problems existing in the prior art, the embodiment of the present application further provides a collision early warning device 80, as shown in fig. 8, including:
the acquiring module 81 acquires video stream data acquired by at least one camera in real time in a preset area;
the identification module 82 is used for identifying at least two moving targets in the preset area according to the video stream data;
the prediction module 83 predicts the motion tracks of the at least two motion targets according to the video stream data;
the judging module 84 is used for judging whether the motion tracks of the at least two motion targets in the same time period are crossed;
and the early warning module 85 is used for sending collision early warning to at least one moving target with crossed motion tracks.
The collision early warning device provided by the embodiment of the application may be edge cloud equipment. It acquires video stream data collected in real time by at least one camera in a preset area; identifies at least two moving targets in the preset area according to the video stream data; respectively predicts the motion tracks of the at least two moving targets according to the video stream data; judges whether the motion tracks of the at least two moving targets are crossed within the same time period; and sends a collision early warning to at least one moving target whose motion tracks are crossed. According to the scheme, moving targets that may collide are determined from the video stream data collected by the camera in the preset area, avoiding the problem that a moving target may collide because the sensors it carries have insufficient environment sensing capability. By recognizing the video stream data collected by the camera, the accuracy of the determined moving targets and the predicted motion tracks can be improved, the problem that potential collision risks cannot be identified because of wall occlusion, intersection blind areas and other terrain obstacles is avoided, collision risks can be effectively predicted, and collision is avoided by sending early warnings to the moving targets in time.
In order to solve the problems existing in the prior art, an embodiment of the present application further provides a collision warning system, as shown in fig. 9, including:
the collision warning apparatus 91 according to any one of the above embodiments;
at least one electronic device 92 connected to the collision warning apparatus 91 for receiving the collision warning from the collision warning apparatus 91;
and the at least one camera 93 is in communication connection with the collision early warning device 91 and is used for acquiring video stream data in real time and sending the video stream data to the collision early warning device 91.
According to the system provided by this embodiment of the application, video stream data collected in real time by at least one camera in a preset area is acquired; at least two moving targets in the preset area are identified from the video stream data; the motion tracks of the at least two moving targets are respectively predicted from the video stream data; whether the motion tracks of the at least two moving targets intersect in the same time period is judged; and a collision early warning is sent to at least one moving target whose motion track intersects. In this scheme, the moving targets that are likely to collide are determined from the video stream data collected by the cameras in the preset area, which avoids the collisions that may otherwise result from the insufficient environment sensing capability of the sensors carried by the moving targets themselves. By recognizing the video stream data collected by the cameras, the accuracy of the identified moving targets and of the predicted motion tracks can be improved, the problem that a potential collision risk cannot be identified because of wall occlusion, intersection blind areas and other terrain obstructions is avoided, the collision risk can be effectively predicted, and collisions are avoided by sending an early warning to the moving targets in time.
The electronic device may specifically be a moving target, for example an autonomous vehicle or a robot. Alternatively, the electronic device may be a display screen, an audio player, an indicator light or the like that is installed in the scene and can give the warning in the form of an image, sound, light and so on.
If the electronic device is a moving target, the electronic device may communicate with the collision warning apparatus over 4G or 5G using MQTT (Message Queuing Telemetry Transport); the moving target may report its information in this manner and may also receive the collision warning issued by the collision warning apparatus in this manner. The camera may communicate with the collision warning apparatus over a local-area Ethernet using RTSP (Real Time Streaming Protocol).
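As an illustrative sketch of the camera link only, the edge side could pull the RTSP stream with a generic client such as OpenCV; the stream URL and credentials below are placeholders, not values from this disclosure.

```python
# Minimal sketch of reading a field-end camera's RTSP stream with OpenCV.
import cv2

STREAM_URL = "rtsp://user:password@192.168.1.64:554/stream1"  # hypothetical

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("failed to open RTSP stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real service would reconnect
    # each BGR frame would be passed to the identification and prediction modules

cap.release()
```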
The scheme provided by this embodiment of the invention can be applied to a parking lot. The system may comprise field-end cameras and an edge cloud device, where the edge cloud device serves as the collision early warning device and is used to perform dangerous-collision early warning analysis for the "last kilometer" of AVP (automated valet parking) in the parking lot and to notify the target vehicle in time. Information about nearby moving vehicles or pedestrians is delivered before the target vehicle's own sensors can scan the moving obstacle (for example, when a wall or another fixed obstruction occludes it), so that the target vehicle has enough margin to adjust its planning and decision-making. This effectively alleviates the insufficient environment sensing capability of single-vehicle intelligence in an enclosed parking lot (multi-storey ramp turns, "ghost probe" pedestrians suddenly emerging, and the like), and thereby avoids collision accidents.
The edge cloud in this embodiment of the invention can be deployed in the access layer of a network operator's network and can serve several nearby parking lots at the same time. Field-end cameras only need to be installed at specific positions in each parking lot according to the deployment requirements, and a 4G/5G/WiFi wireless network needs to be available; single-vehicle intelligence can then obtain the field-end plus edge-cloud assisted dangerous-collision early warning service, so the construction cost is very low and the scheme is widely applicable. Given that the existing V2X (V2V/V2I/V2P) technology is still immature and subject to many constraints, the long-tail problem of the "last kilometer" of AVP in parking lots can be effectively solved at low cost; the scheme can be widely applied to various scenes and has the advantages of low cost and high safety.
The modules in the apparatus and the devices in the system provided by the embodiment of the present application may also implement the method steps provided by the above method embodiment. Alternatively, the apparatus provided in the embodiment of the present application may further include other modules besides the modules described above, so as to implement the method steps provided in the foregoing method embodiments. The device provided by the embodiment of the application can achieve the technical effects achieved by the method embodiment.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the above-mentioned embodiment of the collision warning method, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the collision warning method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A collision warning method is characterized by comprising the following steps:
acquiring video stream data acquired by at least one camera in real time in a preset area;
identifying at least two moving targets in the preset area according to the video stream data;
respectively predicting the motion tracks of the at least two motion targets according to the video stream data;
judging whether the motion tracks of the at least two moving targets intersect in the same time period;
and sending collision early warning to at least one moving target with crossed motion tracks.
2. The method of claim 1, wherein the preset area comprises a plurality of sub-areas, and wherein identifying at least two moving objects within the preset area from the video stream data comprises:
respectively determining the environmental data of the at least two moving objects according to the video stream data;
comparing the feature data of a preset map with the environment data of the at least two moving targets to respectively determine the position information of the at least two moving targets, wherein the position information of the moving targets represents the positions of the moving targets in the preset map;
and identifying at least two moving targets positioned in the same sub-area of the preset area according to the position information of the at least two moving targets.
3. The method of claim 1, wherein predicting motion trajectories of the at least two moving objects from the video stream data, respectively, comprises:
determining the categories of the at least two moving objects according to the video stream data respectively;
and respectively predicting the motion tracks of the at least two motion targets according to the categories of the at least two motion targets.
4. The method of claim 3, wherein predicting the motion trajectories of the at least two moving objects according to the categories of the at least two moving objects respectively comprises:
if the category of the first moving object is a communicable electronic device, acquiring a moving parameter of the first moving object;
and predicting the motion track of the first moving target according to the motion parameters of the first moving target.
5. The method of claim 3, wherein predicting the motion trajectories of the at least two moving objects according to the categories of the at least two moving objects respectively comprises:
if the category to which a second moving object belongs is not a communicable electronic device, performing motion estimation on the second moving object according to the video stream data to determine the motion parameters of the second moving object;
and predicting the motion track of the second moving target according to the motion parameters of the second moving target.
6. The method of claim 1, wherein before issuing a collision warning to at least one moving object whose moving trajectory intersects, further comprising:
determining the collision time of a third moving target whose motion track intersects, according to a point-of-interest-based time-to-contact estimation algorithm;
determining a preset safety decision duration of the third moving target;
wherein sending a collision early warning to at least one moving target whose motion track intersects comprises:
and if the length of the collision time is less than the preset safety decision duration of the third moving target, sending a collision early warning to the third moving target.
7. A collision warning apparatus, comprising:
the acquisition module acquires video stream data acquired by at least one camera in real time in a preset area;
the identification module is used for identifying at least two moving targets in the preset area according to the video stream data;
the prediction module is used for predicting the motion tracks of the at least two motion targets according to the video stream data;
the judging module is used for judging whether the motion tracks of the at least two moving targets in the same time period are crossed;
and the early warning module is used for sending collision early warning to at least one moving target with crossed motion tracks.
8. A collision warning system, comprising:
the collision warning apparatus according to claim 7;
the electronic equipment is in communication connection with the collision early warning device and is used for receiving the collision early warning sent by the collision early warning device;
and the at least one camera is in communication connection with the collision early warning device and is used for acquiring video stream data in real time and sending the video stream data to the collision early warning device.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202210207724.8A 2022-03-03 2022-03-03 Collision early warning method, device and system Pending CN114530058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210207724.8A CN114530058A (en) 2022-03-03 2022-03-03 Collision early warning method, device and system

Publications (1)

Publication Number Publication Date
CN114530058A (en) 2022-05-24

Family

ID=81627245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210207724.8A Pending CN114530058A (en) 2022-03-03 2022-03-03 Collision early warning method, device and system

Country Status (1)

Country Link
CN (1) CN114530058A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243274A (en) * 2020-01-20 2020-06-05 陈俊言 Road collision early warning system and method for non-internet traffic individuals
CN112712733A (en) * 2020-12-23 2021-04-27 交通运输部公路科学研究所 Vehicle-road cooperation-based collision early warning method and system and road side unit
CN112700470A (en) * 2020-12-30 2021-04-23 上海智能交通有限公司 Target detection and track extraction method based on traffic video stream
CN113538968A (en) * 2021-07-20 2021-10-22 阿波罗智联(北京)科技有限公司 Method and apparatus for outputting information
CN113538917A (en) * 2021-07-29 2021-10-22 北京万集科技股份有限公司 Collision early warning method and collision early warning device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI831242B (en) * 2022-06-15 2024-02-01 鴻海精密工業股份有限公司 Vehicle collision warning method, system, vehicle and computer readable storage medium
CN115240471A (en) * 2022-08-09 2022-10-25 东揽(南京)智能科技有限公司 Intelligent factory collision avoidance early warning method and system based on image acquisition
CN115240471B (en) * 2022-08-09 2024-03-01 东揽(南京)智能科技有限公司 Intelligent factory collision avoidance early warning method and system based on image acquisition
CN115909749A (en) * 2023-01-09 2023-04-04 广州通达汽车电气股份有限公司 Vehicle operation road risk early warning method, device, equipment and storage medium
CN116071960A (en) * 2023-04-06 2023-05-05 深圳市城市交通规划设计研究中心股份有限公司 Non-motor vehicle and pedestrian collision early warning method, electronic equipment and storage medium
CN117079219A (en) * 2023-10-08 2023-11-17 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service
CN117079219B (en) * 2023-10-08 2024-01-09 山东车拖车网络科技有限公司 Vehicle running condition monitoring method and device applied to trailer service

Similar Documents

Publication Publication Date Title
US20200302196A1 (en) Traffic Signal Analysis System
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
CN114530058A (en) Collision early warning method, device and system
CN113165652B (en) Verifying predicted trajectories using a mesh-based approach
US10345822B1 (en) Cognitive mapping for vehicles
RU2767955C1 (en) Methods and systems for determining the presence of dynamic objects by a computer
US9672446B1 (en) Object detection for an autonomous vehicle
US20210035442A1 (en) Autonomous Vehicles and a Mobility Manager as a Traffic Monitor
US11294387B2 (en) Systems and methods for training a vehicle to autonomously drive a route
CN107949875B (en) Method and system for determining traffic participants with interaction possibilities
KR20190100407A (en) Use of wheel orientation to determine future career
CN113423627A (en) Operating an automated vehicle according to road user reaction modeling under occlusion
CN113160547B (en) Automatic driving method and related equipment
US11042159B2 (en) Systems and methods for prioritizing data processing
US10769799B2 (en) Foreground detection
US10160459B2 (en) Vehicle lane direction detection
CN112543877B (en) Positioning method and positioning device
US20200174474A1 (en) Method and system for context and content aware sensor in a vehicle
JP7537787B2 (en) Collision prevention method, device, server and computer program
EP3825958B1 (en) A new way to generate tight 2d bounding boxes for autonomous driving labeling
CN113496189B (en) Sensing method and system based on static obstacle map
KR20220134033A (en) Point cloud feature-based obstacle filtering system
KR20230152643A (en) Multi-modal segmentation network for enhanced semantic labeling in mapping
CN111105644A (en) Vehicle blind area monitoring and driving control method and device and vehicle road cooperative system
US11380109B2 (en) Mobile launchpad for autonomous vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220524