CN114078326A - Collision detection method, device, visual sensor and storage medium - Google Patents

Info

Publication number: CN114078326A (granted as CN114078326B)
Application number: CN202010838469.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: target, targets, current, depth information, probability
Inventors: 关喜嘉, 王邓江, 邓永强
Assignee (original and current): Beijing Wanji Technology Co Ltd
Legal status: Granted; Active

Classifications

    • G08G1/0129: Traffic data processing for creating historical data or processing based on historical data
    • G01S13/867: Combination of radar systems with cameras
    • G01S13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06T7/50: Image analysis; depth or shape recovery
    • G08G1/16: Traffic control systems for road vehicles; anti-collision systems


Abstract

The application relates to a collision detection method, a collision detection device, a visual sensor and a storage medium. The method comprises the following steps: acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment; acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor; acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment; predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment; and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target. By adopting the method, the probability of collision among the targets can be predicted, and the safe driving of the targets is ensured.

Description

Collision detection method, device, visual sensor and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a collision detection method and apparatus, a visual sensor, and a storage medium.
Background
In an intelligent transportation system, traffic participants provide real-time traffic information from various locations to a traffic information center through sensors and transmission equipment installed on roads, vehicles and the like. After obtaining and processing this information, the traffic information center can provide road traffic information and other travel-related information to the traffic participants, and travelers can then determine their travel mode, select a route and so on based on this information, so that the safety of traffic travel can be ensured.
In the related art, when real-time traffic information from various locations is provided to the traffic information center, image data in a road scene is generally collected by a monocular camera, and depth information such as distance in the road scene is collected by a radar; the camera and the radar then transmit the collected information to the traffic information center for processing.
However, in the above technology, when the radar fails, it is difficult to provide depth information to the traffic information center, so that the traffic information center cannot predict whether the targets on the road will collide, which creates a potential safety hazard.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a collision detection method, apparatus, visual sensor and storage medium capable of providing depth information, so that safe driving of targets can be ensured.
A collision detection method is applied to a visual sensor and comprises the following steps:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
In one embodiment, the acquiring the target pixel position of each of the plurality of targets on the two-dimensional image at the current time includes:
carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image;
and determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
In one embodiment, the determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target includes:
determining the bottom edge center point position of each target frame as the target pixel position of each target; alternatively, the position of each target frame is determined as the target pixel position of each target.
In one embodiment, the determining, according to the predicted depth information of each target, a probability of collision between each target and another target of the plurality of targets includes:
performing curve fitting processing on the depth information corresponding to each target and the predicted depth information to determine the running track of each target;
and determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
In one embodiment, the plurality of targets includes a current target and other targets; the determining the probability of collision between each target and other targets in the plurality of targets according to the moving trajectory of each target includes:
judging whether the running track of the current target and the running tracks of other targets have intersection points or not to obtain a first judgment result;
and determining the probability of collision between the current target and other targets according to the first judgment result.
In one embodiment, the determining, according to the first determination result, the probability of collision between the current target and another target includes:
and if the first judgment result is that the running track of the current target and the running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a first-level probability.
In one embodiment, the method further includes:
if the first judgment result is that the running track of the current target and the running tracks of other targets do not have intersection points, acquiring the actual size position of the current target and the actual size positions of other targets;
and determining the probability of collision between the current target and other targets according to the actual size position of the current target and the actual size positions of other targets.
In one embodiment, the acquiring the actual size position of the current target and the actual size positions of the other targets includes:
obtaining the position of the bottom edge of each target frame from the position of each target frame, and processing by using a mapping model to obtain depth information corresponding to the position of the bottom edge of each target frame;
constructing the length and width of the current target and the length and width of other targets according to the depth information corresponding to the positions of the bottom edges of the target frames;
acquiring the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual heights of other targets according to the height of each target frame;
and obtaining the actual size position of the current target according to the length, the width and the actual height of the current target, and obtaining the actual size positions of other targets according to the length, the width and the actual height of other targets.
In one embodiment, the determining the probability of collision between the current target and the other targets according to the actual size position of the current target and the actual size positions of the other targets includes:
judging whether the actual size position of the current target and the actual size positions of other targets are overlapped at the current moment and the subsequent moment to obtain a second judgment result;
and determining the probability of collision between the current target and other targets according to the second judgment result.
In one embodiment, determining the probability of collision between the current target and the other target according to the second determination result includes:
if the second judgment result is that the actual size position of the current target and the actual size positions of other targets are overlapped at any moment, determining the probability of collision between the current target and the other targets as a second-level probability; the second level probability is lower than the first level probability.
In one embodiment, the method further includes:
if the second judgment result shows that the actual size position of the current target and the actual size positions of other targets do not overlap at the current moment and at subsequent moments, acquiring a first course angle of the current target at the current moment and second course angles of the other targets at the current moment;
acquiring a first course angle variation of a current target at a subsequent moment and a second course angle variation of other targets at the subsequent moment;
predicting a new operation track of the current target and new operation tracks of other targets according to the first course angle, the first course angle variation, the second course angle and the second course angle variation;
and determining the probability of collision between the current target and other targets according to the new running track of the current target and the new running tracks of other targets.
In one embodiment, the predicting the new operation track of the current target and the new operation tracks of the other targets according to the first heading angle, the first heading angle variation, the second heading angle and the second heading angle variation includes:
determining new first predicted depth information of the current target at the subsequent moment and new second predicted depth information of other targets at the subsequent moment according to the first course angle, the first course angle variation, the second course angle and the second course angle variation;
performing curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target;
and performing curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
In one embodiment, the determining, according to the first heading angle, the first heading angle variation, the second heading angle and the second heading angle variation, the new first predicted depth information of the current target at the subsequent time and the new second predicted depth information of the other targets at the subsequent time includes:
performing mathematical operation processing on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at the subsequent time;
performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of other targets at the subsequent time;
determining new first predicted depth information of the current target at the subsequent moment based on the first predicted course angle and the first depth information of the current target;
and determining new second predicted depth information of other targets at the subsequent moment based on the second predicted course angles and the second depth information of other targets.
In one embodiment, the determining the probability of collision between the current target and the other targets according to the new operation trajectory of the current target and the new operation trajectories of the other targets includes:
judging whether the new running track of the current target and the new running tracks of other targets have intersection points or not;
if the new running track of the current target and the new running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a third-level probability; the third level probability is lower than the second level probability.
A collision detection device applied to a vision sensor, the device comprising:
the first acquisition module is used for acquiring the target pixel position of each target in the plurality of targets on the two-dimensional image at the current moment;
the depth information determining module is used for acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
the second acquisition module is used for acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
the prediction module is used for predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and the collision determining module is used for determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
A vision sensor comprising a camera, a memory and a processor, the memory storing a computer program which when executed by the processor effects the steps of:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
According to the collision detection method, the device, the visual sensor and the storage medium, the visual sensor obtains the target pixel position of each target in a plurality of targets on the two-dimensional image at the current moment, obtains the depth information of each target pixel position at the current moment using a preset mapping model, obtains the state information of each target at the current moment, predicts the subsequent state of each target based on the state information and the corresponding depth information of each target at the current moment to obtain the predicted depth information of each target at the subsequent moment, and determines the probability of collision between each target and the other targets in the plurality of targets according to the predicted depth information of each target. The mapping model comprises the mapping relation between pixel positions of the two-dimensional image and depth information of the point cloud of the radar sensor, and the state information is used for representing the moving state of the corresponding target at the current moment. In this method, the visual sensor can establish a mapping model containing the mapping relation between the pixel positions of the two-dimensional image and the depth information of the point cloud of the radar sensor, so that when the radar sensor cannot provide depth information, the visual sensor can obtain the depth information corresponding to the pixel position of each target at the current moment through the pre-established mapping model, predict the depth information of each target at subsequent moments according to the depth information at the current moment, and determine, according to the predicted depth information, whether the targets will collide at subsequent moments and the probability of collision, thereby ensuring the safe driving of each target.
Drawings
FIG. 1 is an internal block diagram of a vision sensor in one embodiment;
FIG. 2 is a schematic flow chart of a collision detection method in one embodiment;
FIG. 3 is a schematic flow chart of a collision detection method in another embodiment;
FIG. 4 is a schematic flow chart of a collision detection method in another embodiment;
FIG. 5 is a schematic flow chart of a collision detection method in another embodiment;
FIG. 6 is a schematic flow chart of a collision detection method in another embodiment;
FIG. 7 is a block diagram showing the structure of the collision detecting apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The collision detection method provided by the embodiments of the present application can be applied to the visual sensor shown in fig. 1. The visual sensor may be a monocular camera with computing capability, such as a gun-type camera, a dome-type camera or a ball-type camera. The visual sensor may comprise a processor, a memory, a communication interface, a display screen and an input device connected through a system bus, and may also comprise a camera. The camera is mainly used to collect image data of targets in a scene, and may be connected with the processor to transmit the collected image data to the processor for processing. The processor of the visual sensor is configured to provide computing and control capabilities. The memory of the visual sensor comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the visual sensor is used for wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement a collision detection method.
Those skilled in the art will appreciate that the configuration shown in fig. 1 is a block diagram of only a portion of the configuration associated with the present application and does not constitute a limitation on the visual sensor to which the present application is applied; a particular visual sensor may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
The execution subject of the embodiments of the present application may be the visual sensor, or may be a collision detection apparatus inside the visual sensor; in the following, the visual sensor is described as the execution subject.
In one embodiment, a collision detection method is provided. This embodiment relates to a specific process of how to obtain the depth information of each target at the current time, predict the depth information of each target at subsequent times according to the depth information at the current time, and determine the probability of collision among the targets according to the predicted depth information. As shown in fig. 2, the method may include the following steps:
S202, acquiring the target pixel position of each target in the plurality of targets on the two-dimensional image at the current moment.
The two-dimensional image may be a two-dimensional image of a road scene, which may be an outdoor road scene or an indoor amusement road scene. The targets here may be vehicles, pedestrians and the like in the road scene; they may be of the same type, for example all vehicles (trucks, cars, etc.), or of different types, for example one target is a vehicle and the other targets are pedestrians.
In addition, the target pixel position of each target on the two-dimensional image may be a position of a target frame where each target is located on the two-dimensional image, a position of one or more pixel points in the target frame where each target is located, a position of one or more pixel points on a frame boundary of the target frame, or the like.
Specifically, the vision sensor may perform target detection on each target on the two-dimensional image at the current time, so as to obtain a target pixel position corresponding to each target.
S204, acquiring depth information corresponding to each target pixel position at the current time by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor.
In this step, before obtaining the depth information of each target at the current time, optionally, whether the radar sensor fails may be detected first, and if the radar sensor fails, this step may be executed, that is, the step of obtaining the depth information corresponding to each target pixel position at the current time by using a preset mapping model is executed. The failure of the radar sensor refers to the conditions of data distortion, data loss, data unavailability and the like of the radar sensor caused by damage of the radar sensor, damage of partial data in data acquired by the radar sensor, weather environment and other objective reasons. The radar sensor may be a laser radar, a millimeter wave radar, etc., the laser radar may include an 8-line, 16-line, 24-line, 128-line laser radar, and the millimeter wave radar may be a 24G, 77G radar, etc.
In addition, the mapping model in this step may be a fitting mapping model or a deep learning model. Therefore, before the depth information of each target at the current time is obtained using the target pixel position of each target in this step, a fitting mapping model or a deep learning model between pixel positions and depth information may be established first.
When establishing the fitting mapping model or the deep learning model, historical image data and historical point cloud data of historical targets at each moment can be collected in the same scene. The historical image data comprises the positions of the pixel points of the historical targets and can be obtained by measurement with the visual sensor; the historical point cloud data comprises the depth information of the historical targets at these pixel points, and the depth information can represent the distance between a historical target and the collection equipment, which may be a radar sensor. The historical targets may be vehicles, pedestrians and the like in the scene. Then, the positions of the pixel points in the historical image data and the depth information in the point cloud data at the same moment are correlated to obtain the mapping relation between positions and depth information, so as to obtain the fitting mapping model. Similarly, the positions of the pixel points in the historical image data of the historical targets at the same moment can be used as the input of an initial deep learning model, the depth information in the historical point cloud data at that moment can be used as the label, and the initial deep learning model can be trained to obtain the deep learning model.
It should be noted that, in the case of roadside, the vision sensor and the radar sensor are usually installed at the same position on the roadside, so the depth information in the above-mentioned historical point cloud data can represent the distance between the historical target and the radar, and substantially the distance between the historical target and the vision sensor.
After the fitting mapping model or the deep learning model is established, the target pixel positions of the targets at the current moment can be input into the fitting mapping model or the deep learning model, and the depth information of the targets at the current moment can be obtained. The depth information obtained here refers to the distance between each point on each target and the vision sensor, that is, the position of each target in the real road scene can be obtained.
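As an illustrative aid (not part of the patent disclosure), a minimal Python sketch of such a fitting mapping model is given below, assuming the mapping is approximated by a nearest-neighbour lookup from historical pixel positions to their associated point-cloud coordinates; the class and parameter names are hypothetical.

```python
import numpy as np

class PixelDepthMapper:
    """Minimal pixel-position -> depth-information mapping (hypothetical sketch).

    Built from historical (u, v) pixel positions and the (x, y, z) point-cloud
    coordinates measured by the radar sensor at the same moments; a query returns
    the depth information of the nearest historical pixel.
    """

    def __init__(self, pixel_positions, depths):
        self.pixels = np.asarray(pixel_positions, dtype=float)  # shape (N, 2)
        self.depths = np.asarray(depths, dtype=float)           # shape (N, 3)

    def predict(self, query_pixels):
        query = np.atleast_2d(np.asarray(query_pixels, dtype=float))
        # Squared distance from every query pixel to every historical pixel.
        d2 = ((query[:, None, :] - self.pixels[None, :, :]) ** 2).sum(axis=-1)
        return self.depths[d2.argmin(axis=1)]                   # shape (M, 3)
```

A deep learning model trained with pixel positions as input and point-cloud depth information as labels, as described above, could play the same role as this lookup.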
S206, acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment.
In this step, the state information of each target at the current time may include information such as a speed and a heading angle of each target at the current time.
Specifically, after the depth information of each object at the current time is obtained, the state information of each object at the current time may also be obtained.
S208, predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment.
Specifically, after the depth information of each target at the current time and the state information of each target are obtained, the depth information of each target at one or more subsequent times can be predicted according to the state information of each target at the current time and the depth information of each target at the current time, and the predicted depth information of each target at the subsequent times is recorded.
For example, take the case where the state information of a target at the current time is a heading angle and a speed, and the depth information is the actual three-dimensional position of the target. Starting from the actual three-dimensional position of the target, the position change of the target from the current time to the next time can be obtained as the product of the speed and the elapsed time along the heading angle direction, and this position change can then be added to the actual three-dimensional position at the current time to obtain the actual three-dimensional position of the target at the next time, that is, the predicted depth information of the target at the next time. By analogy, the predicted depth information of the target at a plurality of subsequent moments can be predicted.
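For illustration only, the constant-velocity prediction described above could be sketched as follows; the planar position, the time step and the number of predicted moments are assumptions made for the example, and the heading angle is assumed to be measured from the x-axis in radians.

```python
import numpy as np

def predict_positions(position, speed, heading, dt, steps):
    """Advance a target along its heading at constant speed (hypothetical sketch)."""
    pos = np.asarray(position, dtype=float)           # current (x, y) from the mapping model
    direction = np.array([np.cos(heading), np.sin(heading)])
    # Position change from the current time to each subsequent time: speed * elapsed time.
    return [pos + direction * speed * dt * (k + 1) for k in range(steps)]
```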
S210, determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
Here, one or more pieces of predicted depth information of each target may be used, and the number of pieces of predicted depth information used by each target is generally equal.
Specifically, after obtaining the predicted depth information of each target at the subsequent time, optionally, the future moving trajectory and the contour size in the actual scene (or the size of the target frame) of each target may be obtained according to the predicted depth information of each target, so that whether any one target in each target collides with another target and the collision probability when the target collides may be obtained by comparing the future moving trajectory of each target and the contour size in the actual scene.
In the collision detection method described above, the visual sensor obtains the target pixel position of each target in a plurality of targets on the two-dimensional image at the current moment, obtains the depth information of each target pixel position at the current moment using a preset mapping model, obtains the state information of each target at the current moment, predicts the subsequent state of each target based on the state information and the corresponding depth information of each target at the current moment to obtain the predicted depth information of each target at the subsequent moment, and determines the probability of collision between each target and the other targets in the plurality of targets according to the predicted depth information of each target. The mapping model comprises the mapping relation between pixel positions of the two-dimensional image and depth information of the point cloud of the radar sensor, and the state information is used for representing the moving state of the corresponding target at the current moment. In this method, the visual sensor can establish a mapping model containing the mapping relation between the pixel positions of the two-dimensional image and the depth information of the point cloud of the radar sensor, so that when the radar sensor cannot provide depth information, the visual sensor can obtain the depth information corresponding to the pixel position of each target at the current moment through the pre-established mapping model, predict the depth information of each target at subsequent moments according to the depth information at the current moment, and determine, according to the predicted depth information, whether the targets will collide at subsequent moments and the probability of collision, thereby ensuring the safe driving of each target.
In another embodiment, another collision detection method is provided. This embodiment relates to a specific process of how to obtain the target pixel position of each target on the two-dimensional image at the current time. On the basis of the above embodiment, as shown in fig. 3, the above S202 may include the following steps:
s302, carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image.
The target detection may be performed by using a target detection algorithm, for example the YOLOv3 (You Only Look Once) target detection algorithm, the SSD (Single Shot MultiBox Detector) target detection algorithm, and the like.
Specifically, the two-dimensional image at the current time may be detected by using a target detection algorithm to obtain the target frame and the frame position of each target on the two-dimensional image at the current time. Of course, the confidence of each target frame (which indicates the probability that the object in the target frame is a target of a certain type), the type of each target (for example, whether a pedestrian is male or female, or whether a vehicle is a large truck or a small car), the identifier of each target, and the like may also be obtained.
S304, determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
The position of each target frame can be obtained as the positions of the pixel points inside the target frame and on its boundary, and the target pixel position of each target can be determined from these positions. Optionally, the bottom-edge center point position of each target frame may be determined as the target pixel position of the corresponding target. That is, the position of the pixel point corresponding to the bottom-edge center point of each target frame can be determined as the target pixel position of each target. Selecting the bottom-edge center point as the target pixel position is simple, and since the bottom edge is close to the ground, the selected point represents each target relatively accurately.
Of course, alternatively, the position of each target frame may be determined as the target pixel position of each target. That is, the position of each pixel point in each target frame and on the boundary can be directly used as the target pixel position of each target, or the position of each pixel point on the boundary of each target frame can be used as the target pixel position of each target. Therefore, the size information of the target in the actual scene can be conveniently obtained through the positions of all pixel points in the target frame and on the boundary.
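A minimal sketch of the bottom-edge center point computation is given below, assuming the detection box is given as (x_min, y_min, x_max, y_max) in image coordinates with y increasing downward (an assumption, not stated in the patent).

```python
def bottom_center(box):
    """Bottom-edge center pixel of a target frame (hypothetical sketch)."""
    x_min, y_min, x_max, y_max = box
    # With image y growing downward, the bottom edge of the box is at y_max.
    return ((x_min + x_max) / 2.0, y_max)
```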
The collision detection method provided in this embodiment may perform target detection on the two-dimensional image at the current time to obtain the position of the target frame corresponding to each target on the two-dimensional image, and obtain the target pixel position of each target according to the position of the target frame of each target. Thus, a data base can be provided for subsequently obtaining the depth information of each target; meanwhile, the mode of obtaining the target pixel position of each target is simpler, and the obtained pixel position is more accurate, so that the subsequent collision prediction process is quicker, and the prediction result is more accurate.
In another embodiment, another collision detection method is provided, and the embodiment relates to a specific process of how to determine the probability of collision of each target with other targets according to the predicted depth information of each target. On the basis of the above embodiment, as shown in fig. 4, the above S210 may include the following steps:
s402, performing curve fitting processing on the depth information and the predicted depth information corresponding to each target, and determining the running track of each target.
In this step, the curve fitting may be to directly connect the depth information of the same target at multiple times to obtain the moving trajectory of the target, or to smoothly connect the depth information of the same target at multiple times to obtain the moving trajectory of the target.
Specifically, the depth information of each target at the current time may be a three-dimensional coordinate of each target at the current time in a world coordinate system or a polar coordinate in a polar coordinate system, and the predicted depth information of each target may be a three-dimensional coordinate or a polar coordinate of each target at a subsequent time. Therefore, curve fitting can be carried out on the three-dimensional coordinates or polar coordinates of each target at the current moment and the subsequent moment, and the future running track of each target is obtained.
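The curve fitting step is not specified in detail in the patent; as one possible sketch, a low-order polynomial could be fitted through the planar coordinates of the current and predicted positions. The function name and the polynomial degree are assumptions made for the example.

```python
import numpy as np

def fit_trajectory(points, degree=2):
    """Fit a smooth curve through time-ordered (x, y) positions (hypothetical sketch)."""
    pts = np.asarray(points, dtype=float)
    deg = min(degree, len(pts) - 1)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=deg)  # y as a polynomial in x
    return np.poly1d(coeffs)                             # callable trajectory y = f(x)
```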
S404, determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
In this step, after obtaining the future operation trajectory of each target, optionally, the plurality of targets include a current target and other targets; assuming that there are a plurality of targets, any one of which is the current target, the targets other than the current target are all other targets. The vision sensor can judge whether the running track of the current target and the running tracks of other targets have intersection points or not to obtain a first judgment result; and determining the probability of collision between the current target and other targets according to the first judgment result. That is, the trajectory of the current target and one of the other targets may be considered as two connecting lines, and the connecting lines may be straight lines or curved lines, so that whether the two connecting lines intersect or not may be determined, and a determination result of whether the two connecting lines intersect or not is obtained and recorded as a first determination result.
In a possible embodiment, optionally, if the first determination result is that the operation trajectory of the current target and the operation trajectories of the other targets have an intersection, it is determined that the probability of collision between the current target and the other targets is the first-level probability. That is, if the future operation trajectory of the current target and the future operation trajectories of the other targets have an intersection, it indicates that the current target and the other targets may collide at a subsequent time, and the collision evaluation level is high, i.e. the probability of collision is high, which may be referred to as the first-level probability. The first-level probability may be a probability related to time, i.e. the probability is higher if the intersection occurs within a shorter time. For example, different weights may be given according to the levels at which the events occur; for example, the first-level probability may be A = 0.9 × log(t), the second-level probability may be A = 0.6 × log(t), and so on, where t is the time, i.e. the time difference between the current time and the next time.
In another possible implementation, if the future trajectories of the two targets do not have an intersection, then there is a possibility that the two targets will not collide in the future, but there is also a possibility that the two targets will collide, i.e. the probability of collision may not be too high, and then further determination may be continued by other means, which will be further described in the following examples.
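As an illustration of the first judgment (whether two operation trajectories have an intersection point), the following sketch checks whether two sampled trajectories cross by treating each as a polyline; this is one possible realization, not the patent's specified algorithm.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (orientation test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def trajectories_intersect(track_a, track_b):
    """First judgment result: do the two sampled trajectories have an intersection point?"""
    for i in range(len(track_a) - 1):
        for j in range(len(track_b) - 1):
            if segments_intersect(track_a[i], track_a[i + 1], track_b[j], track_b[j + 1]):
                return True
    return False
```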
The collision detection method provided in this embodiment may perform curve fitting on the depth information of each target at multiple times to obtain the movement track of each target, and determine the probability of collision of each target according to the movement track of each target. By the method, the probability of collision of each target can be determined through the running track of each target, so that the method is simple and rapid, the speed of collision detection can be increased, and the method is intuitive.
In another embodiment, another collision detection method is provided, and this embodiment relates to a specific process of how to further judge the probability of collision according to the actual contour size positions of the two targets when the running tracks of the two targets do not have an intersection. On the basis of the above embodiment, as shown in fig. 5, the method may further include the following steps:
and S502, if the first judgment result shows that the running track of the current target and the running tracks of other targets do not have intersection points, acquiring the actual size position of the current target and the actual size positions of other targets.
In this step, the following description takes one of the other targets as an example.
When the operation trajectories of the current target and the other target do not have an intersection point, it can be further determined whether the current target and the other target will still collide in the future and what the probability of collision is. Optionally, the size and position of the current target and the other target in the actual scene, that is, their actual size positions, may be obtained through the following steps A1-A4:
a1, obtaining the position of the bottom edge of each target frame from the positions of each target frame, and processing the positions by using the mapping model to obtain the depth information corresponding to the position of the bottom edge of each target frame.
A2, constructing the length and width of the current object and the length and width of the other object according to the depth information corresponding to the position of the bottom side of each object frame.
And A3, obtaining the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual heights of other targets according to the height of each target frame.
And A4, obtaining the actual size position of the current target according to the length and width and the actual height of the current target, and obtaining the actual size positions of other targets according to the length and width and the actual height of other targets.
In A1-A4, the mapping model may map the position of a pixel to the depth information of the pixel, so that the positions of all pixels in the current target frame may be input into the mapping model, and the depth information of all pixels in the current target frame may be obtained. Of course, the positions of the pixel points on each boundary of the current target frame may be selected from all the pixel points in the current target frame, and then the depth information of the pixel points on each boundary of the current target frame is obtained through the mapping model; the depth information of the pixel points on each boundary of the current target frame is also the depth information corresponding to the boundary frame of the current target.
After the positions of the pixel points in the target frame corresponding to the targets are obtained, the depth information corresponding to the pixel points can be obtained by using the mapping model, so that the depth information of the points of the boundary frame of the target frame can be obtained, and the depth information of the points at the bottom of the boundary frame of the target frame can be obtained.
After the depth information of each point at the bottom edge of each bounding box is obtained, the depth information of each point at the bottom edge is an (x, y, z) three-dimensional coordinate, so that the length and the width of the target can be constructed from the three-dimensional coordinates of the points on the bottom edge. For the height of the target, a corresponding calculation can be performed using the different pixel heights on the established fitting curves, for example h1 = Σk αk·h2,k, where h1 is the actual height of the target, h2 is the target pixel height (i.e. the height of the bounding box here), k is the index of the point, and α is a height scaling factor that can be inferred from the actual situation and is treated as a known quantity here. Of course, the actual height of the target may also be obtained in other ways, for example by deriving a geometric relationship. In short, by the above method, the length, width and height of each target can be obtained, and combining the length, width and height of each target gives the size of each target, that is, the actual size position of each target.
For example, taking two-dimensional coordinates in an actual scene as an example, assume that the target frame of the current target is a rectangle with 4 corner points, and assume that the coordinates of the four corner points (upper left, lower left, lower right and upper right, respectively) in the actual scene are the two-dimensional coordinates (1,9), (1,3), (5,3) and (5,9). Then the region framed by these four corner points is the actual size position where the current target is located, and the length of the frame is 4 and the width is 6.
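For illustration, the construction of the actual size from the bottom-edge depth information and the pixel heights could be sketched as below; the axis-aligned footprint and the weight vector alpha are assumptions made only for the example.

```python
import numpy as np

def footprint_length_width(bottom_points_3d):
    """Length and width from the (x, y, z) points of a target frame's bottom edge (sketch)."""
    pts = np.asarray(bottom_points_3d, dtype=float)[:, :2]   # keep the planar x, y coordinates
    extent = pts.max(axis=0) - pts.min(axis=0)
    length, width = sorted(extent, reverse=True)
    return length, width

def actual_height(pixel_heights, alpha):
    """Actual height h1 = sum_k alpha_k * h2_k, with h2_k the pixel heights (sketch)."""
    return float(np.dot(np.asarray(alpha, dtype=float), np.asarray(pixel_heights, dtype=float)))
```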
S504, determining the probability of collision between the current target and other targets according to the actual size position of the current target and the actual size positions of other targets.
The actual size positions of the current target and other targets in the actual scene may be unchanged at any time, and then the probability of collision between the two targets may be determined by judging the actual size positions of the two targets at the current time or at a subsequent time.
Optionally, whether the actual size position of the current target overlaps with the actual size positions of other targets at the current time and at subsequent times can be judged to obtain a second judgment result; and determining the probability of collision between the current target and other targets according to the second judgment result. That is, after the actual size positions of the current object and the other objects are obtained, it may be determined whether the actual size positions of the two objects overlap at any subsequent time, and a second determination result of whether the two objects overlap is obtained.
In a possible implementation manner, optionally, if the second determination result is that the actual size position of the current target and the actual size positions of the other targets overlap at any one time, determining that the probability that the current target and the other targets collide is a second-level probability; the second level probability is lower than the first level probability. That is, if the actual size positions of the current target and the other targets at any subsequent time are overlapped, where the overlap may be that the bounding boxes of the two targets in the actual scene have an overlapped part, and the overlapped part is not 0, it indicates that the current target and the other targets may collide at the subsequent time, and the collision evaluation level is higher, that is, the probability of collision is relatively higher, which may be referred to as a second-level probability, where the second-level probability is smaller than the first-level probability.
In another possible implementation, if the actual size positions of two targets do not overlap, it is possible that the two targets will not collide in the future, but it is also possible that the collision will occur, i.e. the probability of collision may not be too high, and further determination may be continued by other means, which are also further explained in the following examples.
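The second judgment (whether the actual size positions overlap at the current or a subsequent moment) amounts to an overlap test between the two targets' footprints; a minimal axis-aligned sketch, with the footprint representation assumed for the example, is:

```python
def footprints_overlap(box_a, box_b):
    """True if two axis-aligned footprints (x_min, y_min, x_max, y_max) share any area (sketch)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def second_judgment(footprints_a, footprints_b):
    """Check overlap at the current and each subsequent moment (one footprint per moment)."""
    return any(footprints_overlap(a, b) for a, b in zip(footprints_a, footprints_b))
```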
According to the collision detection method provided by the embodiment, when the running tracks of the current target and the other targets do not have intersection points, the actual size positions of the current target and the other targets can be obtained, and the probability of collision between the current target and the other targets is determined according to the actual size positions of the two targets. By the method, when the tracks of the two targets do not have an intersection point, the probability of collision of the two targets can be further judged by adopting the actual size position, so that the finally obtained probability of collision of the two targets can be more accurate through further judgment, and safe driving between the targets can be further ensured.
In another embodiment, another collision detection method is provided, and this embodiment relates to a specific process of how to further obtain new trajectories of two targets through the course angles of the two targets when the actual size positions of the two targets do not overlap, and further judge the probability of collision between the two targets through the new trajectories. On the basis of the above embodiment, as shown in fig. 6, the method may further include the following steps:
s602, if the second judgment result shows that the actual size position of the current target and the actual size positions of other targets do not overlap at the current moment and at subsequent moments, a first course angle of the current target at the current moment and second course angles of the other targets at the current moment are obtained.
S604, acquiring a first course angle variation of the current target at the subsequent time and a second course angle variation of other targets at the subsequent time.
In steps S602-S604, when the actual size positions of the current target and the other targets do not overlap, it may be further determined whether the current target and the other targets will collide in the future and what the probability of collision is. Optionally, any axis in the world coordinate system may be taken as a reference axis (it may be any coordinate axis of the world coordinate system, such as the x-axis, the y-axis or the z-axis), the included angle between the position vector change of the current target at the current time and the reference axis is calculated, and the obtained included angle is determined as the heading angle of the current target at the current time and is recorded as the first heading angle. The position vector change of the current target at the current time may be obtained, for example, as the difference between the position vector of the current target at the current time and its position vector at the previous time (the current position vector of the current target may be obtained by connecting the three-dimensional coordinates of the origin of the world coordinate system and the three-dimensional coordinates of the current target at the current time). The heading angle of the current target at the previous time is calculated in the same way, and the difference between the heading angles at the previous time and the current time gives the heading angle variation of the current target between the two times, which is recorded as the first heading angle variation at the current time; the first heading angle variation can also be preset and simply retrieved when needed.
Similarly, the course angles of the other targets at the current moment can be obtained by the same method used for the current target and are recorded as the second course angles, and the second course angle variations of the other targets are obtained in the same way.
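By way of illustration and not limitation, one possible realization of steps S602-S604 is sketched below in Python, assuming the world-coordinate x-axis as the reference axis and three-dimensional target positions already recovered from the depth information; the function and variable names are illustrative assumptions rather than elements of the method.

```python
import numpy as np

def heading_angle(prev_pos, curr_pos, ref_axis=np.array([1.0, 0.0, 0.0])):
    """Course angle as the included angle between the position-vector change
    (current position minus previous position) and a reference axis of the
    world coordinate system (the x-axis is assumed here)."""
    delta = np.asarray(curr_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    cos_theta = np.dot(delta, ref_axis) / (np.linalg.norm(delta) * np.linalg.norm(ref_axis))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Course angles of one target at the previous and current moments, and the
# course-angle variation between the two moments (the "first course angle variation").
pos_prev2, pos_prev1, pos_curr = [0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [2.0, 1.2, 0.0]
angle_prev = heading_angle(pos_prev2, pos_prev1)
angle_curr = heading_angle(pos_prev1, pos_curr)   # first course angle at the current moment
angle_change = angle_curr - angle_prev            # first course angle variation
print(angle_curr, angle_change)
```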
S606, predicting the new running track of the current target and the new running tracks of the other targets according to the first course angle, the first course angle variation, the second course angle, and the second course angle variation.
In this step, when the new running tracks of the current target and the other targets are obtained, optionally, the following steps A1-A3 may be adopted:
A1, determining new first predicted depth information of the current target at subsequent times and new second predicted depth information of the other targets at subsequent times according to the first course angle, the first course angle variation, the second course angle, and the second course angle variation.
In this step, optionally, mathematical operation processing may be performed on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at the subsequent time; performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of other targets at the subsequent time; determining new first predicted depth information of the current target at the subsequent moment based on the first predicted course angle and the first depth information of the current target; and determining new second predicted depth information of other targets at the subsequent moment based on the second predicted course angles and the second depth information of other targets.
The mathematical operation here may be addition or subtraction, with addition being the typical case: the first course angle of the current target is added to the first course angle variation to obtain the first predicted course angle at the next moment; that predicted course angle is in turn added to the first course angle variation to obtain the first predicted course angle at the moment after that, and so on, so that the first predicted course angles of the current target at multiple subsequent moments are obtained. Similarly, the second predicted course angles of the other targets at subsequent moments can be obtained by the same method.
Then, the depth information of the current target or other targets at the subsequent time can be predicted according to the depth information of each target at the current time and the predicted course angle corresponding to each subsequent time, so as to obtain new first predicted depth information of the current target at the subsequent time and new second predicted depth information of other targets at the subsequent time.
And A2, performing curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target.
And A3, performing curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
Specifically, the new first predicted depth information may be new three-dimensional coordinates or polar coordinates of the current target at subsequent times, and likewise the new second predicted depth information may be new three-dimensional coordinates or polar coordinates of the other targets at subsequent times. Curve fitting can therefore be performed on the coordinates of the current target at the current time and at the subsequent times to obtain the new running track of the current target, and on the coordinates of the other targets at the current time and at the subsequent times to obtain the new running tracks of the other targets.
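By way of illustration and not limitation, the following sketch shows one possible form of steps A1-A3, assuming planar motion, a fixed advance per time step, and a second-order polynomial for the curve fitting; the step length and all identifiers are assumptions made only for this example.

```python
import numpy as np

def predict_new_trajectory(curr_xy, course_angle, angle_change, step=1.0, n_steps=5):
    """Illustrative version of steps A1-A3: propagate the course angle by its
    per-moment variation (A1), advance the position by an assumed step length
    along each predicted course angle to obtain new predicted positions, then
    fit a polynomial through the current and predicted positions (A2/A3)."""
    points = [np.asarray(curr_xy, dtype=float)]
    angle = course_angle
    for _ in range(n_steps):
        angle += angle_change                                  # predicted course angle at the next moment
        points.append(points[-1] + step * np.array([np.cos(angle), np.sin(angle)]))
    pts = np.vstack(points)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)           # curve fitting over current + predicted points
    return pts, np.poly1d(coeffs)

# New running track of the current target from its first course angle and variation.
pts_cur, new_track_cur = predict_new_trajectory((0.0, 0.0), course_angle=0.1, angle_change=0.05)
print(new_track_cur)
```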
S608, determining the probability of collision between the current target and the other targets according to the new running track of the current target and the new running tracks of the other targets.
In this step, when determining the probability of collision between the two targets from the new running tracks, it may optionally be judged whether the new running track of the current target and the new running tracks of the other targets have an intersection point; if they do, the probability of collision between the current target and the other targets is determined to be a third-level probability, which is lower than the second-level probability. In other words, if the new tracks intersect, the current target and the other targets may still collide at a subsequent moment as their course angles continue to change, but the collision risk is rated lower: this level is referred to as the third-level probability, which is lower than the second-level probability and, of course, lower than the first-level probability.
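By way of illustration and not limitation, the sketch below checks for an intersection point between two fitted new running tracks by dense sampling and, if one is found, reports the third-level probability; the sampling range, tolerance, and textual labels are assumptions of this example.

```python
import numpy as np

def tracks_intersect(track_a, track_b, x_range=(0.0, 10.0), tol=0.2, samples=200):
    """Sampling-based check for an intersection point between two fitted tracks:
    if the curves come within `tol` of each other at any sampled abscissa,
    they are treated as intersecting."""
    xs = np.linspace(*x_range, samples)
    return bool(np.any(np.abs(track_a(xs) - track_b(xs)) < tol))

def new_track_collision_level(track_a, track_b):
    """Third-level probability when the new running tracks have an intersection point."""
    return "third-level probability" if tracks_intersect(track_a, track_b) else "negligible"

new_track_cur = np.poly1d([0.02, 0.10, 0.0])      # new running track of the current target
new_track_other = np.poly1d([-0.02, 0.30, -0.2])  # new running track of another target
print(new_track_collision_level(new_track_cur, new_track_other))  # -> third-level probability
```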
According to the collision detection method provided by this embodiment, when the actual size positions of the current target and the other targets do not overlap, new running tracks of the current target and the other targets can be obtained based on the changes of their course angles, and the probability of collision between the two targets is determined from these new running tracks. In this way, when the original tracks of the two targets have no intersection point and their actual size positions do not overlap, a further judgment based on the course-angle trend is still made, and the probability of collision is judged from the new running tracks.
It should be understood that although the steps in the flow charts of fig. 2-6 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a collision detecting apparatus including: a first obtaining module 10, a depth information determining module 11, a second obtaining module 12, a predicting module 13 and a collision determining module 14, wherein:
a first obtaining module 10, configured to obtain a target pixel position of each target in the multiple targets on the two-dimensional image at the current time;
the depth information determining module 11 is configured to obtain depth information corresponding to each target pixel position at the current time by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
a second obtaining module 12, configured to obtain state information of each target at the current time; the state information is used for representing the moving state of the corresponding target at the current moment;
the prediction module 13 is configured to predict the subsequent state of each target based on the state information of each target at the current time and the corresponding depth information, so as to obtain predicted depth information of each target at the subsequent time;
and the collision determining module 14 is configured to determine, according to the predicted depth information of each target, a probability that each target collides with another target in the multiple targets.
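By way of illustration and not limitation, one possible way to organize these five modules is sketched below; every callable passed to the constructor is a placeholder, and the decomposition shown is an assumption of this example rather than a required implementation.

```python
class CollisionDetectionApparatus:
    """Illustrative wiring of modules 10-14; all callables here are placeholders."""

    def __init__(self, detect_pixels, mapping_model, get_states, predict_depth, estimate_risk):
        self.detect_pixels = detect_pixels    # first obtaining module 10
        self.mapping_model = mapping_model    # depth information determining module 11
        self.get_states = get_states          # second obtaining module 12
        self.predict_depth = predict_depth    # prediction module 13
        self.estimate_risk = estimate_risk    # collision determining module 14

    def process(self, image):
        pixel_positions = self.detect_pixels(image)                        # target pixel positions
        depth_info = {t: self.mapping_model(p) for t, p in pixel_positions.items()}
        states = self.get_states(pixel_positions)                          # moving state per target
        predicted = self.predict_depth(states, depth_info)                 # predicted depth information
        return self.estimate_risk(depth_info, predicted)                   # collision probabilities
```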
For the specific definition of the collision detection apparatus, reference may be made to the above definition of the collision detection method, which is not described in detail here.
In another embodiment, another collision detection apparatus is provided, and on the basis of the above embodiment, the above first acquisition module 10 may include an object detection unit and a pixel position determination unit, wherein:
the target detection unit is used for carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image;
and the pixel position determining unit is used for determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
Optionally, the pixel position determining unit is specifically configured to determine a bottom center point position of each target frame as a target pixel position of each target; alternatively, the position of each target frame is determined as the target pixel position of each target.
In another embodiment, another collision detection apparatus is provided, and on the basis of the above embodiment, the collision determination module 14 may include a curve fitting unit and a collision determination unit, wherein:
the curve fitting unit is used for performing curve fitting processing on the depth information and the predicted depth information corresponding to each target and determining the running track of each target;
and the collision determining unit is used for determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
Optionally, the multiple targets include a current target and other targets; the above-mentioned collision determination unit may include a first judgment subunit and a collision determination subunit, wherein:
the first judgment subunit is used for judging whether the running track of the current target and the running tracks of other targets have intersection points or not to obtain a first judgment result;
and the collision determining subunit is used for determining the probability of collision between the current target and other targets according to the first judgment result.
Optionally, the collision determining subunit is specifically configured to determine, when the first determination result is that the running trajectory of the current target and the running trajectories of other targets have an intersection, that the probability of the collision between the current target and other targets is a first rank probability.
Optionally, the collision determining subunit is further configured to, when the first determination result is that the running trajectory of the current target and the running trajectories of other targets do not have an intersection, obtain an actual size position of the current target and actual size positions of the other targets; and determining the probability of collision between the current target and other targets according to the actual size position of the current target and the actual size positions of other targets.
Optionally, the collision determining subunit is further configured to obtain a position of a bottom edge of each target frame from the position of each target frame, and process the position by using a mapping model to obtain depth information corresponding to the position of the bottom edge of each target frame; constructing the length and width of the current target and the length and width of other targets according to the depth information corresponding to the positions of the bottom edges of the target frames; acquiring the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual heights of other targets according to the height of each target frame; and obtaining the actual size position of the current target according to the length, the width and the actual height of the current target, and obtaining the actual size positions of other targets according to the length, the width and the actual height of other targets.
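By way of illustration and not limitation, the sketch below maps the bottom edge of a target frame to world points through a mapping model, takes the footprint extents as length and width, and converts the frame height to metres with an assumed pixels-to-metres factor; the mapping_model callable and the px_to_m factor are assumptions of this example.

```python
import numpy as np

def actual_size_position(bottom_edge_pixels, frame_height_px, mapping_model, px_to_m=0.05):
    """Build an approximate actual size position (anchor point, length, width, height)
    of a target from its 2D target frame; the pixels-to-metres factor used for the
    height is an assumption made only for this sketch."""
    world_pts = np.array([mapping_model(px) for px in bottom_edge_pixels])   # (N, 3) world coordinates
    length = float(world_pts[:, 0].max() - world_pts[:, 0].min())
    width = float(world_pts[:, 1].max() - world_pts[:, 1].min())
    height = frame_height_px * px_to_m
    anchor = world_pts.mean(axis=0)                                          # reference point of the size box
    return {"anchor": anchor, "length": length, "width": width, "height": height}

# Example with a toy mapping model that simply scales pixels into metres.
toy_mapping = lambda px: np.array([px[0] * 0.05, px[1] * 0.05, 0.0])
box = actual_size_position([(100, 200), (140, 200), (180, 205)], frame_height_px=60, mapping_model=toy_mapping)
print(box["length"], box["width"], box["height"])
```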
Optionally, the collision determining subunit is further configured to determine whether the actual size position of the current target overlaps with the actual size positions of the other targets at the current time and at subsequent times, so as to obtain a second determination result; and determining the probability of collision between the current target and other targets according to the second judgment result.
Optionally, the collision determining subunit is further configured to determine, when the second determination result is that the actual size position of the current target and the actual size positions of the other targets overlap at any time, that the probability of the collision between the current target and the other targets is a second-level probability; the second level probability is lower than the first level probability.
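By way of illustration and not limitation, a simple axis-aligned overlap test between two actual-size boxes over a sequence of moments is sketched below; ignoring box orientation is a simplification of this example, not a statement of the method.

```python
def boxes_overlap(box_a, box_b):
    """Axis-aligned overlap test between two actual-size boxes,
    each given as (center_xyz, length, width, height)."""
    (ca, la, wa, ha), (cb, lb, wb, hb) = box_a, box_b
    half_a = (la / 2, wa / 2, ha / 2)
    half_b = (lb / 2, wb / 2, hb / 2)
    return all(abs(ca[i] - cb[i]) <= half_a[i] + half_b[i] for i in range(3))

def second_judgment(boxes_a_over_time, boxes_b_over_time):
    """Second-level probability if the actual size positions overlap at the current
    moment or any subsequent moment; otherwise continue with the course-angle check."""
    if any(boxes_overlap(a, b) for a, b in zip(boxes_a_over_time, boxes_b_over_time)):
        return "second-level probability"
    return "no overlap - continue with course-angle based judgment"

box_now_a = ((0.0, 0.0, 0.0), 4.0, 2.0, 1.5)
box_now_b = ((3.5, 0.5, 0.0), 4.0, 2.0, 1.5)
print(second_judgment([box_now_a], [box_now_b]))  # -> second-level probability (boxes overlap)
```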
Optionally, the collision determining subunit is further configured to, when the second determination result is that there is no overlap between the actual size position of the current target and the actual size positions of other targets at the current time and at subsequent times, obtain a first course angle of the current target at the current time and a second course angle of the other targets at the current time; acquiring a first course angle variation of a current target at a subsequent moment and a second course angle variation of other targets at the subsequent moment; predicting a new operation track of the current target and new operation tracks of other targets according to the first course angle, the first course angle variation, the second course angle and the second course angle variation; and determining the probability of collision between the current target and other targets according to the new running track of the current target and the new running tracks of other targets.
Optionally, the collision determining subunit is further configured to determine, according to the first course angle, the first course angle variation, the second course angle, and the second course angle variation, new first predicted depth information of the current target at the subsequent time and new second predicted depth information of other targets at the subsequent time; perform curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target; and perform curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
Optionally, the collision determining subunit is further configured to perform mathematical operation on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at a subsequent time; performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of other targets at the subsequent time; determining new first predicted depth information of the current target at the subsequent moment based on the first predicted course angle and the first depth information of the current target; and determining new second predicted depth information of other targets at the subsequent moment based on the second predicted course angles and the second depth information of other targets.
Optionally, the collision determining subunit is further configured to determine whether an intersection exists between the new operation trajectory of the current target and the new operation trajectories of other targets; under the condition that the new running track of the current target and the new running tracks of other targets have intersection points, determining the probability of collision between the current target and other targets as a third-level probability; the third level probability is lower than the second level probability.
For the specific definition of the collision detection apparatus, reference may be made to the above definition of the collision detection method, which is not described in detail here.
The respective modules in the above-described collision detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in or independently of a processor of the vision sensor, or stored, in software form, in a memory of the vision sensor, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, there is provided a vision sensor comprising a camera, a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program implementing the steps of:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
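By way of illustration and not limitation, the mapping step above can be pictured as a lookup from a pixel position into a table of radar point-cloud points projected onto the image; the table-plus-nearest-neighbour form below is an assumption made only for demonstration.

```python
import numpy as np

class PixelDepthMapping:
    """Toy stand-in for a preset mapping model: a table of radar point-cloud
    points projected onto the image, queried by nearest pixel position."""

    def __init__(self, pixel_table, depth_table):
        self.pixel_table = np.asarray(pixel_table, dtype=float)   # (N, 2) pixel positions
        self.depth_table = np.asarray(depth_table, dtype=float)   # (N, 3) point-cloud coordinates

    def query(self, pixel_pos):
        dist = np.linalg.norm(self.pixel_table - np.asarray(pixel_pos, dtype=float), axis=1)
        return self.depth_table[int(np.argmin(dist))]              # depth information for this pixel

mapping = PixelDepthMapping([[100, 200], [150, 220]], [[5.0, 1.0, 0.0], [6.5, 1.4, 0.0]])
print(mapping.query((110, 205)))   # depth information corresponding to a target pixel position
```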
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image; and determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the bottom edge center point position of each target frame as the target pixel position of each target; alternatively, the position of each target frame is determined as the target pixel position of each target.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing curve fitting processing on the depth information corresponding to each target and the predicted depth information to determine the running track of each target; and determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
judging whether the running track of the current target and the running tracks of other targets have intersection points or not to obtain a first judgment result; and determining the probability of collision between the current target and other targets according to the first judgment result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and if the first judgment result is that the running track of the current target and the running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a first-level probability.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the first judgment result is that the running track of the current target and the running tracks of other targets do not have intersection points, acquiring the actual size position of the current target and the actual size positions of other targets; and determining the probability of collision between the current target and other targets according to the actual size position of the current target and the actual size positions of other targets.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining the position of the bottom edge of each target frame from the position of each target frame, and processing by using a mapping model to obtain depth information corresponding to the position of the bottom edge of each target frame; constructing the length and width of the current target and the length and width of other targets according to the depth information corresponding to the positions of the bottom edges of the target frames; acquiring the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual heights of other targets according to the height of each target frame; and obtaining the actual size position of the current target according to the length, the width and the actual height of the current target, and obtaining the actual size positions of other targets according to the length, the width and the actual height of other targets.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
judging whether the actual size position of the current target and the actual size positions of other targets are overlapped at the current moment and the subsequent moment to obtain a second judgment result; and determining the probability of collision between the current target and other targets according to the second judgment result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the second judgment result is that the actual size position of the current target and the actual size positions of other targets are overlapped at any moment, determining the probability of collision between the current target and the other targets as a second-level probability; the second level probability is lower than the first level probability.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the second judgment result shows that the actual size position of the current target and the actual size positions of other targets do not overlap at the current moment and at subsequent moments, acquiring a first course angle of the current target at the current moment and second course angles of the other targets at the current moment; acquiring a first course angle variation of a current target at a subsequent moment and a second course angle variation of other targets at the subsequent moment; predicting a new operation track of the current target and new operation tracks of other targets according to the first course angle, the first course angle variation, the second course angle and the second course angle variation; and determining the probability of collision between the current target and other targets according to the new running track of the current target and the new running tracks of other targets.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining new first predicted depth information of the current target at the subsequent moment and new second predicted depth information of other targets at the subsequent moment according to the first course angle, the first course angle variation, the second course angle and the second course angle variation; performing curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target; and performing curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing mathematical operation processing on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at the subsequent time; performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of other targets at the subsequent time; determining new first predicted depth information of the current target at the subsequent moment based on the first predicted course angle and the first depth information of the current target; and determining new second predicted depth information of other targets at the subsequent moment based on the second predicted course angles and the second depth information of other targets.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
judging whether the new running track of the current target and the new running tracks of other targets have intersection points or not; if the new running track of the current target and the new running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a third-level probability; the third level probability is lower than the second level probability.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
In one embodiment, the computer program when executed by the processor further performs the steps of:
carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image; and determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the bottom edge center point position of each target frame as the target pixel position of each target; alternatively, the position of each target frame is determined as the target pixel position of each target.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing curve fitting processing on the depth information corresponding to each target and the predicted depth information to determine the running track of each target; and determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether the running track of the current target and the running tracks of other targets have intersection points or not to obtain a first judgment result; and determining the probability of collision between the current target and other targets according to the first judgment result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and if the first judgment result is that the running track of the current target and the running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a first-level probability.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the first judgment result is that the running track of the current target and the running tracks of other targets do not have intersection points, acquiring the actual size position of the current target and the actual size positions of other targets; and determining the probability of collision between the current target and other targets according to the actual size position of the current target and the actual size positions of other targets.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining the position of the bottom edge of each target frame from the position of each target frame, and processing by using a mapping model to obtain depth information corresponding to the position of the bottom edge of each target frame; constructing the length and width of the current target and the length and width of other targets according to the depth information corresponding to the positions of the bottom edges of the target frames; acquiring the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual heights of other targets according to the height of each target frame; and obtaining the actual size position of the current target according to the length, the width and the actual height of the current target, and obtaining the actual size positions of other targets according to the length, the width and the actual height of other targets.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether the actual size position of the current target and the actual size positions of other targets are overlapped at the current moment and the subsequent moment to obtain a second judgment result; and determining the probability of collision between the current target and other targets according to the second judgment result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the second judgment result is that the actual size position of the current target and the actual size positions of other targets are overlapped at any moment, determining the probability of collision between the current target and the other targets as a second-level probability; the second level probability is lower than the first level probability.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the second judgment result shows that the actual size position of the current target and the actual size positions of other targets do not overlap at the current moment and at subsequent moments, acquiring a first course angle of the current target at the current moment and second course angles of the other targets at the current moment; acquiring a first course angle variation of a current target at a subsequent moment and a second course angle variation of other targets at the subsequent moment; predicting a new operation track of the current target and new operation tracks of other targets according to the first course angle, the first course angle variation, the second course angle and the second course angle variation; and determining the probability of collision between the current target and other targets according to the new running track of the current target and the new running tracks of other targets.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining new first predicted depth information of the current target at the subsequent moment and new second predicted depth information of other targets at the subsequent moment according to the first course angle, the first course angle variation, the second course angle and the second course angle variation; performing curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target; and performing curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing mathematical operation processing on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at the subsequent time; performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of other targets at the subsequent time; determining new first predicted depth information of the current target at the subsequent moment based on the first predicted course angle and the first depth information of the current target; and determining new second predicted depth information of other targets at the subsequent moment based on the second predicted course angles and the second depth information of other targets.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether the new running track of the current target and the new running tracks of other targets have intersection points or not; if the new running track of the current target and the new running tracks of other targets have intersection points, determining that the probability of collision between the current target and other targets is a third-level probability; the third-level probability is lower than the second-level probability.

It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (17)

1. A collision detection method, applied to a vision sensor, the method comprising:
acquiring a target pixel position of each target in a plurality of targets on a two-dimensional image at the current moment;
acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
2. The method of claim 1, wherein obtaining a target pixel location on a two-dimensional image of each of a plurality of targets at a current time comprises:
carrying out target detection on the two-dimensional image to obtain the position of a target frame corresponding to each target on the two-dimensional image;
and determining the target pixel position of each target on the two-dimensional image according to the position of the target frame of each target.
3. The method of claim 2, wherein determining the target pixel location of each of the targets on the two-dimensional image based on the location of the target frame of each of the targets comprises:
determining the bottom edge central point position of each target frame as the target pixel position of each target; or, the position of each target frame is determined as the target pixel position of each target.
4. The method of claim 3, wherein determining the probability of each of the targets colliding with other targets of the plurality of targets based on the predicted depth information for each of the targets comprises:
performing curve fitting processing on the depth information and the predicted depth information corresponding to each target to determine the running track of each target;
and determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target.
5. The method of claim 4, wherein the plurality of targets includes a current target and other targets; determining the probability of collision between each target and other targets in the plurality of targets according to the running track of each target, including:
judging whether the running track of the current target and the running tracks of the other targets have intersection points or not to obtain a first judgment result;
and determining the probability of collision between the current target and the other targets according to the first judgment result.
6. The method according to claim 5, wherein the determining the probability of the collision between the current target and the other targets according to the first determination result comprises:
and if the first judgment result shows that the operation track of the current target and the operation tracks of the other targets have intersection points, determining that the probability of collision between the current target and the other targets is a first-level probability.
7. The method of claim 6, further comprising:
if the first judgment result is that the running track of the current target and the running tracks of the other targets do not have intersection points, acquiring the actual size position of the current target and the actual size positions of the other targets;
and determining the probability of collision between the current target and the other targets according to the actual size position of the current target and the actual size positions of the other targets.
8. The method of claim 7, wherein the obtaining the actual size position of the current target and the actual size positions of the other targets comprises:
obtaining the position of the bottom edge of each target frame from the position of each target frame, and processing by using the mapping model to obtain depth information corresponding to the position of the bottom edge of each target frame;
constructing the length and width of the current target and the length and width of the other targets according to the depth information corresponding to the positions of the bottom edges of the target frames;
obtaining the height of each target frame from the position of each target frame, and constructing the actual height of the current target and the actual height of the other targets according to the height of each target frame;
and obtaining the actual size position of the current target according to the length, the width and the actual height of the current target, and obtaining the actual size positions of the other targets according to the length, the width and the actual height of the other targets.
9. The method of claim 8, wherein determining the probability of the collision between the current target and the other target based on the actual size position of the current target and the actual size positions of the other targets comprises:
judging whether the actual size position of the current target and the actual size positions of the other targets are overlapped at the current moment and the subsequent moment to obtain a second judgment result;
and determining the probability of collision between the current target and the other targets according to the second judgment result.
10. The method according to claim 9, wherein the determining the probability of the collision between the current target and the other target according to the second determination result comprises:
if the second judgment result is that the actual size position of the current target and the actual size positions of the other targets are overlapped at any moment, determining that the probability of collision between the current target and the other targets is a second-level probability; the second level probability is lower than the first level probability.
11. The method of claim 10, further comprising:
if the second judgment result is that the actual size position of the current target and the actual size positions of the other targets do not overlap at the current moment and at the subsequent moment, acquiring a first course angle of the current target at the current moment and a second course angle of the other targets at the current moment;
acquiring a first course angle variation of the current target at the subsequent time and a second course angle variation of the other targets at the subsequent time;
predicting the new operation track of the current target and the new operation tracks of the other targets according to the first course angle, the first course angle variation, the second course angle and the second course angle variation;
and determining the probability of collision between the current target and the other targets according to the new running track of the current target and the new running tracks of the other targets.
12. The method of claim 11, wherein predicting the new trajectory of the current target and the new trajectories of the other targets based on the first course angle, the first course angle variation, the second course angle, and the second course angle variation comprises:
determining new first predicted depth information of the current target at the subsequent time and new second predicted depth information of other targets at the subsequent time according to the first course angle, the first course angle variation, the second course angle and the second course angle variation;
performing curve fitting processing on the first depth information and the new first predicted depth information to determine a new running track of the current target;
and performing curve fitting processing on the second depth information and the new second predicted depth information to determine new running tracks of other targets.
13. The method of claim 12, wherein determining the new first predicted depth information of the current target at the subsequent time and the new second predicted depth information of the other targets at the subsequent time according to the first course angle, the first course angle variation, the second course angle, and the second course angle variation comprises:
performing mathematical operation processing on the first course angle and the first course angle variation to obtain a first predicted course angle of the current target at a subsequent moment;
performing mathematical operation processing on the second course angle and the second course angle variation to obtain a second predicted course angle of the other targets at the subsequent time;
determining new first predicted depth information of the current target at a subsequent moment based on the first predicted course angle of the current target and the first depth information;
and determining new second predicted depth information of the other targets at the subsequent time based on the second predicted course angles of the other targets and the second depth information.
14. The method of claim 13, wherein determining the probability of the collision between the current target and the other targets according to the new operation trajectory of the current target and the new operation trajectories of the other targets comprises:
judging whether the new running track of the current target and the new running tracks of the other targets have intersection points or not;
if the new operation track of the current target and the new operation tracks of the other targets have intersection points, determining that the probability of collision between the current target and the other targets is a third-level probability; the third level probability is lower than the second level probability.
15. A collision detection device, applied to a visual sensor, the device comprising:
the first acquisition module is used for acquiring the target pixel position of each target in the plurality of targets on the two-dimensional image at the current moment;
the depth determining module is used for acquiring depth information corresponding to each target pixel position at the current moment by using a preset mapping model; the mapping model comprises a mapping relation between pixel positions of the two-dimensional image and depth information of a point cloud of the radar sensor;
the second acquisition module is used for acquiring the state information of each target at the current moment; the state information is used for representing the moving state of the corresponding target at the current moment;
the prediction module is used for predicting the subsequent state of each target based on the state information of each target at the current moment and the corresponding depth information to obtain the predicted depth information of each target at the subsequent moment;
and the collision determining module is used for determining the probability of collision between each target and other targets in the plurality of targets according to the predicted depth information of each target.
16. A vision sensor comprising a camera, a memory and a processor, the memory storing a computer program which when executed by the processor implements the steps of the method of any one of claims 1 to 14.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 14.
CN202010838469.8A 2020-08-19 2020-08-19 Collision detection method, device, visual sensor and storage medium Active CN114078326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010838469.8A CN114078326B (en) 2020-08-19 2020-08-19 Collision detection method, device, visual sensor and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010838469.8A CN114078326B (en) 2020-08-19 2020-08-19 Collision detection method, device, visual sensor and storage medium

Publications (2)

Publication Number Publication Date
CN114078326A true CN114078326A (en) 2022-02-22
CN114078326B CN114078326B (en) 2023-04-07

Family

ID=80281650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010838469.8A Active CN114078326B (en) 2020-08-19 2020-08-19 Collision detection method, device, visual sensor and storage medium

Country Status (1)

Country Link
CN (1) CN114078326B (en)

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1475764A2 (en) * 2003-05-02 2004-11-10 IBEO Automobile Sensor GmbH Method and apparatus for calculating the probability of a collision between a vehicle and an object
JP2009169813A (en) * 2008-01-18 2009-07-30 Honda Motor Co Ltd Vehicular collision avoidance support system
CN101604448A (en) * 2009-03-16 2009-12-16 北京中星微电子有限公司 A kind of speed-measuring method of moving target and system
CN102141398A (en) * 2010-12-28 2011-08-03 北京航空航天大学 Monocular vision-based method for measuring positions and postures of multiple robots
US20130261947A1 (en) * 2012-04-03 2013-10-03 Denso Corporation Driving assistance device
JP2014021709A (en) * 2012-07-18 2014-02-03 Honda Motor Co Ltd Object position detecting device
US20150286219A1 (en) * 2012-10-29 2015-10-08 Audi Ag Method for coordinating the operation of motor vehicles that drive in fully automated mode
US20140303882A1 (en) * 2013-04-05 2014-10-09 Electronics And Telecommunications Research Institute Apparatus and method for providing intersection collision-related information
WO2015063422A2 (en) * 2013-11-04 2015-05-07 Renault S.A.S. Device for detecting the lateral position of a pedestrian relative to the trajectory of the vehicle
EP2881829A2 (en) * 2013-12-05 2015-06-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for automatically controlling a vehicle, device for generating control signals for a vehicle and vehicle
CN204667566U (en) * 2015-04-30 2015-09-23 湖南华诺星空电子技术有限公司 Radar video merges intelligent warning system
CN105512641A (en) * 2015-12-31 2016-04-20 哈尔滨工业大学 Method for using laser radar scanning method to calibrate dynamic pedestrians and vehicles in video in snowing or raining state
WO2018032642A1 (en) * 2016-08-19 2018-02-22 深圳市元征科技股份有限公司 Driving vehicle collision warning method and device
CN106251699A (en) * 2016-08-19 2016-12-21 深圳市元征科技股份有限公司 Vehicle running collision method for early warning and device
DE102016012376A1 (en) * 2016-10-15 2017-06-01 Daimler Ag Method for operating a vehicle and driver assistance device
CN108062600A (en) * 2017-12-18 2018-05-22 北京星云互联科技有限公司 A kind of vehicle collision prewarning method and device based on rectangle modeling
WO2019175130A1 (en) * 2018-03-14 2019-09-19 Renault S.A.S Robust method for detecting obstacles, in particular for autonomous vehicles
CN108597251A (en) * 2018-04-02 2018-09-28 昆明理工大学 A kind of traffic intersection distribution vehicle collision prewarning method based on car networking
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 A kind of Three-dimensional target recognition system and method based on laser radar and monocular vision
CN108986161A (en) * 2018-06-19 2018-12-11 亮风台(上海)信息科技有限公司 A kind of three dimensional space coordinate estimation method, device, terminal and storage medium
CN109190508A (en) * 2018-08-13 2019-01-11 南京财经大学 A kind of multi-cam data fusion method based on space coordinates
WO2020067751A1 (en) * 2018-09-28 2020-04-02 재단법인대구경북과학기술원 Device and method for data fusion between heterogeneous sensors
CN109263637A (en) * 2018-10-12 2019-01-25 北京双髻鲨科技有限公司 A kind of method and device of prediction of collision
US10598788B1 (en) * 2018-10-25 2020-03-24 Aeye, Inc. Adaptive control of Ladar shot selection using spatial index of prior Ladar return data
CN109509143A (en) * 2018-10-31 2019-03-22 Taiyuan University of Technology Method for converting a three-dimensional point cloud into a two-dimensional image
CN109523830A (en) * 2018-11-08 2019-03-26 CCCC First Highway Consultants Co., Ltd. Vehicle trajectory prediction and collision warning method based on high-frequency high-precision positioning information
CN109747638A (en) * 2018-12-25 2019-05-14 Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. Vehicle driving intention recognition method and device
CN209640478U (en) * 2019-01-23 2019-11-15 Yan Haitao Weather radar troubleshooting system
CN111091591A (en) * 2019-12-23 2020-05-01 Baidu International Technology (Shenzhen) Co., Ltd. Collision detection method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Hong et al., "Multi-constraint automatic parallel parking trajectory planning based on Matlab", Journal of Central South University (Science and Technology) *
Wang Jincheng et al., "Simulating radar images using three-dimensional scene rendering technology", Journal of Dalian Maritime University *
Jin Lisheng et al., "Nighttime front vehicle detection based on millimeter-wave radar and machine vision", Journal of Automotive Safety and Energy *

Also Published As

Publication number Publication date
CN114078326B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Chen et al. AI-empowered speed extraction via port-like videos for vehicular trajectory analysis
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
CN109829351B (en) Method and device for detecting lane information and computer readable storage medium
CN110286389B (en) Grid management method for obstacle identification
US11755917B2 (en) Generating depth from camera images and known depth data using neural networks
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN113674523A (en) Traffic accident analysis method, device and equipment
WO2021016920A1 (en) Method, system and device for identifying accessibility, and computer-readable storage medium
CN114022846A (en) Anti-collision monitoring method, device, equipment and medium for working vehicle
Yu et al. An evidential sensor model for velodyne scan grids
CN116859413A Perception model construction method for open-pit mine trucks
CN114170499A (en) Target detection method, tracking method, device, visual sensor and medium
Fakhfakh et al. Weighted v-disparity approach for obstacles localization in highway environments
CN114078326B (en) Collision detection method, device, visual sensor and storage medium
CN116563801A (en) Traffic accident detection method, device, electronic equipment and medium
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115373402A (en) Loader running control method, device and equipment and storage medium
CN115100632A (en) Expansion point cloud identification method and device, computer equipment and storage medium
KR20230036243A Real-time 3D object detection and tracking system using vision and LiDAR
CN115236672A (en) Obstacle information generation method, device, equipment and computer readable storage medium
Huang et al. Rear obstacle warning for reverse driving using stereo vision techniques
Seeger et al. 2-d evidential grid mapping with narrow vertical field of view sensors using multiple hypotheses and spatial neighborhoods
KR102531281B1 Method and system for generating passing-object information using a sensing unit
CN115431968B (en) Vehicle controller, vehicle and vehicle control method
CN114078331B (en) Overspeed detection method, overspeed detection device, visual sensor and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant