CN111516605A - Multi-sensor monitoring equipment and monitoring method - Google Patents

Info

Publication number
CN111516605A
Authority
China (CN)
Prior art keywords
target, image, vehicle, detection, control module
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010347251.2A
Other languages
Chinese (zh)
Other versions
CN111516605B (en)
Inventor
潘胜利
李鑫
朱国章
陈桢
周庆文
孙健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAIC Volkswagen Automotive Co Ltd
Original Assignee
SAIC Volkswagen Automotive Co Ltd
Application filed by SAIC Volkswagen Automotive Co Ltd filed Critical SAIC Volkswagen Automotive Co Ltd
Priority to CN202010347251.2A
Publication of CN111516605A
Application granted
Publication of CN111516605B
Legal status: Active
Anticipated expiration

Classifications

    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R16/033 Arrangement of electric elements in vehicles for supply of electrical power to vehicle subsystems, characterised by the use of electrical cells or batteries
    • G01S15/86 Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G06V20/56 Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06V40/172 Human faces: classification, e.g. identification
    • G07C5/0841 Registering performance data
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • B60R2011/004 Arrangements for holding or mounting articles, characterised by position outside the vehicle
    • G06V20/625 License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a multi-sensor monitoring device comprising a target detection device, an image acquisition device, a target recognition device, a processing device and a power supply device. The invention also discloses a multi-sensor monitoring method comprising the following steps: detecting a target approaching the vehicle, judging whether the target is an effective target, ignoring ineffective targets and proceeding to the next step for effective targets; acquiring an environment image and extracting image information of the effective target from it; acquiring the physical features and movement track of the effective target and identifying the effective target according to them; performing data fusion to generate the recognition result, image data, physical features and movement track of the effective target; and storing both the raw data of the effective target and the fused recognition result, image data, physical features and movement track.

Description

Multi-sensor monitoring equipment and monitoring method
Technical Field
The invention relates to the technical field of automobile safety, and in particular to monitoring the environment around a vehicle.
Background
With the advance of intelligent driving and artificial intelligence technologies, demand for vehicle-mounted monitoring systems keeps growing. In particular, without a complete solution for collisions, scratches and similar incidents that occur after a vehicle is parked, owners may suffer economic loss and unnecessary trouble.
Existing vehicle-mounted monitoring systems and methods fall mainly into three categories:
1) Driving recorders (dash cameras). Most fix a camera sensor to the front windshield to record the environment ahead of the vehicle. Characteristics and limitations: the recorder mainly documents the driving state, focusing on the road ahead (or behind) while the vehicle is moving. Although it records some vehicle state information, its field of view is limited (mainly forward) and it cannot monitor the area around the vehicle body. More importantly, the recorder is usually switched off once the vehicle is parked and the engine shut down, so its detection function is unavailable and the surroundings are not monitored while the vehicle is parked.
2) Monitoring devices with a perception sensor, connected to the vehicle battery. With the vehicle parked and the engine off, the device is powered by the battery; the perception sensor watches for possible intrusion, and when it detects a potential danger it triggers a camera to record image information of the vehicle's current environment. Characteristics and limitations: powered by the vehicle battery, the device keeps working after the engine is shut off and monitors the surroundings through its sensor, activating the camera when the vehicle may be damaged. For example, CN109591716A discloses a vehicle-mounted video monitoring and processing method and system in which, after a vibration sensor detects possible damage to the vehicle, a pan-tilt monitoring camera records the vehicle's condition. This approach fails in dark environments: without a supplementary light, the camera performs poorly at night or in dim locations, and even when images of the surroundings are captured, poor lighting and the resulting lack of clarity often keep them from serving their intended purpose.
3) To address the shortcomings of the second category, a third category improves the supplementary lighting: the vehicle carries sensing monitoring equipment with a light-sensitive device, and once possible damage is detected, an ambient-light sensor measures the brightness to decide whether to switch on the supplementary light. For example, CN104309577A discloses a vehicle-mounted intelligent sensing anti-intrusion alarm system that uses an ambient-light sensor to assess the lighting of the monitored scene and, in poor light, compensates with an infrared fill light. Problems remain in practice. Although this third category senses and supplements light, solving the dim-light problem, its data handling is simple recording with no further processing. In use, two problems arise. First, false alarms are easy to trigger, for example when the vehicle is parked too close to fixed obstacles or a non-motorized vehicle is parked alongside. Second, when an intrusion does occur, the camera simply records whatever is in its field of view: if the field of view is small and the intruding target lies outside it, no relevant information is captured, and if the target leaves the field of view, imaging stops and the information is incomplete.
In addition, when an intrusion occurs there may be several targets in the camera's field of view; without distinguishing and identifying them, later processing becomes difficult.
Disclosure of Invention
The invention aims to provide a vehicle monitoring technology based on multiple sensors and data fusion processing.
According to an embodiment of the present invention, a multi-sensor monitoring apparatus is provided, comprising a target detection device, an image acquisition device, a target recognition device, a processing device and a power supply device. The target detection device is arranged around the vehicle body; it detects targets approaching the vehicle and emits a detection signal when one is detected. The image acquisition device acquires environment images around the vehicle and driving images ahead of it. The target recognition devices are arranged at the four corners of the vehicle body and record the physical features and movement track of a target. The processing device is connected to the target detection, image acquisition and target recognition devices. It receives the detection signal, judges from it whether the target is an effective target and, for an effective target, starts the image acquisition and target recognition devices: the image acquisition device captures images of the effective target while the target recognition device records its physical features and movement track. The processing device then processes the images, physical features and movement track, identifies the type of the effective target, and stores the images, physical features and movement track.
The power supply device is connected to the vehicle's power source and supplies the target detection, image acquisition, target recognition and processing devices. It monitors the charge of the vehicle's power source, supplying power while the charge is not below a safe level and cutting power to the four devices once the charge falls below it.
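The power-gating behaviour described above can be sketched as follows; the class name, interface and the 0.40 safe-charge threshold are illustrative assumptions, not taken from the patent.

```python
class PowerSupply:
    """Sketch of the power supply device: the sensors and processor stay
    powered only while the vehicle battery charge is at or above a safe
    level. The 0.40 default threshold is an assumption for illustration."""

    def __init__(self, safe_charge: float = 0.40):
        self.safe_charge = safe_charge
        self.powered = False

    def monitor(self, battery_charge: float) -> bool:
        # Supply power while charge >= safe level; cut it below that,
        # so monitoring never drains the battery past starting capability.
        self.powered = battery_charge >= self.safe_charge
        return self.powered
```

The gate is re-evaluated on every reading, so power is restored automatically if the charge recovers above the safe level.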
In one embodiment, the target detection device is an ultrasonic sensor of two kinds: long-range and short-range. The long-range ultrasonic sensors are arranged on the sides of the vehicle body, with a detection range of at least 8 m. The short-range ultrasonic sensors are arranged at the front and rear of the vehicle body, with detection precision no worse than 0.1 cm and distance resolution no worse than 1 cm.
In one embodiment, the image acquisition device is a camera of two kinds: sensing cameras and a driving camera. The sensing cameras, which are fisheye cameras, are arranged on the four sides of the vehicle body and collect environment images around the vehicle. The driving camera, an ordinary camera, points ahead of the vehicle and collects driving images.
In one embodiment, each sensing camera has a horizontal field of view of at least 190 degrees and a detection range of at least 10 m, and the four sensing cameras together collect a 360-degree surround view of the vehicle. The sensing cameras are equipped with an infrared fill-light device.
In one embodiment, the target recognition device is a laser radar (lidar). Four lidars are arranged at the four corners of the vehicle body at 45-degree angles to it; each has a horizontal field of view of at least 120 degrees and a detection range of at least 50 m, and records the physical features and movement track of the target.
In one embodiment, the processing device comprises a detection control module, an image control module, a recognition control module, a data fusion module and a data storage module. The detection control module is connected to the target detection device; it receives the detection signal, judges from it whether the target is an effective target and generates a trigger instruction for an effective target. The image control module is connected to the image acquisition device; it starts the image acquisition device on the trigger instruction and extracts image information of the effective target from the environment image the device acquires. The recognition control module is connected to the target recognition device; it starts the target recognition device on the trigger instruction, records the physical features and movement track the device obtains, and identifies the effective target from them. The data fusion module is connected to the detection, image and recognition control modules and fuses the data they generate into the recognition result, image data, physical features and movement track of the effective target.
The data storage module is connected to the detection control, image control, recognition control and data fusion modules; it stores the data generated by the first three modules together with the fused recognition result, image data, physical features and movement track of the effective target.
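As a sketch of the record the data fusion module might assemble, the following is one possible shape; all field names and the dictionary layout are assumptions, since the patent specifies only what the fused output contains, not its format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FusedTarget:
    """One fused record per effective target (field names are illustrative)."""
    target_id: int
    classification: str                 # recognition result, e.g. "pedestrian"
    images: List[str] = field(default_factory=list)        # extracted frames
    physical_features: dict = field(default_factory=dict)  # lidar measurements
    track: List[Tuple[float, float]] = field(default_factory=list)  # (x, y)

def fuse(detection: dict, image_data: list, recognition: dict) -> FusedTarget:
    """Merge the three control modules' outputs for one effective target."""
    return FusedTarget(
        target_id=detection["id"],
        classification=recognition["label"],
        images=list(image_data),
        physical_features=recognition.get("features", {}),
        track=recognition.get("track", []),
    )
```

Keeping the fused record as one object per target is what lets the data storage module save both the per-module raw data and the fused result side by side.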
In one embodiment, the detection control module processes the detection signal using the target distance and a dynamic/static detection algorithm: a dynamic target closer than the safe distance is judged an effective target, a trigger instruction is generated, and the target's direction, distance and moving speed are recorded continuously.
In one embodiment, extracting image information of the effective target from the environment image acquired by the image acquisition device comprises: extracting face images, extracting license-plate images, and tracking the effective target to extract continuous images of its movement track.
In one embodiment, the target recognition device acquires the precise physical features and precise movement track of the effective target, and the recognition control module identifies and classifies the effective target according to them.
According to an embodiment of the present invention, a multi-sensor monitoring method is provided, which includes the following steps:
detecting targets approaching the vehicle, judging whether each is an effective target, ignoring ineffective targets and proceeding to the next step for effective targets;
collecting an environment image and extracting image information of the effective target from it;
acquiring the physical features and movement track of the effective target and identifying it according to them;
performing data fusion to generate the recognition result, image data, physical features and movement track of the effective target;
storing data: the raw data of the effective target together with the fused recognition result, image data, physical features and movement track.
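The five steps above can be chained as in the following sketch. Every helper here is a stand-in, since the patent does not specify implementations, and the 0.6 m distance gate is borrowed from a later embodiment.

```python
def detect(target: dict) -> bool:
    # Step 1: a dynamic target within the safe distance is effective
    # (0.6 m is the safe distance cited in one embodiment).
    return target["distance_m"] <= 0.6 and target["moving"]

def extract_images(target: dict) -> list:
    # Step 2: stand-in for face / license-plate / track image extraction.
    return [f"frame_{i}" for i in range(3)]

def recognize(target: dict) -> dict:
    # Step 3: stand-in for lidar-based feature recording and classification.
    return {"label": "pedestrian", "track": target.get("track", [])}

def fuse(images: list, recognition: dict) -> dict:
    # Step 4: merge image data with the recognition result.
    return {"images": images, **recognition}

def monitor(target: dict, storage: list):
    # Steps 1-5 chained; ineffective targets are ignored entirely.
    if not detect(target):
        return None
    result = fuse(extract_images(target), recognize(target))
    storage.append(result)          # step 5: store the fused record
    return result
```

The early return after step 1 mirrors the method's design: the cameras and lidar stages run only once a target has passed the validity gate.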
In one embodiment, judging whether the target is an effective target comprises:
detecting the distance of the target: if the distance between the target and the vehicle is greater than the safe distance, the target is judged ineffective; if it is less than or equal to the safe distance, proceed to the next step;
applying a dynamic/static detection algorithm to the target: a static target is judged ineffective, a dynamic target effective.
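The two-step judgment can be written out as below; the function name and the position-sample representation are assumptions, and the 0.6 m safe distance comes from a later embodiment.

```python
SAFE_DISTANCE_M = 0.6  # safe distance used in one embodiment of the patent

def is_effective(distance_m: float, positions: list) -> bool:
    """Step 1: distance gate; step 2: dynamic/static check on sampled positions."""
    if distance_m > SAFE_DISTANCE_M:
        return False        # farther than the safe distance: ineffective
    # A target whose sampled position changes between readings is dynamic,
    # and a dynamic target within the safe distance is effective.
    return any(p != positions[0] for p in positions[1:])
```

Ordering the distance gate first is the cheap filter: the dynamic/static check needs repeated position samples and only runs for nearby targets.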
In one embodiment, extracting image information of the effective target from the environment image comprises: extracting face images, extracting license-plate images, and tracking the effective target to extract continuous images of its movement track.
In one embodiment, the precise physical features and precise movement track of the effective target are acquired, and the effective target is identified and classified according to them.
In one embodiment, the multi-sensor monitoring method further includes a power supply step: power is drawn from the vehicle's power source, whose charge is monitored; power is supplied while the charge is not below a safe level and cut once it falls below that level.
When the vehicle may be subject to intrusion, the multi-sensor monitoring equipment and method of the invention use several sensors of different types to record the vehicle's surroundings, and apply deep learning and multi-sensor signal fusion to identify and classify targets and predict their trajectories. This lets a user locate accident information quickly and efficiently, provides accident evidence, and protects the user's safety.
The multi-sensor monitoring equipment and method have the following positive effects:
1) Strong environmental adaptability. The monitoring scheme suits detection and identification under different illumination; in dim light it makes full use of the lidar to identify obstacle information and extract useful data. When illumination is insufficient, the lidar's own characteristics can still identify features such as a target's license plate to help judge the situation around the vehicle body.
2) Wide application range. The invention suits monitoring and identification in multiple scenarios: it can detect the surroundings while the vehicle is stationary and can also detect the state of surrounding vehicles while driving.
3) Efficient retrieval of positioning information. The invention quickly locates targets around the vehicle and, through multi-sensor information fusion, outputs obstacle classifications and surrounding information such as license-plate numbers and facial features, letting the user identify useful information quickly.
4) High security level. Where necessary, the records can serve as a basis for judgment by public-security authorities, further safeguarding the user's legal rights and interests.
Drawings
FIG. 1 discloses a layout diagram of a multi-sensor monitoring device according to an embodiment of the invention.
Fig. 2 discloses a block diagram of a processing device in a multi-sensor monitoring apparatus according to an embodiment of the invention.
FIG. 3 discloses a flow chart of a multi-sensor monitoring method according to an embodiment of the invention.
Fig. 4 discloses a process of detecting and identifying an object in a multi-sensor monitoring method according to an embodiment of the invention.
Fig. 5 discloses a process of fusing data in the multi-sensor monitoring method according to an embodiment of the invention.
Detailed Description
Fig. 1 discloses the layout of a multi-sensor monitoring apparatus according to an embodiment of the invention. The multi-sensor monitoring apparatus includes: the target detection device 101, the image acquisition device 102, the target recognition device 103, the processing device 104 and the power supply device 105.
The target detection device 101 is disposed around the vehicle body; it detects targets approaching the vehicle and emits a detection signal when one is detected. In one embodiment, the target detection device 101 is a set of ultrasonic sensors placed around the vehicle body so that their combined range covers 360 degrees around it. In the illustrated embodiment, 12 ultrasonic sensors surround the body: 4 each at the front and rear, and 2 on each side mounted below the front and rear fenders. In one embodiment, the ultrasonic sensors are of two kinds: long-range and short-range. The long-range sensors sit on the sides of the body with a detection range of at least 8 m; in one embodiment, there are 4 of them. The short-range sensors sit at the front and rear of the body, with detection precision no worse than 0.1 cm and distance resolution no worse than 1 cm; in one embodiment, there are 8 of them.
The image acquisition device 102 collects environment images around the vehicle and driving images ahead of it. In one embodiment, image acquisition device 102 is a set of cameras of two kinds: sensing cameras and a driving camera. The sensing cameras, which are fisheye cameras, are arranged on the four sides of the vehicle body and collect environment images around the vehicle. In one embodiment, 4 sensing cameras are mounted on the front bumper, the rear bumper and the two exterior rearview mirrors, oriented parallel to the ground and facing front, rear, left and right. Each sensing camera has a horizontal field of view of at least 190 degrees and a detection range of at least 10 m, and together the four collect a 360-degree surround view of the vehicle; to guarantee image acquisition with no blind angles, the fields of view of adjacent sensing cameras overlap. In one embodiment, to cope with poorly lit environments, each sensing camera carries an infrared fill-light device that supplements the light when ambient lighting is poor. The driving camera, an ordinary camera fixed behind the front windshield, points ahead of the vehicle and mainly collects driving images, functioning much like an existing driving recorder.
The target recognition devices 103 are arranged at the four corners of the vehicle body and record the physical features and movement track of a target. In one embodiment, the target recognition device 103 is a laser radar (lidar); four lidars are placed at the four corners of the body at 45-degree angles to it. In one embodiment, the 4 lidars are mounted on the front and rear bumpers, oriented parallel to the ground at a 45-degree angle to the body direction. Each lidar has a horizontal field of view of at least 120 degrees and a detection range of at least 50 m, and records the physical features and movement track of the target. The lidar obtains the target's precise physical features and movement track, and can identify and track targets in a wide range of environments, including poor light.
The processing device 104 is connected to the target detection device 101, the image acquisition device 102 and the target recognition device 103. It receives the detection signal from the target detection device 101 and judges from it whether the target is an effective target. For an effective target, the processing device 104 starts the image acquisition device 102 and the target recognition device 103: the image acquisition device 102 captures images of the effective target, and the target recognition device 103 records its physical features and movement track. The processing device 104 then processes the images, physical features and movement track, identifies the type of the effective target, and stores them. Fig. 2 discloses a block diagram of the processing device in a multi-sensor monitoring apparatus according to an embodiment of the invention. In the embodiment shown in fig. 2, the processing device 104 comprises: a detection control module 141, an image control module 142, a recognition control module 143, a data fusion module 144 and a data storage module 145.
The detection control module 141 is connected to the target detection device 101, receives the detection signal from the target detection device, determines from the detection signal whether the target is a valid target, and generates a trigger command for a valid target. In one embodiment, the detection control module 141 processes the detection signal using the target distance and a dynamic/static detection algorithm, treats a dynamic target closer than the safe distance as a valid target, generates a trigger command, and continuously records the target's direction, distance, and moving speed. In one embodiment, the detection control module 141 determines validity in two steps. First, it measures the distance between the target and the vehicle: if the distance is greater than the safe distance, the target is judged invalid; if it is less than or equal to the safe distance, the check proceeds to the second step. In one embodiment, the safe distance is set to 0.6 m, and for targets within 0.6 m of the vehicle it is further determined whether the target is dynamic. Second, a dynamic/static detection algorithm is applied to the target: a static target is judged invalid, and a dynamic target is judged valid. In one embodiment, whether the target is dynamic is determined by detecting its position at fixed intervals, such as 5 s, 3 s, 1 s, or 0.5 s. If consecutive detections show that the target's position has changed, the target is judged dynamic; if the position has not changed, it is judged static.
In one embodiment, a target within 0.6 m whose position changes over a 5 s detection interval is judged to be a dynamic target within the safe distance and is treated as a valid target.
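The two-step validity check can be sketched in Python as follows. This is a minimal illustration of the embodiment's logic; the 0.05 m position-noise threshold is an assumption introduced here, not a value specified in the patent.

```python
import math

SAFE_DISTANCE_M = 0.6    # safe-distance threshold from the embodiment
POSITION_EPS_M = 0.05    # assumed noise floor for "position changed" (not in the patent)

def is_valid_target(distance_m, positions):
    """Two-step validity check.

    Step 1: a target farther than the safe distance is invalid.
    Step 2: a target whose position changes between successive
    detections is dynamic, hence a valid target.
    `positions` is a sequence of (x, y) samples taken at the
    fixed detection interval (e.g. every 5 s).
    """
    if distance_m > SAFE_DISTANCE_M:
        return False  # long-range target: not valid
    # dynamic/static check: did the position change between samples?
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if math.hypot(x1 - x0, y1 - y0) > POSITION_EPS_M:
            return True  # dynamic target within the safe distance
    return False  # static target: not valid
```

For example, a target at 0.5 m that moves 0.2 m between detections is valid, while the same target holding still, or any target beyond 0.6 m, is not.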
The image control module 142 is connected to the image acquisition device 102 and starts it according to the trigger command generated by the detection control module. The image control module 142 extracts the image information of a valid target from the environment image captured by the image acquisition device. In one embodiment, this extraction includes: extracting a face image, extracting a license plate image, and tracking and extracting continuous images of the valid target's movement trajectory. The image control module 142 may learn feature images such as faces and license plates in advance using deep learning, extract such feature images from the environment image captured by the image acquisition device, and track the movement of the feature images to obtain continuous images of the movement trajectory. Because a feature image is usually an image of the valid target itself, tracking the feature image tracks the valid target's movement trajectory and yields continuous images of it.
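The trajectory-tracking stage described above can be sketched as follows. The detector itself (the deep-learning face or license-plate model the patent assumes is trained in advance) is out of scope here; the sketch only shows one simple way to chain its per-frame bounding boxes into a trajectory, using nearest-neighbour centroid association, which is an illustrative assumption rather than the patent's stated algorithm.

```python
def track_feature(frames_detections, max_jump=50.0):
    """Chain per-frame detections into a centroid trajectory.

    `frames_detections` is a list (one entry per frame) of bounding
    boxes (x, y, w, h) produced by an upstream feature detector.
    Each frame's detection closest to the last known position is
    appended, provided it lies within `max_jump` pixels.
    """
    def centroid(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    trajectory = []
    for boxes in frames_detections:
        if not boxes:
            continue  # feature not detected in this frame
        if not trajectory:
            trajectory.append(centroid(boxes[0]))  # initialise on first detection
            continue
        cx, cy = trajectory[-1]
        # pick the detection closest to the last known position
        best = min(boxes,
                   key=lambda b: (centroid(b)[0] - cx) ** 2 + (centroid(b)[1] - cy) ** 2)
        bx, by = centroid(best)
        if (bx - cx) ** 2 + (by - cy) ** 2 <= max_jump ** 2:
            trajectory.append((bx, by))
    return trajectory
```

A distractor box far from the tracked feature (the second box in frame two below) is correctly ignored in favour of the nearby one.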
The recognition control module 143 is connected to the target recognition device 103 and activates it according to the trigger command generated by the detection control module. The recognition control module 143 records the physical features and movement trajectory of the valid target obtained by the target recognition device, and recognizes the valid target from them. In one embodiment, the target recognition device is a lidar capable of acquiring the precise physical features and precise movement trajectory of the valid target. Based on these, the recognition control module 143 can recognize and classify the valid target with a classification and detection algorithm. In one embodiment, valid targets fall into a few main categories: pedestrians, non-motor vehicles, small motor vehicles, large motor vehicles, and others. Pedestrians, non-motor vehicles, small motor vehicles, and large motor vehicles have distinct, recognizable characteristics; objects that fit none of these categories, such as small animals and other conveyances, are placed in the "others" category.
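As a toy illustration of splitting targets into the five categories, a coarse rule-based classifier over lidar-derived physical dimensions might look like the sketch below. All thresholds are illustrative assumptions; the patent only names the categories, not the decision rules.

```python
def classify_target(length_m, height_m):
    """Coarse split into the five categories named in the embodiment,
    using illustrative (assumed) size thresholds on lidar measurements."""
    if height_m < 0.8:
        return "other"                # e.g. a small animal
    if length_m < 1.2:
        return "pedestrian"
    if length_m < 2.5:
        return "non-motor vehicle"    # bicycle, scooter
    if length_m < 6.0:
        return "small motor vehicle"
    if length_m < 15.0:
        return "large motor vehicle"
    return "other"                    # oversized or unclassifiable object
```

A real implementation would also use the movement trajectory (speed, path shape) rather than dimensions alone.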
The data fusion module 144 is connected to the detection control module 141, the image control module 142, and the recognition control module 143. It fuses the data generated by these three modules to produce the recognition result, image data, physical features, and movement trajectory of the valid target. Because the ultrasonic sensors, cameras, and lidars each have advantages in different environments, fusing their data in the data fusion module ensures the accuracy and efficiency of target recognition.
The data storage module 145 is connected to the detection control module 141, the image control module 142, the recognition control module 143, and the data fusion module 144. It stores two types of data: the raw data generated by the detection control module, the image control module, and the recognition control module; and the recognition result, image data, physical features, and movement trajectory of the valid target produced by the fusion processing of the data fusion module.
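The two record types can be kept in separate stores, for example as in this minimal sketch (field names and in-memory storage are illustrative assumptions; a real implementation would persist to flash or disk):

```python
import time

class MonitoringStore:
    """Keeps the two record types separately: raw per-module data
    and fused results, mirroring the data storage module."""

    def __init__(self):
        self.raw = []     # raw data from the detection/image/recognition modules
        self.fused = []   # fused result records

    def save_raw(self, module, payload):
        self.raw.append({"module": module, "t": time.time(), "data": payload})

    def save_fused(self, result, image_ref, features, trajectory):
        self.fused.append({
            "result": result,          # recognition result (class label)
            "image": image_ref,        # reference to stored image data
            "features": features,      # physical characteristics
            "trajectory": trajectory,  # movement track
        })
```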
The power supply device 105 is connected to a power source of the vehicle and supplies power to the target detection device 101, the image acquisition device 102, the target recognition device 103, and the processing device 104. The power supply device 105 monitors the charge of the vehicle's power source and supplies power while that charge is not below the safe level. When the charge falls below the safe level, it stops supplying power to the target detection, image acquisition, target recognition, and processing devices so that normal use of the vehicle is not affected. In one embodiment, the vehicle's power source is an on-board battery.
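The power-gating rule reduces to a single threshold comparison; a minimal sketch follows, where the 30 % state-of-charge threshold is an assumed value (the patent specifies only that a "safe" charge level exists):

```python
SAFE_CHARGE = 0.3  # assumed safe state-of-charge threshold (fraction; not in the patent)

def update_power(state_of_charge, devices):
    """Power the monitoring devices only while the vehicle battery's
    state of charge is at or above the safe level; cut power otherwise
    so normal use of the vehicle is not affected."""
    powered = state_of_charge >= SAFE_CHARGE
    return {name: powered for name in devices}
```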
The invention further provides a multi-sensor monitoring method. FIG. 3 discloses a flow chart of a multi-sensor monitoring method according to an embodiment of the invention. Referring to fig. 3, the multi-sensor monitoring method includes the steps of:
S301, detecting a target approaching the vehicle and judging whether it is a valid target; invalid targets are ignored, and valid targets proceed to the next step. In one embodiment, determining in step S301 whether the target is valid comprises the following process:
Detect the distance between the target and the vehicle: if the distance is greater than the safe distance, the target is judged invalid; if it is less than or equal to the safe distance, proceed to the next step. In one embodiment, the safe distance is set to 0.6 m, and for targets within 0.6 m of the vehicle it is further determined whether the target is dynamic.
Apply a dynamic/static detection algorithm to the target: a static target is judged invalid, and a dynamic target is judged valid. In one embodiment, whether the target is dynamic is determined by detecting its position at fixed intervals, such as 5 s, 3 s, 1 s, or 0.5 s. If consecutive detections show that the target's position has changed, the target is judged dynamic; if the position has not changed, it is judged static.
In one embodiment, a target within 0.6 m whose position changes over a 5 s detection interval is judged to be a dynamic target within the safe distance and is treated as a valid target.
S302, capturing an environment image and extracting the image information of the valid target from it. In one embodiment, this extraction includes: extracting a face image, extracting a license plate image, and tracking and extracting continuous images of the valid target's movement trajectory. In step S302, feature images such as faces and license plates may be learned in advance using deep learning; feature images such as face and license plate images are then extracted from the environment image captured by the image acquisition device, and the movement of the feature images is tracked to obtain continuous images of the movement trajectory. Because a feature image is usually an image of the valid target itself, tracking the feature image tracks the valid target's movement trajectory and yields continuous images of it.
S303, acquiring the physical features and movement trajectory of the valid target and recognizing the target from them. In one embodiment, the precise physical features and precise movement trajectory of the valid target are obtained, and the target is recognized and classified accordingly. In one embodiment, a lidar is used to obtain these precise physical features and trajectories, and the valid target is then recognized and classified with a classification and detection algorithm. In one embodiment, valid targets fall into a few main categories: pedestrians, non-motor vehicles, small motor vehicles, large motor vehicles, and others. Pedestrians, non-motor vehicles, small motor vehicles, and large motor vehicles have distinct, recognizable characteristics; objects that fit none of these categories, such as small animals and other conveyances, are placed in the "others" category.
Fig. 4 discloses the process of detecting and identifying a target in the multi-sensor monitoring method according to an embodiment of the invention. Referring to fig. 4, in an exemplary process, after the vehicle is shut down the ultrasonic sensors are powered by the power supply device and, under the control of the detection control module, are triggered at a frequency of 1 Hz (once per second) to measure the surrounding distance information. When the ultrasonic sensors detect obstacles around the vehicle, the target distance is compared with the set threshold of 0.6 m. A target at 0.6 m or more is judged a distant, invalid obstacle; a target closer than 0.6 m is judged a close-range obstacle. Next comes dynamic/static target detection: a detection and recognition algorithm judges whether the target is dynamic or static from behavior information such as whether the obstacle's distance changes within 5 s, together with trajectory prediction. A static target is a static, invalid obstacle; a dynamic target is identified as a dynamic obstacle. A dynamic obstacle within the 0.6 m range is judged a valid potential collision target, which triggers the perception cameras and lidars to start working.
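The trigger decision in this exemplary process can be sketched from a short window of 1 Hz ultrasonic readings. The 0.05 m change threshold for "distance varies" is an illustrative assumption; the patent states only that the distance change within 5 s is evaluated.

```python
def should_trigger(distances_m, threshold_m=0.6, eps_m=0.05):
    """Decide from ~5 s of 1 Hz ultrasonic distance readings whether
    to wake the perception cameras and lidars: the obstacle must stay
    inside the 0.6 m threshold and its distance must change over the
    window (i.e. it is a dynamic obstacle)."""
    near = [d for d in distances_m if d < threshold_m]
    if len(near) < 2:
        return False                        # no persistent close obstacle
    return max(near) - min(near) > eps_m    # distance varies -> dynamic
```

An obstacle steadily closing from 0.5 m to 0.3 m triggers the sensors; a parked object holding at 0.5 m, or anything beyond 0.6 m, does not.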
S304, performing data fusion to produce the recognition result, image data, physical features, and movement trajectory of the valid target. Because the ultrasonic sensors, cameras, and lidars each have advantages in different environments, fusing their data in the data fusion module ensures the accuracy and efficiency of target recognition.
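One common way to realize such a fusion step is late fusion of per-sensor classification scores; the weighted-sum sketch below is an illustrative choice, not the patent's stated algorithm.

```python
def fuse_classifications(per_sensor_scores, weights=None):
    """Late fusion: weighted sum of per-class confidence scores from
    each sensor (e.g. lidar, camera, ultrasonic), returning the
    winning class. `per_sensor_scores` maps sensor name -> {class: score};
    `weights` optionally favors sensors suited to the current environment."""
    weights = weights or {s: 1.0 for s in per_sensor_scores}
    combined = {}
    for sensor, scores in per_sensor_scores.items():
        w = weights.get(sensor, 1.0)
        for cls, score in scores.items():
            combined[cls] = combined.get(cls, 0.0) + w * score
    return max(combined, key=combined.get)
```

In low light, for example, the lidar's weight could be raised relative to the camera's, reflecting the environmental advantages the text describes.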
S305, storing the data: saving the raw data of the valid target together with the recognition result, image data, physical features, and movement trajectory produced by data fusion. In one embodiment, two types of data are saved in step S305: the raw data generated by the detection control module, the image control module, and the recognition control module; and the recognition result, image data, physical features, and movement trajectory of the valid target produced by the data fusion module.
Fig. 5 discloses the data fusion process in the multi-sensor monitoring method according to an embodiment of the invention. Data from the perception cameras is input to the image control module 142, data from the lidars is input to the recognition control module 143, data fusion is performed by the data fusion module 144, and the data storage module 145 then stores the result. Note that in the example shown in fig. 5 the ultrasonic sensors and the detection control module 141 are not illustrated; in other embodiments, the data fusion module 144 fuses the data generated by the detection control module 141, the image control module 142, and the recognition control module 143. Similarly, in the example shown in fig. 5 the data storage module 145 is not connected to the image control module and the recognition control module but directly saves the output of the data fusion module 144; in other embodiments, the data storage module 145 stores both the raw data generated by the detection, image, and recognition control modules and the fused recognition result, image data, physical features, and movement trajectory of the valid target.
In one embodiment, the multi-sensor monitoring method further comprises a power supply step: power is supplied by a power source of the vehicle, whose charge is monitored; power is supplied while the charge is not below the safe level, and is cut off when the charge falls below the safe level so that normal use of the vehicle is not affected. In one embodiment, the vehicle's power source is an on-board battery.
When the vehicle may be subject to intrusion, the multi-sensor monitoring equipment and monitoring method of the invention use multiple sensors of different types to record the vehicle's surroundings, and apply deep learning and multi-sensor signal fusion to identify and classify targets and predict their behavior trajectories. This lets the user locate accident information quickly and efficiently, provides accident evidence, and protects the user's safety.
The multi-sensor monitoring equipment and monitoring method have the following positive effects: 1) Strong environmental adaptability. The monitoring scheme of the invention is suitable for monitoring and recognition under different lighting conditions and can make full use of the lidar to identify obstacle information and extract valid information in dark conditions. When illumination is insufficient, characteristic information such as a target's license plate can still be identified by exploiting the intrinsic properties of the lidar, assisting the judgment of the situation around the vehicle body. 2) Wide range of application. The invention is suitable for monitoring and recognition in multiple scenarios: it can detect the surrounding environment while the vehicle is stationary, and it can also detect the state of surrounding vehicles while driving. 3) Efficient retrieval of positioning information. The invention can quickly locate targets in the vehicle's surroundings and, through multi-sensor information fusion, output obstacle classifications and surrounding information such as license plate numbers and facial features, enabling the user to identify valid information quickly. 4) High security level. Where necessary, the recorded data can serve as a basis for judgment by the public security system, further safeguarding the user's legal rights and interests.
It should also be noted that the above-mentioned embodiments are only specific embodiments of the present invention. The present invention is evidently not limited to these embodiments; similar changes or modifications that persons skilled in the art can derive directly from this disclosure fall within the scope of the present invention. The embodiments described above enable persons skilled in the art to make or use the invention, and modifications may be made to them without departing from the inventive concept; the scope of protection is therefore not limited by the embodiments described above but should be accorded the widest scope consistent with the innovative features set forth in the claims.

Claims (14)

1. A multi-sensor monitoring device, comprising:
the target detection device is arranged around the vehicle body, detects a target close to the vehicle, and sends out a detection signal when detecting the target close to the vehicle;
the image acquisition device acquires an environment image around the vehicle and a running image in front of the vehicle;
the target recognition device is arranged at the positions of four corners of the vehicle body and records the physical characteristics and the moving track of the target;
the processing device is connected with the target detection device, the image acquisition device and the target recognition device, receives a detection signal sent by the target detection device, judges whether the target is an effective target or not according to the detection signal, starts the image acquisition device and the target recognition device for the effective target, acquires an image of the effective target, records the physical characteristics and the moving track of the effective target, processes the image, the physical characteristics and the moving track of the effective target, identifies the type of the effective target, and stores the image, the physical characteristics and the moving track of the effective target;
the power supply device is connected to a power supply source of the vehicle and supplies power to the target detection device, the image acquisition device, the target identification device and the processing device, monitors the electric quantity of the power supply source of the vehicle, supplies power when the electric quantity of the power supply source is not lower than the safe electric quantity, and stops supplying power to the target detection device, the image acquisition device, the target identification device and the processing device when the electric quantity of the power supply source is lower than the safe electric quantity.
2. The multi-sensor monitoring apparatus of claim 1, wherein the object detecting device is an ultrasonic sensor comprising:
the long-distance ultrasonic sensor is arranged on the side surface of the vehicle body, and the detection distance of the long-distance ultrasonic sensor is not less than 8 m;
the short-distance ultrasonic sensors are arranged at the front end and the rear end of the vehicle body, the detection precision of the short-distance ultrasonic sensors is not lower than 0.1cm, and the distance resolution is not lower than 1 cm.
3. The multi-sensor monitoring device of claim 1, wherein the image capture device is a camera comprising:
the sensing cameras are arranged on four sides of the vehicle body and collect environmental images around the vehicle, and the sensing cameras are fisheye cameras;
the driving camera points to the front of the vehicle, acquires driving images in front of the vehicle, and is a common camera.
4. The multi-sensor monitoring device of claim 3,
the panoramic detection horizontal visual angle of the perception cameras is not less than 190 degrees, the detection distance is not less than 10m, and the four perception cameras collect 360-degree panoramic images around the vehicle;
the perception camera is provided with an infrared light supplementing device.
5. The multi-sensor monitoring device according to claim 1, wherein the target recognition means is a laser radar, four laser radars are respectively arranged at four corners of the vehicle body and form an angle of 45 degrees with the vehicle body, a horizontal detection view angle of the laser radar is not less than 120 °, a detection distance is not less than 50m, and the laser radar records physical characteristics and a moving track of the target.
6. The multi-sensor monitoring device of claim 1, wherein the processing means comprises: the system comprises a detection control module, an image control module, an identification control module, a data fusion module and a data storage module;
the detection control module is connected with the target detection device and used for receiving a detection signal sent by the target detection device, judging whether the target is an effective target or not according to the detection signal, and generating a trigger instruction for the effective target;
the image control module is connected with the image acquisition device, starts the image acquisition device according to the trigger instruction, and extracts image information of a valid target from the environment image acquired by the image acquisition device;
the identification control module is connected with the target identification device, starts the target identification device according to the trigger instruction, records the physical characteristics and the moving track of the effective target acquired by the target identification device, and identifies the effective target according to the physical characteristics and the moving track;
the data fusion module is connected to the detection control module, the image control module and the identification control module, and is used for carrying out fusion processing on data generated by the detection control module, the image control module and the identification control module to generate an identification result, image data, physical characteristics and a moving track of an effective target;
the data storage module is connected with the detection control module, the image control module, the identification control module and the data fusion module, and stores data generated by the detection control module, the image control module and the identification control module and an identification result, image data, physical characteristics and a moving track of an effective target generated by the data fusion module.
7. The multi-sensor monitoring device according to claim 6, wherein the detection control module processes the detection signal according to the target distance and a dynamic and static detection algorithm, determines a dynamic target with a distance less than a safe distance as a valid target, generates a trigger command, and continuously records the direction, distance and moving speed of the target.
8. The multi-sensor monitoring device of claim 6, wherein the extraction, by the image control module, of image information of the valid target from the environment image captured by the image acquisition device comprises: extracting a face image, extracting a license plate image, and tracking and extracting continuous images of the movement trajectory of the valid target.
9. The multi-sensor monitoring device according to claim 6, wherein the target recognition device acquires precise physical characteristics and a precise movement trajectory of the valid target, and the recognition control module identifies and classifies the valid target according to the precise physical characteristics and the precise movement trajectory.
10. A multi-sensor monitoring method, comprising:
detecting a target approaching a vehicle, judging whether the target is an effective target, neglecting an ineffective target, and entering the next step for the effective target;
collecting an environment image, and extracting image information of an effective target from the environment image;
acquiring physical characteristics and a moving track of an effective target, and identifying the effective target according to the physical characteristics and the moving track;
carrying out data fusion to generate an identification result, image data, physical characteristics and a moving track of an effective target;
and storing data, namely storing the original data of the effective target and the identification result, the image data, the physical characteristics and the moving track of the effective target generated by data fusion.
11. The multi-sensor monitoring method of claim 10, wherein determining whether the target is a valid target comprises:
detecting the distance of the target, judging the target to be an invalid target when the distance between the target and the vehicle is greater than the safe distance, and entering the next step when the distance between the target and the vehicle is less than or equal to the safe distance;
and applying a dynamic and static detection algorithm to the target, judging the target to be a non-effective target if the target is a static target, and judging the target to be an effective target if the target is a dynamic target.
12. The multi-sensor monitoring method of claim 10, wherein extracting image information of valid targets from the environmental image comprises: extracting a face image, extracting a license plate image, tracking and extracting continuous images of the moving track of the effective target.
13. The multi-sensor monitoring method of claim 10, wherein an accurate physical characteristic and an accurate moving track of the effective target are obtained, and the effective target is identified and classified according to the accurate physical characteristic and the accurate moving track.
14. The multi-sensor monitoring method of claim 10, further comprising a power supplying step of supplying power from a power supply source of the vehicle, monitoring a power amount of the power supply source of the vehicle, supplying power when the power amount of the power supply source is not lower than a safety power amount, and stopping the power supply when the power amount of the power supply source is lower than the safety power amount.
CN202010347251.2A 2020-04-28 2020-04-28 Multi-sensor monitoring equipment and monitoring method Active CN111516605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010347251.2A CN111516605B (en) 2020-04-28 2020-04-28 Multi-sensor monitoring equipment and monitoring method


Publications (2)

Publication Number Publication Date
CN111516605A true CN111516605A (en) 2020-08-11
CN111516605B CN111516605B (en) 2021-07-27

Family

ID=71905960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010347251.2A Active CN111516605B (en) 2020-04-28 2020-04-28 Multi-sensor monitoring equipment and monitoring method

Country Status (1)

Country Link
CN (1) CN111516605B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241763A (en) * 2020-10-19 2021-01-19 中国科学技术大学 Multi-source multi-mode dynamic information fusion and cognition method and system
CN112333409A (en) * 2020-11-11 2021-02-05 沈阳美行科技有限公司 Data processing method and device of automatic driving system and electronic equipment
CN113505673A (en) * 2021-06-30 2021-10-15 扬州明晟新能源科技有限公司 Glass carrying track identification method
CN115147952A (en) * 2022-06-07 2022-10-04 中国第一汽车股份有限公司 Recording method and device for vehicle emergency state and electronic equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799862A (en) * 2012-06-29 2012-11-28 陕西省交通规划设计研究院 System and method for pedestrian rapid positioning and event detection based on high definition video monitor image
KR101656302B1 (en) * 2016-03-07 2016-09-09 (주)디지탈엣지 Accident prevention and handling system and method
CN106379319A (en) * 2016-10-13 2017-02-08 上汽大众汽车有限公司 Automobile driving assistance system and control method
CN108490941A (en) * 2018-03-29 2018-09-04 奇瑞汽车股份有限公司 Applied to the automated driving system and its control method of road sweeper, device
CN109581358A (en) * 2018-12-20 2019-04-05 奇瑞汽车股份有限公司 Recognition methods, device and the storage medium of barrier
CN109658700A (en) * 2019-03-05 2019-04-19 上汽大众汽车有限公司 Intersection anti-collision prewarning apparatus and method for early warning
CN109740461A (en) * 2018-12-21 2019-05-10 北京智行者科技有限公司 Target is with subsequent processing method
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
CN110361741A (en) * 2019-07-16 2019-10-22 扬州瑞控汽车电子有限公司 A kind of the frontal collisions method for early warning and its system of view-based access control model and radar fusion
CN110371108A (en) * 2019-06-14 2019-10-25 浙江零跑科技有限公司 Cartborne ultrasound wave radar and vehicle-mounted viewing system fusion method
GB2573635A (en) * 2018-03-21 2019-11-13 Headlight Ai Ltd Object detection system and method
CN209955868U (en) * 2019-06-04 2020-01-17 福建省威盛机械发展有限公司 Forklift truck with real-time monitoring device for vehicle body environment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241763A (en) * 2020-10-19 2021-01-19 中国科学技术大学 Multi-source multi-mode dynamic information fusion and cognition method and system
CN112333409A (en) * 2020-11-11 2021-02-05 沈阳美行科技有限公司 Data processing method and device of automatic driving system and electronic equipment
CN113505673A (en) * 2021-06-30 2021-10-15 扬州明晟新能源科技有限公司 Glass carrying track identification method
CN115147952A (en) * 2022-06-07 2022-10-04 中国第一汽车股份有限公司 Recording method and device for vehicle emergency state and electronic equipment

Also Published As

Publication number Publication date
CN111516605B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN111516605B (en) Multi-sensor monitoring equipment and monitoring method
US10421436B2 (en) Systems and methods for surveillance of a vehicle using camera images
Broggi et al. A new approach to urban pedestrian detection for automatic braking
CN104648254B (en) Vehicle mirror auto-folding system
US9466210B2 (en) Stop violation detection system and method
CN102765365B (en) Pedestrian detection method based on machine vision and pedestrian anti-collision warning system based on machine vision
US8731816B2 (en) Method for classifying an object as an obstacle
CN109686031B (en) Recognition and following method based on security monitoring
US8140226B2 (en) Security system and a method to derive a security signal
CN102592475A (en) Crossing traffic early warning system
EP1671216A2 (en) Moving object detection using low illumination depth capable computer vision
US20120163671A1 (en) Context-aware method and apparatus based on fusion of data of image sensor and distance sensor
CN107015219A (en) Anti-collision method and system with radar imaging function
Gavrila et al. A multi-sensor approach for the protection of vulnerable traffic participants: the PROTECTOR project
JP2009280109A (en) Vehicle vicinity monitoring system
CN106297314A (en) Method and device for detecting wrong-way driving or lane-line-pressing vehicle behavior, and a dome camera
CN103871243A (en) Wireless vehicle management system and method based on active safety platform
CN111717243B (en) Rail transit monitoring system and method
Chen et al. Real-time approaching vehicle detection in blind-spot area
KR101861525B1 (en) Pedestrian Protection System for autonomous car
Saboune et al. A visual blindspot monitoring system for safe lane changes
CN215264887U (en) Event recorder
CN113223276B (en) Alarm method and device for pedestrian barrier-crossing behavior based on video recognition
CN112519801A (en) Safety protection system and method for running vehicle
CN218825297U (en) Multi-sensor integrated robot for intelligent residential-community patrol

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant