WO2022228023A1 - Blind spot monitoring method and apparatus, electronic device, and storage medium - Google Patents

Blind spot monitoring method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022228023A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
information
distance information
monitoring image
historical
Prior art date
Application number
PCT/CN2022/084399
Other languages
English (en)
Chinese (zh)
Inventor
罗铨
李弘扬
蒋沁宏
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022228023A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Definitions

  • The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202110467776.4, filed on April 28, 2021 and entitled "A blind spot monitoring method, device, electronic device and storage medium".
  • The entire contents of the above-mentioned Chinese patent application are hereby incorporated into the present disclosure by reference.
  • the present disclosure relates to the technical field of image processing, and in particular, to a blind spot monitoring method, device, electronic device, and storage medium.
  • During driving, blind spots easily arise because the range that can be observed is limited. Blind spots can readily cause drivers or autonomous vehicles to make errors of judgment and operation, reducing driving safety.
  • the embodiments of the present disclosure provide at least a blind spot monitoring method, an apparatus, an electronic device, and a storage medium.
  • An embodiment of the present disclosure provides a blind spot monitoring method, including: acquiring a current frame monitoring image collected by a collection device on a target vehicle; performing object detection on the current frame monitoring image to obtain the type information and position of the object included in the current frame monitoring image; determining, according to the position of the object and the blind spot of the target vehicle, the target object located in the blind spot of the target vehicle; and generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
  • That is, the current frame monitoring image collected by the collection device on the target vehicle is acquired, object detection is performed on it to determine the type information and position of the object it contains, the target object located in the blind spot of the target vehicle is determined according to the position of the object and the blind spot, and a monitoring result is then generated according to the type information and position of the target object and the driving state of the target vehicle.
  • different monitoring results can be generated for different types of target objects, thereby improving driving safety and blind spot monitoring performance.
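  • For illustration only, the flow described above can be sketched in Python as follows; detect_objects, in_blind_spot, and generate_result are hypothetical helpers, not functions disclosed in the application:

    def monitor_blind_spot(frame, driving_state, blind_spot_region):
        # Object detection yields (type, position) entries for the current frame.
        detections = detect_objects(frame)  # hypothetical detector
        # Keep only objects whose position falls inside the blind spot.
        targets = [d for d in detections
                   if in_blind_spot(d["position"], blind_spot_region)]
        # The monitoring result depends on the target object's type/position
        # and on the driving state (e.g. steering information) of the vehicle.
        return [generate_result(t["type"], t["position"], driving_state)
                for t in targets]
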
  • the monitoring result includes warning information
  • the driving state of the target vehicle includes steering information of the target vehicle
  • Generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: determining the level of the alarm information according to the type information and position of the target object and the steering information of the target vehicle; and generating and prompting alarm information of the determined level.
  • the monitoring result includes vehicle control instructions
  • the driving state of the target vehicle includes steering information of the target vehicle
  • Generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle; the blind spot monitoring method further includes: controlling the target vehicle to travel based on the vehicle control instruction.
  • Determining the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle includes: determining, according to the position of the object in the current frame monitoring image, the current first distance information between the target vehicle and the object in the current frame monitoring image; and determining, according to the current first distance information, the target object located in the blind spot of the target vehicle.
  • the target object located in the blind spot of the target vehicle can be accurately detected from all the objects included in the monitoring image of the current frame.
  • Determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image includes: determining, based on the current frame monitoring image, the distance information to be adjusted between the target vehicle and the object in the current frame monitoring image; and adjusting the distance information to be adjusted based on the scale change information of the object between its scales in every two adjacent frames of the multi-frame historical frame monitoring images collected by the collection device, and the historical first distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
  • Adjusting the distance information to be adjusted to obtain the current first distance information between the target vehicle and the object includes: adjusting the distance information to be adjusted until the error amount of the scale change information is the smallest, to obtain the adjusted distance information, where the error amount is determined based on the distance information to be adjusted, the scale change information, and the historical first distance information corresponding to each frame of the multi-frame historical frame monitoring images; and determining the current first distance information based on the adjusted distance information.
  • Before determining the current first distance information based on the adjusted distance information, the blind spot monitoring method further includes: performing target detection on the current frame monitoring image to determine the position information of the detection frame of the object contained in the current frame monitoring image; and determining the current second distance information based on the position information of the detection frame and the calibration parameters of the collection device.
  • Determining the current first distance information based on the adjusted distance information then includes: determining distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring images, the historical first distance information corresponding to each such frame, and the adjusted distance information; and adjusting the adjusted distance information by the distance offset information to obtain the current first distance information.
  • the adjusted distance information can be further adjusted, so as to obtain the current distance information of the target vehicle and the object with high accuracy.
  • Determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring images, the historical first distance information corresponding to each such frame, and the adjusted distance information includes: determining a first linear fitting coefficient of a first fitting curve fitted from the current second distance information and the historical second distance information corresponding to each frame of the multi-frame historical frame monitoring images; determining a second linear fitting coefficient of a second fitting curve fitted from the adjusted distance information and the historical first distance information corresponding to each frame of the multi-frame historical frame monitoring images; and determining the distance offset information for the adjusted distance information based on the first linear fitting coefficient and the second linear fitting coefficient.
  • Determining the current second distance information based on the position information of the detection frame and the calibration parameters of the collection device includes: obtaining, based on the position information of the detection frame, the pixel coordinate value of a set corner point in the detection frame; and determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the collection device, and the pixel coordinate value of the vanishing point of the lane line used when determining the calibration parameters of the collection device.
  • The calibration parameters of the collection device include a first height value of the collection device relative to the ground and a focal length of the collection device. Determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the collection device, and the pixel coordinate value of the vanishing point of the lane line includes: determining a first pixel height value of the collection device relative to the ground based on the pixel coordinate value of the vanishing point of the lane line and the pixel coordinate value of the set corner point in the detection frame; determining, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the current frame monitoring image relative to the ground; determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; and determining the current second distance information based on the second height value, the focal length of the collection device, and the second pixel height value.
  • In this way, by introducing the pixel coordinate value of the vanishing point of the lane line and the calibration parameters of the collection device, the actual height value of the object can be obtained quickly and accurately, and the current second distance information between the target vehicle and the object can then be determined quickly and accurately.
  • Determining, based on the current frame monitoring image, the distance information to be adjusted between the target vehicle and the object in the current frame monitoring image includes: acquiring the scale change information of the object between its scale in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determining the distance information to be adjusted based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
  • In this way, using the high-accuracy historical first distance information corresponding to the adjacent historical frame monitoring image together with the scale change information of the object between the current frame monitoring image and that adjacent historical frame monitoring image, the distance information to be adjusted can be obtained more accurately, so that the adjustment speed can be improved when the current first distance information is later determined based on the distance information to be adjusted.
  • The scale change information between the scales of the object in two adjacent frames of monitoring images is determined in the following manner: respectively extracting the first position information, in the earlier of the two adjacent frames of monitoring images, of a plurality of feature points contained in the object, and the second position information in the later frame; and determining, based on the first position information and the second position information, the scale change information of the object between its scales in the two adjacent frames of monitoring images.
  • Determining, based on the first position information and the second position information, the scale change information between the scales of the object in two adjacent frames of monitoring images includes: determining, based on the first position information, a first scale value, in the earlier monitoring image, of a target line segment formed by the plurality of feature points contained in the object; determining, based on the second position information, a second scale value of the target line segment in the later monitoring image; and determining, based on the first scale value and the second scale value, the scale change information of the object between its scales in the two adjacent frames of monitoring images.
  • In this way, the position of the object in the monitoring image can be represented more accurately by the position information of the multiple feature points it contains, so that more accurate scale change information is obtained; when the distance information to be adjusted is adjusted with this scale change information, more accurate current first distance information can be obtained.
  • an embodiment of the present disclosure provides a blind spot monitoring device, including:
  • an acquisition module used to acquire the current frame monitoring image acquired by the acquisition device on the target vehicle
  • a detection module configured to perform object detection on the current frame monitoring image to obtain the type information and position of the object included in the current frame monitoring image;
  • a determining module configured to determine the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle;
  • the generating module is configured to generate monitoring results according to the type information and position of the target object and the driving state of the target vehicle.
  • Embodiments of the present disclosure provide an electronic device, including: a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the blind spot monitoring method according to the first aspect are performed.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the blind spot monitoring method according to the first aspect are performed.
  • FIG. 1 shows a flowchart of a blind spot monitoring method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of determining a blind spot of a visual field provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a specific method for determining current first distance information provided by an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a method for determining scale change information provided by an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of a method for determining distance information to be adjusted provided by an embodiment of the present disclosure
  • FIG. 6 shows a flowchart of a method for determining current first distance information provided by an embodiment of the present disclosure
  • FIG. 7 shows a flowchart of a method for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of a positional relationship among a target vehicle, a collection device, and a target object provided by an embodiment of the present disclosure;
  • FIG. 9 shows a schematic diagram of a detection frame of a target object provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a principle for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 11 shows a schematic diagram of a scenario for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 12 shows a schematic structural diagram of a blind spot monitoring device provided by an embodiment of the present disclosure
  • FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a blind spot monitoring method.
  • The embodiment of the present disclosure acquires the current frame monitoring image collected by the acquisition device on the target vehicle and performs object detection on it to determine the type information and position of the object included in the current frame monitoring image; then, according to the position of the object and the blind spot of the target vehicle, the target object located in the blind spot of the target vehicle is determined; and a monitoring result is then generated according to the type information and position of the target object and the driving state of the target vehicle.
  • different monitoring results can be generated for different types of target objects, thereby improving blind spot monitoring performance and improving driving safety.
  • the device includes, for example, a terminal device or a server or other processing device, and the terminal device may be a computing device, a vehicle-mounted device, or the like.
  • the blind spot monitoring method may be implemented by the processor calling computer-readable instructions stored in the memory.
  • the blind spot monitoring method includes the following S101-S104:
  • In a scenario where a driver drives a vehicle, the target vehicle may include, for example, the vehicle driven by the driver; in an autonomous driving scenario, the target vehicle may include, for example, an autonomous vehicle; in a warehousing and freight scenario, the target vehicle may include, for example, a freight transport vehicle.
  • a collection device may also be mounted on the target vehicle, and the collection device may be a monocular camera set on the target vehicle, which is used for shooting during the driving of the target vehicle.
  • the target area includes: a blind spot of the vehicle's field of vision
  • The capture device may be installed on a pillar of the vehicle, with its camera facing the blind spot of the vehicle's field of view.
  • FIG. 2 is a schematic diagram of determining the blind area of the field of view provided by the embodiment of the present disclosure.
  • the vehicle 1 is equipped with a collection device 2
  • the blind area of the visual field included in the target area collected by the collection device 2 includes the positions indicated by 3 and 4 .
  • The following method may be used: acquiring the monitoring video obtained by the acquisition device on the target vehicle performing image acquisition on the target area, and determining the current frame monitoring image from the monitoring video, where the target area includes an area within the shooting field of view of the acquisition device.
  • the direction of the capture can be pre-set; after the capture device is mounted on the target vehicle, it can be determined that the target area to be captured is the area within the capture range of the capture device.
  • the acquisition device can collect images of the target area, obtain monitoring videos, and determine the monitoring images of the current frame from the monitoring videos.
  • Any method of determining a video frame image from the monitoring video can be used; for example, the captured video frame closest to the current time is taken as the current frame monitoring image.
  • object detection may also be performed on the current frame monitoring image to obtain the type information and position of the object included in the current frame monitoring image.
  • a pre-trained target detection neural network can be used to perform object detection processing on the current frame monitoring image
  • Object detection algorithms that can be used include: convolutional neural networks (CNN), region-based CNN (RCNN), Fast RCNN, and Faster RCNN.
  • the objects that can be detected include, for example, other driving vehicles, pedestrians, road facilities, and road obstacles.
  • The position of the object included in the image can also be obtained. From the position of the detected object in the image, the actual position of the object relative to the target vehicle during actual driving can be further determined.
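  • As a sketch only, one way to run such a detector is torchvision's pretrained Faster R-CNN (the application names Faster RCNN among usable algorithms; the specific library and weights used here are an assumption):

    import torch
    import torchvision

    # One possible off-the-shelf detector; the disclosure does not mandate a library.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect(frame_tensor):
        # frame_tensor: CHW float tensor in [0, 1] for the current monitoring image.
        with torch.no_grad():
            out = model([frame_tensor])[0]
        # Each detection carries a class label (type information) and a box (position).
        return list(zip(out["labels"].tolist(),
                        out["boxes"].tolist(),
                        out["scores"].tolist()))
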
  • The target object located in the blind spot of the target vehicle can be determined in the following manner: according to the position of the object in the current frame monitoring image, determine the current first distance information between the target vehicle and the object in the current frame monitoring image; and, according to the current first distance information, determine the target object located in the blind spot of the target vehicle's field of view.
  • During the driving of a smart car, the monocular camera mounted on it may be affected by problems such as road bumps or occlusions caused by changing road conditions.
  • If the distance measurement is performed based only on the detection frame corresponding to the object in the current frame monitoring image, the accurate distance to the object may not be detected.
  • If the distance to the object is continuously detected based on the detection frame alone, the obtained distance between the smart car and the object is not stable in time series.
  • an embodiment of the present disclosure also proposes a distance detection scheme
  • the method includes the following S301-S302:
  • the objects may include, but are not limited to, vehicles, pedestrians, fixed obstacles, and the like.
  • The following description takes the case where the object is a vehicle as an example.
  • The current frame monitoring images discussed in the embodiments of the present disclosure are monitoring images in which the object is not detected for the first time. If the current frame monitoring image is a monitoring image in which the object is detected for the first time, the current second distance information between the target vehicle and the object can be determined directly from the position information of the detection frame, the parameter information of the acquisition device, and the pixel coordinate value of the vanishing point obtained in the calibration process, and this current second distance information can be used directly as the current first distance information; the process of determining the current second distance information is described in detail later.
  • The current first distance information corresponding to the current frame monitoring image, and the historical first distance information corresponding to each frame of the historical frame monitoring images, indicate distance information obtained after adjustment.
  • The distance information to be adjusted between the target vehicle and the object, determined based on the current frame monitoring image, can be obtained from the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image and the scale change information of the object between the current frame monitoring image and that adjacent historical frame monitoring image; the distance information to be adjusted is then adjusted.
  • The scale change information of the object in two adjacent frames of monitoring images (for example, monitoring image i and monitoring image j) among the multi-frame historical frame monitoring images collected by the acquisition device includes the ratio of the scale of the object in the later monitoring image j to the scale of the object in the earlier monitoring image i; the specific determination process is described later.
  • The embodiment of the present disclosure determines the historical first distance information between the target vehicle and the object corresponding to each frame of the historical frame monitoring images in the same manner as the current first distance information between the target vehicle and the object; therefore, the process of determining the historical first distance information is not described repeatedly in this embodiment.
  • The distance information to be adjusted obtained from the current monitoring image can be adjusted based on the scale change information of the object between adjacent frames of the multi-frame historical frame monitoring images and the historical first distance information between the target vehicle and the object that has already been adjusted in the historical process, so that the distance between the target vehicle and the object corresponding to two adjacent frames of monitoring images changes relatively smoothly and truly reflects the actual distance change between the target vehicle and the object during driving; this improves the stability of the predicted distance between the target vehicle and the object in time series.
  • the scale change information of the object in two adjacent frames of monitoring images can also reflect the distance change between the target vehicle and the object.
  • The historical first distance information between the target vehicle and the object corresponding to each historical frame monitoring image is relatively accurate distance information obtained after adjustment. Therefore, after adjusting the distance information to be adjusted using the scale change information of the object between adjacent frames of the multi-frame historical frame monitoring images collected by the acquisition device, and the historical first distance information between the target vehicle and the object corresponding to each frame of those images, more accurate current first distance information can be obtained.
  • That is, when determining the current first distance information, it can be determined based on the scale change information between the scales of the object in adjacent frames of the multi-frame historical frame monitoring images and the historical first distance information that has already been adjusted in the historical process.
  • the scale change information between the scales of an object in two adjacent frames of monitoring images can be determined in the following manner, including the following S401 to S402:
  • S401: Respectively extract the first position information, in the earlier of the two adjacent frames of monitoring images, of the multiple feature points included in the object, and the second position information in the later frame.
  • Target detection can be performed on the monitoring image based on a pre-trained target detection model to obtain a detection frame representing the position of the object in the monitoring image, and multiple feature points constituting the object can then be extracted within the detection frame; these feature points are points of the object where the pixel values change drastically, such as inflection points and corner points.
  • The line connecting any two feature points in the same frame of monitoring image forms a line segment. From the first position information of any two feature points in the earlier monitoring image, the scale of the line segment they form in that image can be obtained; similarly, from the second position information of the same two feature points in the later monitoring image, the scale of that line segment in the later image can be obtained. In this way, the scales of multiple line segments on the object can be obtained in the earlier monitoring image and in the later monitoring image respectively.
  • The scale change information of the object in the two adjacent frames of monitoring images can then be determined from the respective scales of the multiple line segments in the earlier monitoring image and in the later monitoring image.
  • There may be n target line segments, where n is greater than or equal to 1 and less than a set threshold. Based on the first position information of the feature points included in each target line segment, the first scale value corresponding to that target line segment can be obtained; based on the second position information of those feature points, the second scale value corresponding to the target line segment can be obtained.
  • The ratio between the second scale value and the first scale value corresponding to any target line segment can represent the scale change information corresponding to that target line segment, and the scale change information of the object in the two adjacent frames of monitoring images is then determined from the scale change information corresponding to the multiple target line segments.
  • For example, the average value of the scale change information corresponding to the set number of target line segments can be used as the scale change information of the object in the two adjacent frames of monitoring images.
  • Compared with using the position information of the upper-left and lower-right corners of the detection frame in the monitoring image to represent the position of the object, determining the scale change information from the position information of multiple feature points contained in the object represents the object's position in the monitoring image more accurately, so more accurate scale change information is obtained; when this scale change information is used to adjust the distance information to be adjusted, more accurate current first distance information can be obtained.
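  • A minimal sketch of this computation, assuming the feature points have already been matched between the two frames (the matching step itself is outside this snippet):

    import numpy as np

    def scale_change(pts_prev, pts_next):
        """Scale change of an object between two adjacent monitoring images.

        pts_prev, pts_next: (N, 2) arrays of matched feature-point coordinates
        in the earlier and later frame (matching is assumed done upstream).
        """
        n = len(pts_prev)
        ratios = []
        for i in range(n):
            for j in range(i + 1, n):
                l_prev = np.linalg.norm(pts_prev[i] - pts_prev[j])  # first scale value
                l_next = np.linalg.norm(pts_next[i] - pts_next[j])  # second scale value
                if l_prev > 0:
                    ratios.append(l_next / l_prev)
        # Average over target line segments, one aggregation mentioned in the text.
        return float(np.mean(ratios))
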
  • When determining, based on the current frame monitoring image, the distance information to be adjusted between the target vehicle and the object in the current frame monitoring image, as shown in FIG. 5, the following S501-S502 may be included:
  • S501 Acquire the scale change information between the scale of the object in the monitoring image of the current frame and the scale in the monitoring image of the historical frame adjacent to the monitoring image of the current frame.
  • The historical frame monitoring image adjacent to the current frame monitoring image refers to the monitoring image of the frame immediately preceding the current frame at acquisition time. The scale change information of the object between the current frame monitoring image and this adjacent historical frame monitoring image can be represented by the ratio of the object's scale in the current frame monitoring image to its scale in the adjacent historical frame monitoring image; the specific determination process is elaborated later.
  • As the object approaches the target vehicle, the scale of the object in the collected monitoring images gradually increases; that is, the scale change of the object between two adjacent frames of monitoring images reflects the change in the distance between the target vehicle and the object corresponding to those two frames.
  • The distance information to be adjusted can be determined by the following formula (1): d_0_scale = D_1_final / scale, where d_0_scale represents the distance information to be adjusted; scale represents the ratio of the scale of the object in the current frame monitoring image to its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and D_1_final represents the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
  • In this way, using the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image, together with the scale change information of the object between the two images, relatively accurate distance information to be adjusted can be obtained, so that the adjustment speed can be improved when the current first distance information is later determined based on the distance information to be adjusted.
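  • In code form, the reconstruction of formula (1) given above is a one-line division (a sketch based on the stated symbol definitions, not the published formula image):

    def distance_to_adjust(d1_final, scale):
        # scale = object scale in the current frame / scale in the adjacent
        # historical frame; under a pinhole model, distance is inversely
        # proportional to image scale.
        return d1_final / scale
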
  • In some cases, the scale change information obtained in this way may change suddenly compared with the scale change information between adjacent frames of the multi-frame historical frame monitoring images, so the distance information to be adjusted obtained from it may deviate from the adjacent historical distances. Therefore, the scale change information between adjacent frames of the multi-frame historical frame monitoring images collected for the object, and the historical first distance information corresponding to each frame of those images, can be used to adjust the distance information to be adjusted.
  • the following steps S601 to S603 may be included:
  • The error amount E, representing the error of the scale change information of the object between the current frame monitoring image and the historical frame monitoring images adjacent to it, can be computed based on the following formula (2).
  • The above formula (2) can be optimized through a variety of optimization methods; for example, d_0_scale in formula (2) can be adjusted in a manner including but not limited to Newton's method or gradient descent.
  • When E is minimized, the adjusted distance information is obtained.
  • In this way, the error of the scale change information of the object between the current frame monitoring image and the historical frame monitoring images adjacent to it can be reduced, thereby improving the stability of the determined adjusted distance information.
  • In some embodiments, the adjusted distance information may be further adjusted to obtain the current first distance information between the target vehicle and the object.
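  • A hedged sketch of this adjustment: the exact form of formula (2) is published as an image, so the squared-error E below (comparing observed cumulative scale changes with the ratios implied by a candidate distance) is an assumption, and plain gradient descent stands in for the Newton/gradient-descent step named above:

    import numpy as np

    def adjust_distance(d0_init, adj_scales, hist_final, lr=0.1, steps=100):
        """Minimize an assumed scale-consistency error E over the current distance d0.

        adj_scales: scale-change ratios between adjacent frames, ordered from the
        oldest pair up to (previous frame -> current frame).
        hist_final: adjusted (first) distances of the historical frames, same order.
        """
        # Cumulative scale change from each historical frame to the current frame;
        # under a pinhole model this should match hist_final[k] / d0.
        cum = np.cumprod(np.asarray(adj_scales)[::-1])[::-1]
        hist = np.asarray(hist_final, dtype=float)
        d0 = float(d0_init)
        for _ in range(steps):
            resid = cum - hist / d0                        # residuals of E
            grad = np.sum(2.0 * resid * hist / d0 ** 2)    # dE/dd0
            d0 -= lr * grad                                # gradient-descent step
        return d0
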
  • the blind spot monitoring method provided by the embodiment of the present disclosure further includes the following S701 to S702:
  • S701 perform target detection on the monitoring image of the current frame, and determine the position information of the detection frame of the object included in the monitoring image of the current frame.
  • S702 Determine the current second distance information based on the position information of the detection frame and the calibration parameters of the collection device.
  • In some embodiments, the acquisition device provided on the target vehicle can be calibrated; for example, the acquisition device is installed on the top of the target vehicle, as shown in FIG. 8, so that the optical axis of the acquisition device is parallel to the horizontal ground and the advancing direction of the target vehicle. In this way, the focal length (f_x, f_y) of the acquisition device and the height H_c of the acquisition device relative to the ground can be obtained.
  • target detection can be performed on the monitoring image of the current frame by using a pre-trained target detection model to obtain the object contained in the monitoring image of the current frame, and the detection frame corresponding to the object.
  • the position information of the detection frame is obtained.
  • The position information of the detection frame may include the position information of the corner points of the detection frame in the current frame monitoring image, for example, the pixel coordinate values of the corner points A, B, C and D in the current frame monitoring image.
  • H_x represents the actual width of the object;
  • H_y represents the actual height of the object relative to the ground;
  • w_b represents the pixel width of the object in the current frame monitoring image, which can be determined from the pixel width of the detection frame ABCD of the object;
  • h_b represents the pixel height of the object relative to the ground, which can be determined from the pixel height of the detection frame ABCD of the object;
  • D_0 represents the current second distance information between the target vehicle and the object.
  • H_x and H_y can be determined based on the type of the detected object; for example, when the object is a vehicle, the actual width and actual height of that vehicle can be determined based on its detected type and a pre-stored correspondence between vehicle types and vehicle heights and widths.
  • The pixel width w_b of the object in the current frame monitoring image can be determined from the pixel coordinate values of corner points A and B of the detection frame ABCD shown in FIG. 9, or from the pixel coordinate values of corner points C and D; the pixel height h_b of the object can be determined from the pixel coordinate values of corner points B and C, or from the pixel coordinate values of corner points A and D; this is not repeated here.
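  • Formula (3) itself is published as an image; from the symbol definitions above it corresponds to the standard pinhole relation, sketched here as an assumption:

    def pinhole_distance(f_y, H_y, h_b):
        # Pinhole model: pixel height h_b = f_y * H_y / D_0, hence the current
        # second distance D_0 = f_y * H_y / h_b (assumed form of formula (3)).
        return f_y * H_y / h_b
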
  • Determining the current second distance information based on the position information of the detection frame and the calibration parameters of the acquisition device includes the following S7021-S7022:
  • S7021: Obtain, based on the position information of the detection frame, the pixel coordinate value of the set corner point in the detection frame.
  • S7022: Determine the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the vanishing point of the lane line used in determining the calibration parameters of the acquisition device.
  • The principle of determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the vanishing point of the lane line is explained below.
  • During the initial calibration of the acquisition device, the target vehicle can be parked between parallel lane lines. When projected onto the image plane of the acquisition device, the parallel lane lines intersect at a point in the distance, which can be called the vanishing point of the lane lines.
  • The vanishing point of the lane line approximately coincides with point V in FIG. 9.
  • The vanishing point of the lane line can be used to represent the projection position of the acquisition device in the monitoring image, and the pixel coordinate value of the vanishing point of the lane line can indicate the pixel coordinate value of the acquisition device in the current frame monitoring image.
  • The distance between the two points E and G can represent the actual height H_c of the acquisition device relative to the ground; the distance between the two points F and G can represent the actual height H_y of the object relative to the ground; the distance between the two points M and N can represent the pixel height h_b of the object relative to the ground; and the distance between the two points M and V can represent the pixel height of the acquisition device relative to the ground.
  • By similar triangles, when the acquisition device captures the current frame monitoring image, the ratio between the actual height H_c of the acquisition device relative to the ground and the actual height H_y of the object relative to the ground is equal to the ratio between the pixel height of the acquisition device relative to the ground and the pixel height h_b of the object relative to the ground.
  • Therefore, the pixel height h_b and the pixel height of the acquisition device relative to the ground can be used, together with H_c, to predict the actual height H_y of the object relative to the ground.
  • the current second distance information can be determined in combination with the above formula (3).
  • As shown in FIG. 11, an image coordinate system is established for the current frame monitoring image, and the pixel coordinate value (x_v, y_v) of the vanishing point V of the lane line, the pixel coordinate value (x_tl, y_tl) of the upper-left point A of the detection frame of the object, and the pixel coordinate value (x_br, y_br) of the lower-right point C are marked in this coordinate system. Further, the distance between the two points M and N shown in FIG. 9 can be determined from the pixel coordinate values of corner points A and C along the y-axis, and the distance between the two points M and V shown in FIG. 9 can be determined from the pixel coordinate values of corner point C and vanishing point V along the y-axis.
  • The calibration parameters of the collection device include the first height value of the collection device relative to the ground and the focal length of the collection device. For the above S7022, determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the collection device, and the pixel coordinate value of the vanishing point of the lane line may include the following S70221 to S70224:
  • S70221: Determine the first pixel height value of the collection device relative to the ground based on the pixel coordinate value of the vanishing point of the lane line and the pixel coordinate value of the set corner point in the detection frame. For example, the first pixel height value can be obtained as y_br - y_v.
  • S70222 Determine, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the monitoring image of the current frame relative to the ground.
  • For example, the difference between the pixel coordinate values along the y-axis of corner points A and C in the above-mentioned FIG. 11 may be used as the second pixel height value, which may be represented by h_b.
  • S70223 Determine a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value.
  • H_c represents the first height value, i.e., the actual height of the acquisition device relative to the ground, which can be obtained when the acquisition device is calibrated;
  • H_y represents the second height value, i.e., the actual height of the object relative to the ground.
  • S70224 Determine current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
  • the current second distance information may be determined by the above formula (3).
  • In this way, by introducing the pixel coordinate value of the vanishing point of the lane line and the calibration parameters of the acquisition device, the actual height value of the object can be obtained quickly and accurately, and the current second distance information between the target vehicle and the object can then be determined quickly and accurately.
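  • Putting S70221-S70224 together, a sketch using the y coordinates named above (y_v for the vanishing point V, y_tl and y_br for corners A and C); the final step reuses the assumed pinhole form of formula (3):

    def second_distance(y_v, y_tl, y_br, H_c, f_y):
        h_cam_px = y_br - y_v        # S70221: first pixel height value (device vs. ground)
        h_b = y_br - y_tl            # S70222: second pixel height value (object vs. ground)
        H_y = H_c * h_b / h_cam_px   # S70223: second height value, from H_c/H_y = h_cam_px/h_b
        return f_y * H_y / h_b       # S70224: current second distance, assumed formula (3)
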
  • the current second distance information and each historical second distance information are the distance information between the target vehicle and the object determined based on the single-frame monitoring image.
  • When the detection frame of the object is accurate and complete, second distance information with high accuracy between the target vehicle and the object can be obtained based on the position information of the detection frame.
  • The accuracy of the multiple second distance information values determined in this way is high, but their fluctuation is large.
  • Each historical first distance information is distance information determined based on multiple frames of monitoring images, and the adjusted distance information is likewise adjusted based on multiple historical first distance information values; therefore, the fluctuation among the multiple historical first distance information values and the adjusted distance information is small.
  • However, because the scale changes corresponding to adjacent frames of monitoring images are used when determining the historical first distance information and the adjusted distance information, and the process of determining the scale change information depends on the position information of the feature points of the recognized object in the monitoring image, any error there accumulates; therefore, the accuracy of the determined historical first distance information and adjusted distance information is lower than the accuracy of the second distance information determined based on a complete detection frame.
  • Considering the characteristics of these two types of distance information, the adjusted distance information can be further adjusted using the distance information between the target vehicle and the object determined for the multiple frames of monitoring images in both of these ways.
  • S6032 Adjust the adjusted distance information based on the distance offset information to obtain current first distance information.
  • the adjusted distance information may be further adjusted based on the distance offset information, so that the current first distance information is more accurate.
  • the adjusted distance information can be further adjusted, so as to obtain the current distance information of the target vehicle and the object with high accuracy.
  • the following steps S60311 to S60313 may be included:
  • The current second distance information can be represented by D_0, and the multiple historical second distance information values by D_1, D_2, D_3, ..., respectively. Linear fitting can be performed on D_0 and D_1, D_2, D_3, ... to obtain a first fitting curve composed of the multiple historical second distance information values and the current second distance information; the first fitting curve can be represented by the following formula (6): y = k_1 * x + b_1.
  • During fitting, the frame numbers 0, 1, 2, 3, ... of the monitoring images used in determining the multiple second distance information values can be used as the x values, and the second distance information values D_0, D_1, D_2, D_3, ... corresponding to those frame numbers as the y values.
  • The adjusted distance information can be represented by D_0_scale, and the multiple historical first distance information values by D_1_final, D_2_final, D_3_final, ..., respectively. Linear fitting can be performed on D_0_scale and D_1_final, D_2_final, D_3_final, ... to obtain a second fitting curve composed of the multiple historical first distance information values and the adjusted distance information; the second fitting curve can be expressed by the following formula (7): y = k_2 * x + b_2.
  • During fitting, the frame numbers 0, 1, 2, 3, ... of the monitoring images used in determining the multiple historical first distance information values and the adjusted distance information can be used as the x values, and the adjusted distance information and historical first distance information values corresponding to those frame numbers as the y values.
  • S60313 Determine distance offset information for the adjusted distance information based on the first linear fitting coefficient and the second linear fitting coefficient.
  • The distance offset information can be determined from the first and second linear fitting coefficients by the following formula (8), and the adjusted distance information can then be corrected according to the following formula (9) using the distance offset information, to obtain the current first distance information D_0_final.
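  • A sketch of S60311-S60313 with numpy; formulas (8) and (9) are published as images, so evaluating both fits at the current frame number and taking the difference of the fitted values as the offset, then adding it, is an assumption:

    import numpy as np

    def offset_adjust(d0_scale, hist_final, d0, hist_second):
        # Frame numbers 0, 1, 2, ... index the current frame and the historical frames.
        x = np.arange(1 + len(hist_second))
        # First fit (formula (6)): current + historical second distances.
        k1, b1 = np.polyfit(x, [d0] + list(hist_second), 1)
        # Second fit (formula (7)): adjusted distance + historical first distances.
        k2, b2 = np.polyfit(x, [d0_scale] + list(hist_final), 1)
        x0 = 0                                   # frame number of the current frame
        bias = (k1 * x0 + b1) - (k2 * x0 + b2)   # assumed formula (8)
        return d0_scale + bias                   # assumed formula (9)
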
  • The current first distance information can also be determined by a Kalman filtering algorithm, based on the current and historical second distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring images and the adjusted distance information.
  • D_0_final = kal(D_0_scale, D_0, R, Q) (10);
  • R represents the variance of D_0_scale and D_1_final, D_2_final, D_3_final, ...;
  • Q represents the variance of D_0 and D_1, D_2, D_3, .... In this way, the adjusted distance information can be further corrected to obtain current first distance information with higher accuracy.
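  • A scalar Kalman-style sketch of formula (10), with R and Q taken as the variances described above; the kal(.) function itself is not spelled out in this text, so the gain-based fusion below is an assumption:

    import numpy as np

    def kalman_fuse(d0_scale, d0, hist_final, hist_second):
        # R: variance of the adjusted/first-distance series (smooth but biased);
        # Q: variance of the detection-frame/second-distance series (noisy but unbiased).
        R = np.var([d0_scale] + list(hist_final))
        Q = np.var([d0] + list(hist_second))
        K = R / (R + Q)                      # gain for a scalar state
        return d0_scale + K * (d0 - d0_scale)
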
  • The target object located in the blind spot of the target vehicle can then be determined according to the current first distance information between each object in the current frame image and the target vehicle.
  • The manner of determining the blind spot of the target vehicle may be as shown in FIG. 2 above, whereby the visual-field blind spot corresponding to the target vehicle is determined.
  • Based on the current first distance information and the determined blind spot of the target vehicle, the objects that fall into the visual-field blind spot among the detected objects, that is, the target objects, can be determined.
  • The target object may include, for example, some of the above-mentioned objects: other driving vehicles, pedestrians, road facilities, and road obstacles.
  • the monitoring result can be generated according to the type information and position of the target object and the driving state of the target vehicle.
  • Since the generated monitoring result is determined from images, and during continuous driving multiple frames of images are acquired that are discrete relative to continuous time, tracking and smoothing processing (for example, interpolation) can also be performed when monitoring the object over continuous frames, to further improve accuracy.
  • Since the tracking and smoothing method can use multiple frames of monitoring images corresponding to discrete times to determine the distance between the target object and the target vehicle in continuous time, this method can also alleviate the pressure on the acquisition device of acquiring many frames of monitoring images in rapid succession and reduce equipment wear.
  • For example, acquiring one monitoring image every 0.1 seconds consumes more power for the acquisition device than acquiring one every 0.2 seconds; predicting a relatively accurate monitoring image within the 0.1-second interval can still ensure safety.
  • the monitoring result may include alarm information, for example.
  • the driving state of the target vehicle may include, for example, steering information of the target vehicle.
  • When generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, for example, the following method may be used: determine the level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generate and prompt warning information of the determined level.
  • The type information of the target object may include, for example, a pedestrian. Since the target object is located in the blind spot of the target vehicle and may therefore affect driving safety, a monitoring result including warning information can be generated.
  • When the steering information of the target vehicle indicates that the target vehicle turns left and the position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or the steering information indicates that the target vehicle turns right and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle, the target vehicle is considered to have a greater impact on the safety of the target object when driving.
  • For example, the target vehicle may collide with the pedestrian while driving, and the monitoring result can include the highest-level monitoring result.
  • the monitoring results can also be divided into multiple levels, such as level 1, level 2, level 3, and level 4; the higher the level, the greater the impact on the driving safety of the target vehicle, and the corresponding alarm information is also The corresponding representation has a great influence on the driving safety of the target vehicle.
  • the first monitoring result corresponds to, for example, level 1, and includes a "beep” sound at a higher frequency, or a voice prompt message "Currently too close to the vehicle, please drive carefully”.
  • the first monitoring result may be further refined according to the position of the target object. Take the driving state of the target device as driving to the left, and the position of the target object indicates that there is a target object on the left side of the target vehicle. If the first monitoring result indicates that the target device is constantly approaching the target object, gradually increase the "di ” sound, or generate more accurate warning information such as “currently 1 meter away from the pedestrian on the left” and “currently 0.5 meters away from the pedestrian on the left”.
  • the steering information of the target vehicle indicates that the target vehicle is turning left, and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle; or, the steering information of the target vehicle indicates that the target vehicle is turning right , and the position of the target object indicates that when the target object is in the blind spot on the left side of the target vehicle, it is believed that the target vehicle has a considerable probability of having a certain impact on safety when driving. For example, when pedestrians approach the target vehicle, collision may occur, then monitor The results can correspond to Level 2, including a “beep” sound at a lower frequency than the monitoring results of the corresponding Level 1, or a voice prompt message “Currently close to pedestrians, please drive carefully”.
  • the type information of the target object may also include, for example, a vehicle; here, this means a vehicle other than the target vehicle.
  • when the steering information of the target vehicle indicates a left turn and the position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or the steering information indicates a right turn and the target object is in the blind spot on the right side, the target vehicle is considered to have a greater impact on safety while driving.
  • for example, the target vehicle may collide with the other vehicle when turning, and the monitoring result may correspond to level 3.
  • the monitoring results can be further refined according to the driving state. Take a target vehicle turning left with the monitoring result indicating a target object on its left side: as the target vehicle keeps approaching the target object, the frequency of the "beep" sound can be gradually increased, or alarm information with more precise prompts can be generated, such as "currently 1 meter away from the vehicle on the left" and "currently 0.5 meters away from the vehicle on the left".
  • when the steering information of the target vehicle indicates a left turn and the target object is in the blind spot on the right side of the target vehicle, or the steering information indicates a right turn and the target object is in the blind spot on the left side, the target object is considered fairly likely to have some impact on the safety of the target vehicle while driving.
  • the monitoring result can then correspond to level 4, including a "beep" sound at a lower frequency than that of the level-3 monitoring result, or a voice prompt "Currently close to a vehicle, please drive carefully". The level assignment across these examples is summarized in the sketch below.
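  • The four-level assignment described above can be condensed into a small decision function. This merely restates the examples given; the function name, the string arguments, and the default branch are illustrative assumptions:
```python
def alarm_level(object_type: str, object_side: str, turn_direction: str) -> int:
    """Map (type, position, steering) to an alarm level; 1 is the most urgent.

    object_side and turn_direction are "left" or "right". Mirrors the
    examples above: a pedestrian in the turning-side blind spot is level 1,
    on the opposite side level 2; a vehicle is level 3 or level 4.
    """
    same_side = object_side == turn_direction
    if object_type == "pedestrian":
        return 1 if same_side else 2
    if object_type == "vehicle":
        return 3 if same_side else 4
    return 4  # hypothetical default for other object types

print(alarm_level("pedestrian", "left", "left"))  # 1
print(alarm_level("vehicle", "left", "right"))    # 4
```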
  • the monitoring result may also include, for example, a vehicle control instruction.
  • the driving state of the target vehicle may include, for example, steering information of the target vehicle.
  • the target vehicle may be, for example, but not limited to, any of the following traveling devices: an autonomous vehicle, a vehicle equipped with an advanced driving assistance system (Advanced Driving Assistance System, ADAS), or a robot.
  • when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the vehicle control instruction can be generated, for example, according to the type information and position of the target object and the steering information of the target vehicle.
  • when generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle, the instruction can be determined, for example, so that the target vehicle avoids colliding with the target object, thereby ensuring safe driving.
  • such monitoring results are well suited for deployment in intelligent driving devices and improve safety during automatic driving control, thereby better meeting the needs of the autonomous driving field. A hypothetical command sketch follows.
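  • Purely as a sketch, a vehicle control instruction could be derived from the same inputs. The command vocabulary (brake, limit_turn, monitor), the 1.0 m threshold, and the dictionary format are hypothetical and not taken from the disclosure:
```python
def control_instruction(object_type, object_side, turn_direction, distance_m):
    """Return a hypothetical control command that avoids the target object."""
    same_side = object_side == turn_direction
    if same_side and distance_m < 1.0:
        return {"action": "brake", "intensity": 1.0}           # emergency stop
    if same_side:
        return {"action": "limit_turn", "direction": turn_direction}
    return {"action": "monitor"}                               # no intervention

print(control_instruction("pedestrian", "left", "left", 0.8))
# {'action': 'brake', 'intensity': 1.0}
```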
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • an embodiment of the present disclosure also provides a blind spot monitoring device corresponding to the blind spot monitoring method; for its implementation, reference may be made to the implementation of the method, and repeated descriptions are omitted.
  • the blind spot monitoring device includes: an acquisition module 121, a detection module 122, a determination module 123, and a generation module 124; wherein:
  • the acquisition module 121 is configured to acquire the current frame monitoring image acquired by the acquisition device on the target vehicle;
  • the detection module 122 is configured to perform object detection on the current frame monitoring image to obtain the type information and positions of the objects included in the image;
  • the determination module 123 is configured to determine the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle;
  • the generation module 124 is configured to generate the monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
  • the monitoring result includes alarm information, and the driving state of the target vehicle includes steering information of the target vehicle; when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the generation module 124 is configured to: determine the level of the alarm information according to the type information and position of the target object and the steering information of the target vehicle; and generate and issue alarm information of the determined level.
  • the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle; when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the generation module 124 is configured to: generate the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle.
  • the blind spot monitoring device further includes a control module 125, configured to control the target vehicle to travel based on the vehicle control instruction.
  • when determining the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle, the determination module 123 is configured to: determine, according to the position of the object in the current frame monitoring image, the current first distance information between the target vehicle and the object in the current frame monitoring image; and determine, according to the current first distance information, the target object located in the visual-field blind spot of the target vehicle.
  • when determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image, the determination module 123 is configured to: determine, based on the current frame monitoring image, the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image; and adjust the to-be-adjusted distance information based on the scale change information of the object between two adjacent frames among the multiple historical frame monitoring images collected by the acquisition device, and the historical first distance information between the object and the target vehicle in each of those historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
  • when adjusting the to-be-adjusted distance information to obtain the current first distance information between the target vehicle and the object, the determination module 123 is configured to: adjust the to-be-adjusted distance information until the error amount of the scale change information is minimized, obtaining the adjusted distance information, where the error amount is determined based on the to-be-adjusted distance information, the scale change information, and the historical first distance information corresponding to each frame of the multiple historical frame monitoring images; and determine the current first distance information based on the adjusted distance information. A least-squares sketch of this adjustment is given below.
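  • One way to read this adjustment is as a least-squares problem: under a pinhole model, an object's image scale is inversely proportional to its distance, so each historical first distance constrains the current distance through the accumulated scale change. This reading, the closed-form minimizer, and the 50/50 blend with the single-frame estimate are assumptions for illustration, not the exact optimization of the disclosure:
```python
import numpy as np

def adjust_distance(d_to_adjust, hist_distances, step_scales):
    """Refine the current distance estimate using scale-change constraints.

    hist_distances: historical first distances d_1..d_n (oldest first).
    step_scales: scale change s_k between adjacent frames, where
        s_k = scale(frame k+1) / scale(frame k) ~= Z_k / Z_(k+1)
    under the pinhole assumption that image scale ~ 1 / distance.
    The last entry connects the newest historical frame to the current frame.
    """
    # Cumulative ratio r_k such that Z_k ~= r_k * Z_current for each
    # historical frame k.
    ratios = []
    for k in range(len(hist_distances)):
        r = 1.0
        for s in step_scales[k:]:
            r *= s
        ratios.append(r)
    r = np.asarray(ratios)
    d = np.asarray(hist_distances)

    # Closed-form minimizer of E(Z) = sum_k (d_k - r_k * Z)^2.
    z_star = float(r @ d) / float(r @ r)
    # Blend with the single-frame estimate (the weighting is arbitrary).
    return 0.5 * d_to_adjust + 0.5 * z_star

# Object approaching: image scales grow, distances shrink.
print(adjust_distance(3.9, [5.0, 4.5], [5.0 / 4.5, 4.5 / 4.0]))  # ~3.95 m
```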
  • before determining the current first distance information based on the adjusted distance information, the determination module 123 is further configured to: determine the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device. When determining the current first distance information based on the adjusted distance information, the determination module 123 is configured to: determine distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple historical frame monitoring images, the historical first distance information corresponding to those historical frame monitoring images, and the adjusted distance information; and adjust the adjusted distance information based on the distance offset information to obtain the current first distance information.
  • when determining the distance offset information for the adjusted distance information, the determination module 123 is configured to: determine, based on the current second distance information and the historical second distance information corresponding to each frame of the multiple historical frame monitoring images, a first linear fitting coefficient of a first fitting curve fitted from the historical second distance information of each historical frame monitoring image and the current second distance information; determine, based on the historical first distance information corresponding to each frame of the multiple historical frame monitoring images and the adjusted distance information, a second linear fitting coefficient of a second fitting curve fitted from the historical first distance information of each historical frame monitoring image and the adjusted distance information; and determine, based on the first linear fitting coefficient and the second linear fitting coefficient, the distance offset information for the adjusted distance information. A fitting sketch follows.
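  • The two fits can be sketched with ordinary least squares over the frame index. Using numpy.polyfit and taking the difference of the fitted values at the current frame as the offset is an illustrative choice; the disclosure does not spell out the exact combination of the coefficients:
```python
import numpy as np

# Per-frame estimates over the last frames plus the current frame (example data).
second_distances = [5.2, 4.8, 4.3, 3.9]  # geometry-based (calibration) estimates
first_distances = [5.0, 4.6, 4.2, 3.8]   # scale-based estimates; last = adjusted

t = np.arange(len(second_distances))      # frame index as the fit variable

# First fitting curve: historical + current second distance information.
slope2, intercept2 = np.polyfit(t, second_distances, 1)
# Second fitting curve: historical first distances + the adjusted distance.
slope1, intercept1 = np.polyfit(t, first_distances, 1)

# Offset between the two fitted lines at the current frame (illustrative).
t_now = t[-1]
offset = (slope2 * t_now + intercept2) - (slope1 * t_now + intercept1)

# Apply the offset to the adjusted distance to get the current first distance.
current_first_distance = first_distances[-1] + offset
print(round(current_first_distance, 3))  # ~3.89
```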
  • when determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device, the determination module 123 is configured to: obtain, based on the position information of the detection frame of the object in the current frame monitoring image, the pixel coordinate values of the set corner points of the detection frame; and determine the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining the calibration parameters of the acquisition device.
  • the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; when determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining those calibration parameters, the determination module 123 is configured to: determine a first pixel height value of the acquisition device relative to the ground based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate values of the set corner points of the detection frame; determine a second pixel height value of the object in the current frame monitoring image relative to the ground based on the pixel coordinate values of the set corner points; determine a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; and determine the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value. The geometry is sketched below.
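  • The geometry here is the classic flat-ground pinhole construction: the vertical pixel gap between the lane-line vanishing point (the horizon row) and the object's ground-contact row plays the role of the camera height expressed in pixels. A sketch under those assumptions (flat road, roughly level camera); the function name and example values are hypothetical:
```python
def second_distance(box_top_y, box_bottom_y, vanish_y, cam_height_m, focal_px):
    """Estimate object distance from one frame using calibration geometry.

    box_top_y / box_bottom_y: pixel rows of the detection-frame corners.
    vanish_y: pixel row of the lane-line vanishing point (horizon line).
    cam_height_m: first height value - camera height above the ground.
    focal_px: focal length expressed in pixels.
    Assumes a flat road and a roughly level camera.
    """
    # First pixel height: the camera height above ground, in pixels, is the
    # gap between the horizon row and the object's ground-contact row.
    cam_height_px = box_bottom_y - vanish_y
    if cam_height_px <= 0:
        raise ValueError("object must appear below the horizon")

    # Second pixel height: the object's own height in pixels.
    obj_height_px = box_bottom_y - box_top_y

    # Second height value: object height in meters via the pixel ratio.
    obj_height_m = obj_height_px / cam_height_px * cam_height_m

    # Pinhole model: Z = f * H / h, using the object's height and pixel height.
    return focal_px * obj_height_m / obj_height_px

# Horizon at row 400, pedestrian box rows 520-700, camera 1.5 m up, f = 1000 px.
print(second_distance(520, 700, 400, 1.5, 1000.0))  # 5.0 m
```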
  • when determining, based on the current frame monitoring image, the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image, the determination module 123 is configured to: obtain the scale change information between the scale of the object in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determine the to-be-adjusted distance information based on this scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
  • the determination module 123 determines the scale change information between the scales of the object in two adjacent frames of monitoring images in the following manner: extract the first position information of multiple feature points contained in the object in the earlier of the two adjacent frames, and their second position information in the later frame; and determine, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames.
  • when determining, based on the first position information and the second position information, the scale change information of the object between scales in the two adjacent frames of monitoring images, the determination module 123 is configured to: determine, based on the first position information, a first scale value of a target line segment composed of multiple feature points of the object in the earlier frame; determine, based on the second position information, a second scale value of the target line segment in the later frame; and determine, based on the first scale value and the second scale value, the scale change information between the scales of the object in the two adjacent frames. A line-segment sketch follows.
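  • The line-segment construction can be sketched directly: track matched feature points, measure the same segment's length in both frames, and take the ratio of the second scale value to the first. Averaging over all point pairs is an illustrative choice, as is the function name:
```python
import math

def scale_change(points_prev, points_next):
    """Scale change of an object between two adjacent monitoring frames.

    points_prev / points_next: matched feature-point coordinates (x, y)
    for the same object in the earlier and later monitoring images.
    Returns the mean ratio of corresponding line-segment lengths
    (second scale value / first scale value).
    """
    def seg_len(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    ratios = []
    n = len(points_prev)
    for i in range(n):
        for j in range(i + 1, n):
            first_scale = seg_len(points_prev[i], points_prev[j])
            second_scale = seg_len(points_next[i], points_next[j])
            if first_scale > 0:
                ratios.append(second_scale / first_scale)
    return sum(ratios) / len(ratios)

# Three tracked points on an approaching object, spreading apart by ~20%.
prev_pts = [(100, 100), (140, 100), (120, 160)]
next_pts = [(98, 98), (146, 98), (122, 170)]
print(round(scale_change(prev_pts, next_pts), 3))  # ~1.2
```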
  • an embodiment of the present disclosure further provides an electronic device 1300.
  • as shown in the schematic structural diagram of the electronic device 1300 provided by an embodiment of the present disclosure, the device includes a processor 10 and a memory 20 that communicate through a bus 30, so that the processor 10 executes the following instructions: obtain the current frame monitoring image collected by the acquisition device on the target vehicle; perform object detection on the current frame monitoring image to obtain the type information and positions of the objects included in the current frame monitoring image; determine, according to the positions of the objects and the blind spot of the target vehicle, the target object located in the blind spot of the target vehicle; and generate the monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
  • embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the blind spot monitoring method described in the above method embodiments are performed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • embodiments of the present disclosure further provide a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the blind spot monitoring method described in the foregoing method embodiments. For details, please refer to the foregoing method embodiments, which are not repeated here.
  • the above-mentioned computer program product can be specifically implemented by means of hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and so on.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • based on this understanding, the technical solution may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.


Abstract

The invention relates to a blind spot monitoring method and apparatus, an electronic device, and a storage medium. The blind spot monitoring method comprises: obtaining a current frame monitoring image acquired by an acquisition device (2) on a target vehicle (S101); performing object detection on the current frame monitoring image to obtain type information and positions of objects included in the current frame monitoring image (S102); determining, according to the positions of the objects and a visual-field blind spot of the target vehicle, a target object located in the visual-field blind spot of the target vehicle (S103); and generating a monitoring result according to the type information and position of the target object and a driving state of the target vehicle (S104), thereby improving driving safety and blind spot monitoring performance.
PCT/CN2022/084399 2021-04-28 2022-03-31 Procédé et appareil de surveillance de zone aveugle, dispositif électronique et support de stockage WO2022228023A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110467776.4A CN113103957B (zh) 2021-04-28 2021-04-28 一种盲区监测方法、装置、电子设备及存储介质
CN202110467776.4 2021-04-28

Publications (1)

Publication Number Publication Date
WO2022228023A1 true WO2022228023A1 (fr) 2022-11-03

Family

ID=76720492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084399 WO2022228023A1 (fr) 2021-04-28 2022-03-31 Procédé et appareil de surveillance de zone aveugle, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN113103957B (fr)
WO (1) WO2022228023A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103957B (zh) * 2021-04-28 2023-07-28 上海商汤临港智能科技有限公司 一种盲区监测方法、装置、电子设备及存储介质
CN116184992A (zh) * 2021-11-29 2023-05-30 上海商汤临港智能科技有限公司 车辆控制方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004001658A (ja) * 2002-06-03 2004-01-08 Nissan Motor Co Ltd 車載カメラの光軸ずれ検出装置
CN106524922A (zh) * 2016-10-28 2017-03-22 深圳地平线机器人科技有限公司 测距校准方法、装置和电子设备
CN110386065A (zh) * 2018-04-20 2019-10-29 比亚迪股份有限公司 车辆盲区的监控方法、装置、计算机设备及存储介质
CN111942282A (zh) * 2019-05-17 2020-11-17 比亚迪股份有限公司 车辆及其驾驶盲区预警方法、装置、系统和存储介质
CN113103957A (zh) * 2021-04-28 2021-07-13 上海商汤临港智能科技有限公司 一种盲区监测方法、装置、电子设备及存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8164628B2 (en) * 2006-01-04 2012-04-24 Mobileye Technologies Ltd. Estimating distance to an object using a sequence of images recorded by a monocular camera
CN105279770A (zh) * 2015-10-21 2016-01-27 浪潮(北京)电子信息产业有限公司 一种目标跟踪控制方法及装置
CN108596116B (zh) * 2018-04-27 2021-11-05 深圳市商汤科技有限公司 测距方法、智能控制方法及装置、电子设备和存储介质
CN109311425A (zh) * 2018-08-23 2019-02-05 深圳市锐明技术股份有限公司 一种汽车盲区的监测报警方法、装置、设备及存储介质
WO2020151560A1 (fr) * 2019-01-24 2020-07-30 杭州海康汽车技术有限公司 Procédé, appareil et système de détection d'angle mort de véhicule
CN111998780B (zh) * 2019-05-27 2022-07-01 杭州海康威视数字技术股份有限公司 目标测距方法、装置及系统
CN111829484B (zh) * 2020-06-03 2022-05-03 江西江铃集团新能源汽车有限公司 基于视觉的目标距离测算方法
CN112489136B (zh) * 2020-11-30 2024-04-16 商汤集团有限公司 标定方法、位置确定方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004001658A (ja) * 2002-06-03 2004-01-08 Nissan Motor Co Ltd 車載カメラの光軸ずれ検出装置
CN106524922A (zh) * 2016-10-28 2017-03-22 深圳地平线机器人科技有限公司 测距校准方法、装置和电子设备
CN110386065A (zh) * 2018-04-20 2019-10-29 比亚迪股份有限公司 车辆盲区的监控方法、装置、计算机设备及存储介质
CN111942282A (zh) * 2019-05-17 2020-11-17 比亚迪股份有限公司 车辆及其驾驶盲区预警方法、装置、系统和存储介质
CN113103957A (zh) * 2021-04-28 2021-07-13 上海商汤临港智能科技有限公司 一种盲区监测方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113103957A (zh) 2021-07-13
CN113103957B (zh) 2023-07-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794495

Country of ref document: EP

Kind code of ref document: A1