WO2022228023A1 - Blind area monitoring method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
WO2022228023A1
WO2022228023A1 · PCT/CN2022/084399 · CN2022084399W
Authority
WO
WIPO (PCT)
Prior art keywords
frame
information
distance information
monitoring image
historical
Prior art date
Application number
PCT/CN2022/084399
Other languages
French (fr)
Chinese (zh)
Inventor
罗铨
李弘扬
蒋沁宏
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Publication of WO2022228023A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Definitions

  • the present disclosure is based on the Chinese patent application with application number 202110467776.4, filed on April 28, 2021 and entitled "A blind spot monitoring method, apparatus, electronic device and storage medium", and claims priority to the above-mentioned Chinese patent application
  • the entire contents of the above-mentioned Chinese patent application are hereby incorporated into the present disclosure by reference.
  • the present disclosure relates to the technical field of image processing, and in particular, to a blind spot monitoring method, device, electronic device, and storage medium.
  • during driving, blind spots arise because the observable range is limited. Blind spots can easily cause errors in the judgment and operation of drivers or autonomous vehicles, reducing driving safety.
  • the embodiments of the present disclosure provide at least a blind spot monitoring method, an apparatus, an electronic device, and a storage medium.
  • an embodiment of the present disclosure provides a blind spot monitoring method, including: acquiring a current frame monitoring image collected by a collection device on a target vehicle; performing object detection on the current frame monitoring image to obtain the type information and position of each object included in the image; determining, according to the position of the object and the blind spot of the target vehicle, a target object located in the blind spot of the target vehicle; and generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
  • the current frame monitoring image collected by the collection device on the target vehicle is acquired, and object detection is performed on it to determine the type information and position of each object included in the image; the target object located in the blind spot of the target vehicle is then determined according to the position of the object and the blind spot, and a monitoring result is generated according to the type information and position of the target object and the driving state of the target vehicle.
  • different monitoring results can be generated for different types of target objects, thereby improving driving safety and blind spot monitoring performance.
  • the monitoring result includes warning information
  • the driving state of the target vehicle includes steering information of the target vehicle
  • generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: determining a level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generating and presenting warning information of the determined level.
  • the monitoring result includes vehicle control instructions
  • the driving state of the target vehicle includes steering information of the target vehicle
  • generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle; the blind spot monitoring method further includes: controlling the target vehicle to travel based on the vehicle control instruction.
  • determining the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle includes: determining, according to the position of the object in the current frame monitoring image, current first distance information between the target vehicle and the object in the current frame monitoring image; and determining, according to the current first distance information, the target object located in the blind spot of the target vehicle.
  • in this way, the target object located in the blind spot of the target vehicle can be accurately identified among all the objects included in the current frame monitoring image.
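As an illustration of the selection step above, here is a minimal Python sketch that keeps only the detected objects whose current first distance places them inside the blind spot; all names, the data layout, and the 3-meter threshold are assumptions for illustration, not taken from the disclosure.

```python
def find_blind_spot_targets(detections, blind_spot_max_distance=3.0):
    """detections: list of dicts with 'type', 'position' (pixels) and
    'distance' (the current first distance, in meters, assumed layout)."""
    targets = []
    for det in detections:
        # An object counts as a blind-spot target when its current first
        # distance to the target vehicle falls within the blind-spot range.
        if det["distance"] <= blind_spot_max_distance:
            targets.append(det)
    return targets

dets = [
    {"type": "vehicle", "position": (120, 300), "distance": 2.1},
    {"type": "pedestrian", "position": (40, 280), "distance": 8.5},
]
print(find_blind_spot_targets(dets))
```

A real system would replace the single distance threshold with the actual blind-spot geometry of the vehicle.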
  • determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object includes: determining, based on the current frame monitoring image, distance information to be adjusted between the target vehicle and the object; and adjusting the distance information to be adjusted, based on the scale change information of the object between every two adjacent frames in the multi-frame historical frame monitoring images collected by the collection device and the historical first distance information between the object and the target vehicle in each frame of those historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
  • adjusting the distance information to be adjusted to obtain the current first distance information includes: adjusting the distance information to be adjusted until an error amount of the scale change information is minimized, obtaining adjusted distance information, where the error amount is determined based on the distance information to be adjusted, the scale change information, and the historical first distance information corresponding to each frame of the multi-frame historical frame monitoring images; and determining the current first distance information based on the adjusted distance information.
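The adjustment described above, tuning the distance until the error amount of the scale change information is smallest, could be sketched as follows. The inverse-proportionality assumption (apparent scale varies as 1/distance) and the brute-force 1-D search are illustrative choices, not the optimizer prescribed by the patent.

```python
def error_amount(d_current, hist_distances, scale_changes):
    """Squared error between observed scale ratios and the ratios implied
    by the distance sequence (history..., d_current), assuming apparent
    scale is inversely proportional to distance."""
    dists = list(hist_distances) + [d_current]
    err = 0.0
    for k in range(1, len(dists)):
        implied = dists[k - 1] / dists[k]  # ratio implied by the distances
        err += (scale_changes[k - 1] - implied) ** 2
    return err

def adjust_distance(d_to_adjust, hist_distances, scale_changes):
    # Coarse grid search around the initial value; illustrative only.
    best_d = d_to_adjust
    best_err = error_amount(best_d, hist_distances, scale_changes)
    lo, hi, steps = 0.5 * d_to_adjust, 1.5 * d_to_adjust, 2000
    for i in range(steps + 1):
        d = lo + (hi - lo) * i / steps
        e = error_amount(d, hist_distances, scale_changes)
        if e < best_err:
            best_d, best_err = d, e
    return best_d
```

For example, with historical first distances of 10 m and 8 m and per-frame scale ratios of 1.25, the minimizer lands near 8 / 1.25 = 6.4 m.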
  • the blind spot monitoring method further includes, before determining the current first distance information based on the adjusted distance information: performing target detection on the current frame monitoring image to determine position information of the detection frame of the object contained in the image; and determining current second distance information based on the position information of the detection frame and the calibration parameters of the collection device. Determining the current first distance information based on the adjusted distance information then includes: determining distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring images, the historical first distance information corresponding to that frame, and the adjusted distance information; and adjusting the adjusted distance information by the distance offset information to obtain the current first distance information.
  • the adjusted distance information can be further adjusted, so as to obtain the current distance information of the target vehicle and the object with high accuracy.
  • determining the distance offset information for the adjusted distance information includes: fitting a first fitting curve to the historical second distance information corresponding to each frame of the multi-frame historical frame monitoring images together with the current second distance information, and determining its first linear fitting coefficient; fitting a second fitting curve to the historical first distance information corresponding to each frame of the multi-frame historical frame monitoring images together with the adjusted distance information, and determining its second linear fitting coefficient; and determining the distance offset information for the adjusted distance information based on the first linear fitting coefficient and the second linear fitting coefficient.
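A minimal sketch of the two linear fits and the resulting offset, under the assumption that the offset is the gap between the two fitted lines evaluated at the current frame (the disclosure does not spell out this exact formula; names here are illustrative):

```python
def linear_fit(ts, ys):
    """Least-squares fit y = a*t + b; returns (a, b)."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    return a, mean_y - a * mean_t

def distance_offset(hist_second, current_second, hist_first, adjusted):
    seq2 = list(hist_second) + [current_second]  # geometry-based distances
    seq1 = list(hist_first) + [adjusted]         # scale-based (adjusted) distances
    ts = list(range(len(seq2)))
    a2, b2 = linear_fit(ts, seq2)  # first linear fitting coefficient
    a1, b1 = linear_fit(ts, seq1)  # second linear fitting coefficient
    t_now = ts[-1]
    # Offset pulls the adjusted estimate toward the fitted trend of the
    # geometry-based second-distance estimates at the current frame.
    return (a2 * t_now + b2) - (a1 * t_now + b1)

def current_first_distance(adjusted, offset):
    return adjusted + offset
```

With seq2 = [10, 9, 8, 7] and seq1 = [10.5, 9.5, 8.5, 7.5] (both exactly linear), the offset is -0.5 and the corrected distance is 7.0.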
  • determining the current second distance information based on the position information of the detection frame and the calibration parameters of the collection device includes: obtaining, based on the position information of the detection frame, the pixel coordinate value of a set corner point of the detection frame; and determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the collection device, and the pixel coordinate value of the vanishing point of the lane line used when determining those calibration parameters.
  • the calibration parameters of the collection device include a first height value of the collection device relative to the ground and a focal length of the collection device; determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters, and the pixel coordinate value of the vanishing point of the lane line includes: determining, based on the pixel coordinate value of the vanishing point of the lane line and the pixel coordinate value of the set corner point of the detection frame, a first pixel height value of the collection device relative to the ground; determining, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the current frame monitoring image relative to the ground; determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; and determining the current second distance information based on the second height value, the focal length of the collection device, and the second pixel height value.
  • by introducing the pixel coordinate value of the vanishing point of the lane line and the calibration parameters of the collection device, the actual height value of the object can be obtained quickly and accurately; further, the current second distance information between the target vehicle and the object can be determined quickly and accurately.
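A sketch of this geometry under a standard pinhole-camera interpretation; the variable names and the exact reading of the two pixel heights are assumptions based on the description, not a definitive implementation.

```python
def second_distance(y_vanish, y_top, y_bottom, camera_height_m, focal_px):
    """Monocular distance estimate from one detection frame.

    y_vanish        : pixel row of the lane-line vanishing point (horizon)
    y_top, y_bottom : pixel rows of the top / bottom of the detection frame
    camera_height_m : first height value (camera height above ground, m)
    focal_px        : focal length of the collection device, in pixels
    """
    # First pixel height value: camera height expressed in pixels, taken as
    # the gap between the horizon and the object's ground contact row.
    first_pixel_height = y_bottom - y_vanish
    # Second pixel height value: the object's height in pixels.
    second_pixel_height = y_bottom - y_top
    # Second height value: the object's real height, via similar triangles.
    object_height_m = camera_height_m * second_pixel_height / first_pixel_height
    # Pinhole model: distance = focal length * real height / pixel height.
    return focal_px * object_height_m / second_pixel_height
```

With a 1.5 m camera height, a 1000 px focal length, horizon at row 400 and a box spanning rows 440 to 500, this yields a 15 m estimate.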
  • determining, based on the current frame monitoring image, the distance information to be adjusted between the target vehicle and the object includes: determining the scale change information of the object between its scale in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determining the distance information to be adjusted based on that scale change information and the historical first distance information corresponding to the adjacent historical frame monitoring image.
  • because the highly accurate historical first distance information corresponding to the adjacent historical frame monitoring image is used together with the scale change information of the object between the current frame monitoring image and that adjacent historical frame, the distance information to be adjusted can be obtained more accurately, which in turn speeds up the subsequent adjustment when the current first distance information is determined from it.
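Assuming the apparent scale of an object is inversely proportional to its distance (a standard pinhole-model approximation; the function name is hypothetical), the distance information to be adjusted could be initialized from the adjacent frame's historical first distance and the scale change, e.g.:

```python
def initial_distance_to_adjust(prev_first_distance, scale_change):
    # scale_change = scale in current frame / scale in previous frame.
    # Under a pinhole model, apparent scale varies as 1/distance, so a
    # larger scale ratio means the object has moved closer.
    return prev_first_distance / scale_change
```

For example, an object at 10 m whose apparent scale doubles between frames would be initialized at 5 m before the refinement step.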
  • the scale change information between the scales of the object in two adjacent frames of monitoring images is determined in the following manner: extracting, for a plurality of feature points contained in the object, first position information in the earlier of the two adjacent frames of monitoring images and second position information in the later frame; and determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
  • determining, based on the first position information and the second position information, the scale change information between the scales of the object in two adjacent frames of monitoring images includes: determining, based on the first position information, a first scale value of a target line segment formed by a plurality of feature points of the object in the earlier monitoring image; determining, based on the second position information, a second scale value of the target line segment in the later monitoring image; and determining the scale change information based on the first scale value and the second scale value.
  • the position information of the multiple feature points of the object represents the object's location in the monitoring image more accurately, so more accurate scale change information is obtained; consequently, when the distance information to be adjusted is adjusted using this scale change information, more accurate current first distance information can be obtained.
  • an embodiment of the present disclosure provides a blind spot monitoring device, including:
  • an acquisition module used to acquire the current frame monitoring image acquired by the acquisition device on the target vehicle
  • a detection module configured to perform object detection on the current frame monitoring image to obtain the type information and position of the object included in the current frame monitoring image;
  • a determining module configured to determine the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle;
  • the generating module is configured to generate monitoring results according to the type information and position of the target object and the driving state of the target vehicle.
  • embodiments of the present disclosure provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and when the machine-readable instructions are executed by the processor, the steps of the blind spot monitoring method according to the first aspect are performed.
  • an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the blind spot monitoring method according to the first aspect.
  • FIG. 1 shows a flowchart of a blind spot monitoring method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of determining a blind spot of a visual field provided by an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a specific method for determining current first distance information provided by an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a method for determining scale change information provided by an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of a method for determining distance information to be adjusted provided by an embodiment of the present disclosure
  • FIG. 6 shows a flowchart of a method for determining current first distance information provided by an embodiment of the present disclosure
  • FIG. 7 shows a flowchart of a method for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of the positional relationship among the target vehicle, the collection device, and the target object provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a detection frame of a target object provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a principle for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 11 shows a schematic diagram of a scenario for determining current second distance information provided by an embodiment of the present disclosure
  • FIG. 12 shows a schematic structural diagram of a blind spot monitoring device provided by an embodiment of the present disclosure
  • FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a blind spot monitoring method.
  • the embodiment of the present disclosure acquires the current frame monitoring image collected by the collection device on the target vehicle and performs object detection on the current frame monitoring image to determine the type information and position of each object included in the image; it then determines, according to the position of the object and the blind spot of the target vehicle, the target object located in the blind spot, and generates a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
  • in this way, different monitoring results can be generated for different types of target objects, thereby improving blind spot monitoring performance and driving safety.
  • the device executing the method includes, for example, a terminal device, a server, or another processing device; the terminal device may be a computing device, a vehicle-mounted device, or the like.
  • the blind spot monitoring method may be implemented by the processor calling computer-readable instructions stored in the memory.
  • the blind spot monitoring method includes the following S101-S104:
  • as for the target vehicle: in a scenario where a driver drives a vehicle, the target vehicle may be, for example, the vehicle driven by the driver; in an autonomous driving scenario, the target vehicle may be, for example, an autonomous vehicle; in a warehousing and freight scenario, the target vehicle may be, for example, an autonomous freight vehicle.
  • a collection device may also be mounted on the target vehicle, and the collection device may be a monocular camera set on the target vehicle, which is used for shooting during the driving of the target vehicle.
  • the target area includes: a blind spot of the vehicle's field of vision
  • the capture device may be installed on a column of the vehicle, and the camera of the capture device faces the blind spot of the vehicle's field of view.
  • FIG. 2 is a schematic diagram of determining the blind area of the field of view provided by the embodiment of the present disclosure.
  • the vehicle 1 is equipped with a collection device 2
  • the blind area of the visual field included in the target area collected by the collection device 2 includes the positions indicated by 3 and 4 .
  • the current frame monitoring image may be obtained as follows: acquiring the monitoring video obtained by the collection device on the target vehicle performing image collection on the target area, and determining the current frame monitoring image from the monitoring video; here, the target area includes the area within the shooting field of view of the collection device.
  • the direction of capture can be pre-set; after the capture device is mounted on the target vehicle, the target area to be captured is the area within the capture range of the capture device.
  • the acquisition device can collect images of the target area, obtain monitoring videos, and determine the monitoring images of the current frame from the monitoring videos.
  • when determining a video frame image from the monitoring video, for example, the captured video frame closest to the current time may be used as the current frame monitoring image.
  • object detection may also be performed on the current frame monitoring image to obtain the type information and position of the object included in the current frame monitoring image.
  • a pre-trained target detection neural network can be used to perform object detection processing on the current frame monitoring image
  • usable object detection algorithms include: convolutional neural networks (CNN), region-based CNN (RCNN), Fast RCNN, and Faster RCNN.
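Downstream of whichever detector is used, the raw output (boxes, class ids, confidence scores) has to be turned into the type information and position the method consumes. The following Python sketch shows one plausible post-processing step; the class map, the score threshold, and all names are illustrative assumptions, not from the disclosure.

```python
# Hypothetical class-id map; real detectors define their own label space.
CLASS_NAMES = {1: "vehicle", 2: "pedestrian", 3: "road_facility", 4: "obstacle"}

def parse_detections(boxes, labels, scores, score_threshold=0.5):
    """boxes: list of (x1, y1, x2, y2); labels: class ids; scores: confidences."""
    objects = []
    for box, label, score in zip(boxes, labels, scores):
        if score < score_threshold:
            continue  # drop low-confidence detections
        x1, y1, x2, y2 = box
        objects.append({
            "type": CLASS_NAMES.get(label, "unknown"),
            "position": ((x1 + x2) / 2, (y1 + y2) / 2),  # box center, pixels
            "box": box,
        })
    return objects
```

Each resulting dict carries the type information and position that the blind-spot determination step operates on.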
  • the objects that can be detected include, for example, other driving vehicles, pedestrians, road facilities, and road obstacles.
  • the position of the object included in the image can also be obtained; through the detected position of the object in the image, the actual position of the object relative to the target vehicle during driving can be further determined.
  • the target object located in the blind spot of the target vehicle can be determined in the following manner: determining, according to the position of the object in the current frame monitoring image, the current first distance information between the target vehicle and the object in the current frame monitoring image; and determining, according to the current first distance information, the target object located in the blind spot of the target vehicle's field of view.
  • during the driving of a smart car, the monocular camera mounted on it may encounter problems such as road bumps or occlusion caused by changing road conditions. If distance measurement is performed only on the detection frame corresponding to the object in the current frame monitoring image, the accurate distance to the object may not be obtained; and if the distance is continuously detected based on the detection frame alone, the resulting distance between the smart car and the object is not stable over the time series.
  • an embodiment of the present disclosure also proposes a distance detection scheme
  • the method includes the following S301-S302:
  • the objects may include, but are not limited to, vehicles, pedestrians, fixed obstacles, and the like.
  • the object is a vehicle as an example for introduction.
  • the current frame monitoring images described in the embodiments of the present disclosure are monitoring images in which the object is not detected for the first time. If the current frame monitoring image is the one in which the object is first detected, the current second distance information between the target vehicle and the object can be determined directly from the position information of the detection frame, the parameter information of the collection device, and the pixel coordinate value of the vanishing point obtained in the calibration process, and that current second distance information can be used directly as the current first distance information; the process of determining the current second distance information is described in detail later.
  • the current first distance information corresponding to the current frame monitoring image, and the historical first distance information corresponding to each frame of the historical frame monitoring images, both refer to distance information obtained after adjustment.
  • the distance information to be adjusted between the target vehicle and the object can be determined based on the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image and the scale change information of the object between the current frame monitoring image and that adjacent historical frame monitoring image; the distance information to be adjusted is then adjusted in the subsequent step.
  • the scale change information of the object in two adjacent frames of monitoring images (for example, monitoring image i and monitoring image j) in the multi-frame historical frame monitoring images collected by the collection device includes the ratio of the scale of the object in the later monitoring image j to the scale of the object in the earlier monitoring image i; the specific determination process is described later.
  • the embodiment of the present disclosure determines the historical first distance information between the target vehicle and the object for each frame of the historical frame monitoring images in the same manner as the current first distance information; therefore, the process of determining the historical first distance information is not described repeatedly here.
  • the scale change information of the object between adjacent frames in the multi-frame historical frame monitoring images, together with the already-adjusted historical first distance information between the target vehicle and the object, can be used to adjust the distance information to be adjusted obtained from the current monitoring image. As a result, the distance between the target vehicle and the object changes relatively smoothly across adjacent frames, truly reflecting the actual distance changes during driving and improving the temporal stability of the predicted distance between the target vehicle and the object.
  • the scale change information of the object in two adjacent frames of monitoring images can also reflect the distance change between the target vehicle and the object.
  • the historical first distance information between the target vehicle and the object corresponding to each historical frame monitoring image is relatively accurate distance information obtained after adjustment. Therefore, after the distance information to be adjusted is adjusted based on the scale change information of the object between adjacent frames in the multi-frame historical frame monitoring images collected by the collection device and the historical first distance information corresponding to each of those frames, more accurate current first distance information can be obtained.
  • the scale change information between the scales of an object in two adjacent frames of monitoring images can be determined in the following manner, including the following S401 to S402:
  • S401: respectively extract the first position information of the multiple feature points included in the object in the earlier of the two adjacent frames of monitoring images, and the second position information in the later frame.
  • target detection can be performed on the monitoring image based on a pre-trained target detection model to obtain a detection frame representing the position of the object in the monitoring image; a plurality of feature points constituting the object can then be extracted within the detection frame. These feature points are points of the object where the pixel values change drastically, such as inflection points and corner points.
  • the line connecting any two feature points in the same frame of monitoring image forms a line segment. From the first position information of any two feature points in the earlier frame, the scale of the line segment they form in that frame can be obtained; similarly, from the second position information of the same two feature points in the later frame, the scale of the line segment in the later frame can be obtained. In this way, the scales of multiple line segments on the object in the earlier frame and in the later frame can be obtained respectively, and the scale change information of the object in the two adjacent frames of monitoring images can be determined from them.
• there may be n target line segments, where n is greater than or equal to 1 and less than a set threshold. Based on the first position information of the feature points contained in each target line segment, a first scale value corresponding to that target line segment can be obtained; and based on the second position information of those feature points, a second scale value corresponding to that target line segment can be obtained.
• the ratio of the second scale value to the first scale value of any target line segment can represent the scale change information corresponding to that target line segment, and the scale change information of the object in the two adjacent frames of monitoring images can then be determined from the scale change information corresponding to the multiple target line segments.
  • the average value of the scale change information corresponding to the set number of target line segments can be used as the scale change information of the object in two adjacent frames of monitoring images.
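As an illustrative sketch (not part of the claimed embodiment), the segment-ratio averaging of S401 to S402 can be expressed as follows. Function and parameter names are hypothetical; the embodiment does not prescribe an implementation.

```python
import itertools
import math

def scale_change(prev_pts, next_pts):
    """Estimate the scale change of an object between two adjacent frames.

    prev_pts / next_pts: (x, y) positions of the same feature points in the
    earlier and later monitoring images. For every pair of feature points,
    the segment length in the earlier frame is the first scale value and the
    length in the later frame is the second scale value; the mean of the
    per-segment ratios is returned as the scale change information.
    """
    ratios = []
    for i, j in itertools.combinations(range(len(prev_pts)), 2):
        d_prev = math.dist(prev_pts[i], prev_pts[j])   # first scale value
        d_next = math.dist(next_pts[i], next_pts[j])   # second scale value
        if d_prev > 0:
            ratios.append(d_next / d_prev)
    return sum(ratios) / len(ratios)
```

For example, if every feature point moves twice as far from the others between frames, the returned scale change is 2.0.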
• compared with representing the position of the object in the monitoring image only by the position information of the upper-left and lower-right corners of its detection frame, the position information of the multiple feature points contained in the object can represent the position of the object in the monitoring image more accurately, so that more accurate scale change information of the object in the two adjacent frames of monitoring images is obtained. This in turn makes it possible to obtain more accurate current first distance information when the to-be-adjusted distance information is adjusted based on that scale change information.
• when determining the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image, as shown in FIG. 5, the following S501 to S502 may be included:
  • S501 Acquire the scale change information between the scale of the object in the monitoring image of the current frame and the scale in the monitoring image of the historical frame adjacent to the monitoring image of the current frame.
• the historical frame monitoring image adjacent to the current frame monitoring image refers to the frame collected immediately before the current frame monitoring image. The scale change information of the object between the current frame monitoring image and this adjacent historical frame monitoring image can be represented by the ratio of the scale of the object in the current frame monitoring image to its scale in the adjacent historical frame monitoring image; the specific determination process will be elaborated later.
• as the object approaches the target vehicle, the scale of the object in the collected monitoring images gradually increases; that is, the scale change of the object between two adjacent frames of monitoring images reflects the change of the distance between the target vehicle and the object corresponding to those two frames.
• the distance information to be adjusted can be determined by the following formula (1):

d_0_scale = D_1_final / scale  (1)

• d_0_scale represents the distance information to be adjusted
• scale represents the ratio of the scale of the object in the current frame monitoring image to the scale of the object in the historical frame monitoring image adjacent to the current frame monitoring image
• D_1_final represents the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
• using the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image, together with the scale change information of the object between the current frame and that adjacent historical frame, relatively accurate distance information to be adjusted can be obtained, which speeds up the subsequent adjustment when the current first distance information is determined based on this to-be-adjusted distance information.
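A minimal sketch of this step, assuming the similar-triangle form d_0_scale = D_1_final / scale implied by the listed variables (apparent size is inversely proportional to distance under a pinhole model; the exact formula (1) is defined in the embodiment):

```python
def to_be_adjusted_distance(prev_first_distance, scale):
    """Formula (1) sketch: d_0_scale = D_1_final / scale.

    prev_first_distance: historical first distance D_1_final for the
    adjacent historical frame; scale: ratio of the object's scale in the
    current frame to its scale in that adjacent historical frame.
    """
    return prev_first_distance / scale
```

For example, if the object was 10 m away in the previous frame and its apparent scale has doubled, the distance to be adjusted is 5 m.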
• the scale change information obtained in this way may in some cases change abruptly compared with the scale change information between adjacent frames in the multi-frame historical frame monitoring images, and so may the to-be-adjusted distance information obtained from it. Therefore, the scale change information between two adjacent frames in the multi-frame historical frame monitoring images collected for the object, and the historical first distance information corresponding to each of those historical frame monitoring images, can be used to adjust the to-be-adjusted distance information.
  • the following steps S601 to S603 may be included:
  • the amount of error used to represent the scale change information of the object between the current frame monitoring image and the historical frame monitoring image adjacent to the current frame monitoring image can be predicted based on the following formula (2):
• the above formula (2) can be optimized through a variety of optimization methods; for example, the d_0_scale in the above formula (2) can be adjusted in a manner including but not limited to the Newton gradient descent method.
• when E reaches its minimum, the adjusted distance information is obtained:
• in this way, the error of the scale change information of the object between the current frame monitoring image and the adjacent historical frame monitoring image can be reduced, thereby improving the stability of the determined adjusted distance information.
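Since formula (2) itself is not reproduced in this excerpt, the optimization step can only be sketched generically: a one-dimensional descent that adjusts d_0_scale until an error function E is minimized. The error function passed in is hypothetical; the embodiment defines the actual E.

```python
def minimize_scalar(error_fn, d0, lr=0.01, steps=500, eps=1e-6):
    """Adjust a scalar (e.g., d_0_scale) by gradient descent so that the
    error E = error_fn(d) is minimized.

    Uses a central-difference numeric gradient, so any differentiable
    error function can be plugged in.
    """
    d = d0
    for _ in range(steps):
        grad = (error_fn(d + eps) - error_fn(d - eps)) / (2 * eps)
        d -= lr * grad          # descend along the gradient of E
    return d
```

For a quadratic error such as E(d) = (d - d*)^2 this converges to d* well within the default step budget.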
• after the adjusted distance information is obtained, it may be further adjusted to obtain the current first distance information between the target vehicle and the object.
  • the blind spot monitoring method provided by the embodiment of the present disclosure further includes the following S701 to S702:
• S701: Perform target detection on the current frame monitoring image, and determine the position information of the detection frame of the object contained in the current frame monitoring image.
  • S702 Determine the current second distance information based on the position information of the detection frame and the calibration parameters of the collection device.
• the acquisition device provided on the target vehicle can be calibrated; for example, the acquisition device is installed on the top of the target vehicle, as shown in FIG. , with the optical axis of the acquisition device parallel to the horizontal ground and to the advancing direction of the target vehicle. In this way, the focal length (f_x, f_y) of the acquisition device and the height H_c of the acquisition device relative to the ground can be obtained.
• target detection can be performed on the current frame monitoring image by using a pre-trained target detection model to obtain the object contained in the current frame monitoring image and the detection frame corresponding to the object, and the position information of the detection frame is thereby obtained.
• the position information may include the position information of the corner points of the detection frame in the current frame monitoring image, for example, the pixel coordinate values of the corner points A, B, C and D in the current frame monitoring image.
• H_x represents the actual width of the object
• H_y represents the actual height of the object relative to the ground
• w_b represents the pixel width of the object in the current frame monitoring image, which can be determined from the pixel width of the detection frame ABCD of the object
• h_b represents the pixel height of the object relative to the ground, which can be determined from the pixel height of the detection frame ABCD of the object
• D_0 represents the current second distance information between the target vehicle and the object.
• H_x and H_y can be determined based on the type of the detected object. For example, when the object is a vehicle, the actual width and actual height of that vehicle can be determined based on its detected type and a pre-stored correspondence between vehicle types and the corresponding vehicle heights and widths.
• the pixel width w_b of the object in the current frame monitoring image can be determined from the pixel coordinate values of the corner points A and B of the detection frame ABCD shown in FIG. 9 in the current frame monitoring image, or from the pixel coordinate values of the corner points C and D; the pixel height h_b of the object can be determined from the pixel coordinate values of the corner points B and C in the current frame monitoring image, or from the pixel coordinate values of the corner points A and D, which will not be repeated here.
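The relation between the listed variables can be sketched with the standard monocular pinhole model; the exact form of formula (3) is defined in the embodiment, and the averaging of the two estimates below is an assumption for illustration only.

```python
def second_distance(f_x, f_y, H_x, H_y, w_b, h_b):
    """Monocular pinhole estimate of the current second distance D_0.

    Similar triangles give D = f_x * H_x / w_b from the object's width and
    D = f_y * H_y / h_b from its height; this sketch averages the two.
    f_x, f_y: focal lengths from calibration; H_x, H_y: actual width and
    height of the object; w_b, h_b: pixel width and height of its
    detection frame in the current frame monitoring image.
    """
    d_from_width = f_x * H_x / w_b
    d_from_height = f_y * H_y / h_b
    return 0.5 * (d_from_width + d_from_height)
```

For example, with f_x = f_y = 1000 px, a 2 m wide, 1.5 m tall vehicle whose detection frame spans 100 x 75 px is estimated to be 20 m away.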
• based on the position information and the calibration parameters of the acquisition device, determining the current second distance information includes the following S7021 to S7022:
  • S7022 Determine the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the collection device, and the pixel coordinate value of the vanishing point of the lane line used in determining the calibration parameter of the collection device.
• the following explains the principle of determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the vanishing point of the lane line used when determining those calibration parameters:
• during the initial calibration of the acquisition device, the target vehicle can be parked between parallel lane lines. When projected onto the image plane of the acquisition device, the parallel lane lines intersect at a point in the distance, which can be called the vanishing point of the lane lines; this vanishing point approximately coincides with point V in FIG. 9.
• the vanishing point of the lane line can be used to represent the projection position of the acquisition device in the monitoring image, and the pixel coordinate value of the vanishing point can represent the pixel coordinate value of the acquisition device in the current frame monitoring image.
• the distance between points E and G can represent the actual height H_c of the acquisition device relative to the ground; the distance between points F and G can represent the actual height H_y of the object relative to the ground; the distance between points M and N can represent the pixel height h_b of the object relative to the ground; and the distance between points M and V can represent the pixel height of the acquisition device relative to the ground.
• when the acquisition device captures the current frame monitoring image, the ratio of the actual height H_c of the acquisition device relative to the ground to the actual height H_y of the object relative to the ground is equal to the ratio of the pixel height of the acquisition device relative to the ground to the pixel height h_b of the object relative to the ground. The pixel height h_b and the pixel height of the acquisition device relative to the ground can therefore be used to predict the actual height H_y of the object relative to the ground.
  • the current second distance information can be determined in combination with the above formula (3).
• an image coordinate system is established for the current frame monitoring image, and the pixel coordinate value (x_v, y_v) of the vanishing point V of the lane line is marked in this coordinate system, together with the pixel coordinate value (x_tl, y_tl) of the upper-left point A of the detection frame of the object and the pixel coordinate value (x_br, y_br) of the lower-right point C. Further, the distance between points M and N shown in FIG. 9 can be determined from the pixel coordinate values of the corner points A and C along the y-axis direction, and the distance between points M and V shown in FIG. 9 can be determined from the pixel coordinate values of the corner point C and the vanishing point V along the y-axis direction.
• the calibration parameters of the acquisition device include the first height value of the acquisition device relative to the ground and the focal length of the acquisition device. For the above S7022, when determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the vanishing point of the lane line used in determining those calibration parameters, the first pixel height value of the acquisition device relative to the ground can be obtained as: y_br − y_v.
  • S70222 Determine, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the monitoring image of the current frame relative to the ground.
• the difference between the pixel coordinate values along the y-axis of the corner points A and C in the above-mentioned FIG. 11 may be used as the second pixel height value here, which may be represented by h_b.
  • S70223 Determine a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value.
• H_c represents the first height value, which is used to represent the actual height of the acquisition device relative to the ground and can be obtained when the acquisition device is calibrated
• H_y represents the second height value, which is used to represent the actual height of the object relative to the ground.
  • S70224 Determine current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
  • the current second distance information may be determined by the above formula (3).
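Steps S70221 to S70224 can be sketched as follows, using only y pixel coordinates and the similar-triangle relation stated above; names are illustrative.

```python
def second_distance_via_vanishing_point(y_br, y_tl, y_v, H_c, f_y):
    """Sketch of S70221-S70224.

    y_br / y_tl: y pixel coordinates of the lower-right corner C and the
    upper-left corner A of the detection frame; y_v: y coordinate of the
    lane-line vanishing point V; H_c: first height value (camera height
    above ground); f_y: focal length of the acquisition device.
    """
    cam_pixel_height = y_br - y_v          # S70221: first pixel height value
    h_b = y_br - y_tl                      # S70222: second pixel height value
    H_y = H_c * h_b / cam_pixel_height     # S70223: similar triangles -> second height value
    return f_y * H_y / h_b                 # S70224: pinhole relation of formula (3)
```

Note that algebraically the result reduces to f_y * H_c / (y_br − y_v), so the distance depends on where the detection frame meets the ground relative to the vanishing point.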
• by introducing the pixel coordinate value of the vanishing point of the lane line and the calibration parameters of the acquisition device, the actual height value of the object can be obtained quickly and accurately from its detection frame, and the current second distance information between the target vehicle and the object can then be determined quickly and accurately.
  • the current second distance information and each historical second distance information are the distance information between the target vehicle and the object determined based on the single-frame monitoring image.
• when the detection frame of the object is accurate and complete, second distance information with high accuracy between the target vehicle and the object can be obtained based on the position information of the detection frame; the accuracy of the multiple pieces of second distance information determined in this way is therefore high, but the fluctuation among them is large.
• each piece of historical first distance information is distance information determined based on multiple frames of monitoring images, and the adjusted distance information is likewise distance information adjusted based on multiple pieces of historical first distance information. Therefore, the fluctuation among the multiple pieces of historical first distance information and the adjusted distance information obtained in this way is small. However, because the scale change information corresponding to two adjacent frames of monitoring images is used when determining the historical first distance information and the adjusted distance information, and the process of determining the scale change information depends on the position information of the recognized feature points of the object in the monitoring image, errors accumulate when that position information is inaccurate. As a result, the accuracy of the determined historical first distance information and adjusted distance information is lower than that of the second distance information determined based on a complete detection frame.
• therefore, the adjusted distance information can be further adjusted, in the following two ways, based on the distance information between the target vehicle and the object determined for the multiple frames of monitoring images respectively.
  • S6032 Adjust the adjusted distance information based on the distance offset information to obtain current first distance information.
  • the adjusted distance information may be further adjusted based on the distance offset information, so that the current first distance information is more accurate.
  • the adjusted distance information can be further adjusted, so as to obtain the current distance information of the target vehicle and the object with high accuracy.
  • the following steps S60311 to S60313 may be included:
• the current second distance information can be represented by D_0, and the multiple pieces of historical second distance information can be represented by D_1, D_2, D_3, ... respectively. Linear fitting can be performed on D_0 and D_1, D_2, D_3, ... to obtain a first fitting curve composed of the multiple pieces of historical second distance information and the current second distance information, which can be represented by the following formula (6). For example, the frame numbers 0, 1, 2, 3, ... of the monitoring images used in determining the multiple pieces of second distance information can be used as the x values, and the second distance information D_0, D_1, D_2, D_3, ... corresponding to those frame numbers as the y values.
• similarly, the adjusted distance information can be represented by D_0_scale, and the multiple pieces of historical first distance information can be represented by D_1_final, D_2_final, D_3_final, ... respectively. Linear fitting can be performed on D_0_scale and D_1_final, D_2_final, D_3_final, ... to obtain a second fitting curve composed of the multiple pieces of historical first distance information and the adjusted distance information, which can be expressed by the following formula (7). For example, the frame numbers 0, 1, 2, 3, ... of the monitoring images used in determining the multiple pieces of historical first distance information and the adjusted distance information can be used as the x values, and the adjusted distance information and historical first distance information corresponding to those frame numbers as the y values.
  • S60313 Determine distance offset information for the adjusted distance information based on the first linear fitting coefficient and the second linear fitting coefficient.
  • the distance offset information can be determined by the following formula (8):
• the distance offset information determined in this way can be used to adjust the adjusted distance information according to the following formula (9) to obtain the current first distance information D_0_final:
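Formulas (6) to (9) are not reproduced in this excerpt, so the sketch below shows one plausible reading of S60311 to S60313: both distance sequences are fitted linearly against frame index, and the offset is taken as the difference of the two fitted values at the current frame (index 0). The function names and this particular offset definition are assumptions.

```python
def linear_fit(xs, ys):
    """Least-squares fit y = a * x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def distance_offset(second_distances, first_distances):
    """Hypothetical reading of S60311-S60313.

    second_distances: [D_0, D_1, D_2, ...] (single-frame estimates);
    first_distances:  [D_0_scale, D_1_final, D_2_final, ...] (multi-frame
    estimates). Fits both against the frame numbers 0, 1, 2, ... and
    returns the difference of the fitted values at frame index 0.
    """
    xs = list(range(len(second_distances)))
    _, b1 = linear_fit(xs, second_distances)   # first fitting curve, formula (6)
    _, b2 = linear_fit(xs, first_distances)    # second fitting curve, formula (7)
    return b1 - b2
```

Under this reading, formula (9) would amount to D_0_final = D_0_scale + offset, i.e. the low-fluctuation multi-frame estimate is shifted toward the high-accuracy single-frame trend.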
• the historical second distance information between the object and the target vehicle in each historical frame monitoring image of the multi-frame historical frame monitoring images can also be determined by a Kalman filtering algorithm, and the current first distance information can further be determined based on the Kalman filtering algorithm.
• D_0_final = kal(D_0_scale, D_0, R, Q)  (10);
• R represents the variance of D_0_scale and D_1_final, D_2_final, D_3_final, ...; Q represents the variance of D_0 and D_1, D_2, D_3, .... The adjusted distance information can then be further corrected based on the distance offset information to obtain current first distance information with higher accuracy.
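The kal(...) of formula (10) is not spelled out in this excerpt; a minimal one-dimensional Kalman-style fusion consistent with the listed arguments is sketched below, assuming R and Q act as the variances of the two distance estimates.

```python
def kalman_fuse(d_scale, d_single, R, Q):
    """One-dimensional Kalman-style fusion, a sketch of kal(...) in (10).

    d_scale: adjusted multi-frame distance D_0_scale (variance R);
    d_single: single-frame distance D_0 (variance Q). The gain weights
    the measurement update by the relative uncertainties.
    """
    K = R / (R + Q)                     # Kalman gain
    return d_scale + K * (d_single - d_scale)
```

With equal variances the result is simply the midpoint of the two estimates; a noisier single-frame estimate (larger Q) pulls the result toward the multi-frame value.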
• the target object located in the blind spot of the target vehicle can then be determined according to the current first distance information between each object in the current frame image and the target vehicle.
• the method of determining the blind spot of the target vehicle may be as shown in FIG. 2 above, by which the visual-field blind area corresponding to the target vehicle can be determined.
• according to the current first distance information and the determined blind spot of the target vehicle, it is possible to determine, among the detected objects, the objects that fall into the visual-field blind area, that is, the target objects.
  • the target object may include, for example, other driving vehicles, pedestrians, road facilities, and parts of road obstacles among the above-mentioned objects.
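The membership test can be sketched as follows, assuming (hypothetically) that each object's position derived from the current first distance information is expressed in vehicle coordinates and that the blind area is approximated by a rectangular region; neither representation is prescribed by the embodiment.

```python
def targets_in_blind_area(objects, blind_area):
    """Select detected objects that fall into the visual-field blind area.

    objects: list of dicts with lateral offset "x" and longitudinal
    offset "y" (metres, vehicle coordinates, from the current first
    distance information). blind_area: (x_min, x_max, y_min, y_max)
    rectangle approximating the blind spot.
    """
    x_min, x_max, y_min, y_max = blind_area
    return [o for o in objects
            if x_min <= o["x"] <= x_max and y_min <= o["y"] <= y_max]
```

Objects returned by this filter are the target objects for which a monitoring result is subsequently generated.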
  • the monitoring result can be generated according to the type information and position of the target object and the driving state of the target vehicle.
• since the generated monitoring result is determined from images, and multiple frames of images are acquired during a period of continuous driving, the multiple frames of images are discrete relative to that continuous time.
• therefore, tracking and smoothing processing can also be performed when monitoring the object across continuous frames of images, for example, using interpolation to further improve the accuracy.
• since the tracking and smoothing method can use the multiple frames of monitoring images corresponding to discrete times to determine the distance between the target object and the target vehicle over continuous time, this method can also relieve the pressure on the acquisition device of acquiring multiple frames of monitoring images in rapid succession, and reduce equipment wear.
• for example, acquiring one frame of monitoring image every 0.1 seconds requires more power consumption of the acquisition device than acquiring one frame every 0.2 seconds; by predicting a relatively accurate frame of monitoring image within the 0.1-second interval, safety can still be ensured at the lower acquisition frequency.
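A simple instance of the interpolation mentioned above: estimating the target distance at an instant between two discrete monitoring frames (linear interpolation; the embodiment leaves the smoothing method open).

```python
def interpolate_distance(t, t0, d0, t1, d1):
    """Linearly interpolate the target distance at time t between two
    discrete monitoring frames taken at times t0 and t1 with distances
    d0 and d1, giving a per-instant estimate without extra frames."""
    w = (t - t0) / (t1 - t0)
    return d0 + w * (d1 - d0)
```

For frames at 0.1 s (10 m) and 0.2 s (8 m), the distance at 0.15 s is estimated as 9 m, which is how a 0.2 s acquisition interval can still yield values for intermediate instants.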
  • the monitoring result may include alarm information, for example.
  • the driving state of the target vehicle may include, for example, steering information of the target vehicle.
• when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, for example, the following method may be used: determine the level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generate and prompt warning information of the determined level.
• the type information of the target object may include pedestrians, for example. Since the target object is located in the blind spot of the target vehicle, the target vehicle may affect driving safety, so a monitoring result including warning information can be generated.
• when the steering information of the target vehicle indicates that the target vehicle turns left and the position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or the steering information indicates that the target vehicle turns right and the target object is in the blind spot on the right side, it is considered that the target vehicle has a great impact on the safety of the target object when driving.
  • the target vehicle may collide with pedestrians while driving, and the monitoring results can include the highest level monitoring results.
• the monitoring results can also be divided into multiple levels, such as level 1, level 2, level 3 and level 4; a higher-priority level corresponds to a greater impact on the driving safety of the target vehicle, and the corresponding alarm information likewise represents a greater influence on driving safety.
  • the first monitoring result corresponds to, for example, level 1, and includes a "beep” sound at a higher frequency, or a voice prompt message "Currently too close to the vehicle, please drive carefully”.
• the first monitoring result may be further refined according to the position of the target object. Taking the driving state of the target vehicle as driving to the left, with the position of the target object indicating that there is a target object on the left side of the target vehicle: if the first monitoring result indicates that the target vehicle is constantly approaching the target object, the frequency of the "beep" sound can be gradually increased, or warning information with more precise prompts can be generated, such as "currently 1 meter away from the pedestrian on the left" and "currently 0.5 meters away from the pedestrian on the left".
• when the steering information of the target vehicle indicates that the target vehicle is turning left and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle, or the steering information indicates that the target vehicle is turning right and the target object is in the blind spot on the left side, it is considered that the target vehicle has a considerable probability of having a certain impact on safety when driving; for example, a collision may occur when pedestrians approach the target vehicle. The monitoring result can then correspond to level 2, including a "beep" sound at a lower frequency than that of the level-1 monitoring result, or a voice prompt message "Currently close to pedestrians, please drive carefully".
  • the type information of the target object may also include, for example, a vehicle; here, the vehicle is a vehicle other than the target vehicle.
• when the steering information of the target vehicle indicates that the target vehicle turns left and the position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or the steering information indicates that the target vehicle is turning right and the target object is in the blind spot on the right side, it is considered that the target vehicle has a great impact on safety when driving.
• for example, the target vehicle may collide with other vehicles when turning, and the monitoring result may correspond to level 3.
• the monitoring result can be further refined according to the driving state. Taking the driving state of the target vehicle as turning left, with the monitoring result indicating a target object on the left side of the target vehicle: if the target vehicle is constantly approaching that vehicle, the frequency of the "beep" sound can be gradually increased, or warning information with more precise prompts can be generated, such as "currently 1 meter away from the left vehicle" and "currently 0.5 meters away from the left vehicle".
  • the steering information of the target vehicle indicates that the target vehicle is turning left, and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle; or, the steering information of the target vehicle indicates that the target vehicle is turning right , and the position of the target object indicates that when the target object is in the blind spot on the left side of the target vehicle, it is believed that the target object has a considerable probability of affecting the safety of the target vehicle when driving.
  • the monitoring result can correspond to Level 4, including a “beep” sound at a lower frequency than the monitoring result corresponding to Level 3, or a voice prompt message “Currently close to the vehicle, please drive carefully”.
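The four-level mapping described above can be summarized as follows. The rule table is a reading of the examples given (pedestrians on the turning side are most critical, level 1; vehicles on the opposite side least, level 4); the exact level assignment is a design choice of the embodiment.

```python
def alarm_level(obj_type, obj_side, turn_direction):
    """Map (target type, blind-spot side, steering direction) to an
    alarm level, per the examples above: level 1 is the most urgent.

    obj_type: "pedestrian" or "vehicle"; obj_side / turn_direction:
    "left" or "right".
    """
    same_side = obj_side == turn_direction   # object in the blind spot on the turning side
    if obj_type == "pedestrian":
        return 1 if same_side else 2
    return 3 if same_side else 4             # vehicle
```

The level can then select the "beep" frequency or the voice prompt to generate, with lower level numbers mapped to more frequent beeps.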
  • the monitoring results may also include vehicle control instructions, for example.
  • the driving state of the target vehicle may include, for example, steering information of the target vehicle.
  • the traveling device is, for example, but not limited to, any of the following: an autonomous vehicle, a vehicle equipped with an advanced driving assistance system (Advanced Driving Assistance System, ADAS), or a robot.
• when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, for example, the vehicle control instruction can be generated according to the type information and position of the target object and the steering information of the target vehicle.
• when generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle, for example, the generated vehicle control instruction can be determined according to the type information and position of the target object, so as to ensure that the target vehicle avoids collision with the target object and drives safely.
• such monitoring results are more conducive to deployment in an intelligent driving device and improve the safety of the intelligent driving device during automatic driving control; that is, they can better meet the needs of the automatic driving field.
• the writing order of the steps does not imply a strict execution order nor constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
• the embodiment of the present disclosure also provides a blind spot monitoring apparatus corresponding to the blind spot monitoring method; for its implementation, reference may be made to the implementation of the method, and repeated descriptions are omitted.
  • the blind spot monitoring device includes: an acquisition module 121 , a detection module 122 , a determination module 123 , and a generation module 124 ; wherein,
• an acquisition module 121, configured to acquire the current frame monitoring image collected by the acquisition device on the target vehicle;
  • a detection module 122, configured to perform object detection on the current frame monitoring image to obtain the type information and position of the objects included in the current frame monitoring image;
  • a determination module 123, configured to determine the target object located in the field-of-view blind spot of the target vehicle according to the position of the objects and the field-of-view blind spot of the target vehicle;
  • a generation module 124, configured to generate monitoring results according to the type information and position of the target object and the driving state of the target vehicle.
  • the monitoring result includes warning information
  • the driving state of the target vehicle includes steering information of the target vehicle
• the generation module 124, when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, is configured to: determine the level of the alarm information according to the type information and position of the target object and the steering information of the target vehicle; and generate and prompt the alarm information of the determined level.
  • the monitoring result includes vehicle control instructions
  • the driving state of the target vehicle includes steering information of the target vehicle
• the generation module 124, when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, is configured to: generate the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle;
• the blind spot monitoring device further includes: a control module 125, configured to control the target vehicle to travel based on the vehicle control instruction.
• the determination module 123, when determining the target object located in the field-of-view blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle, is configured to: determine, according to the position of the object in the current frame monitoring image, the current first distance information between the target vehicle and the object in the current frame monitoring image; and determine, according to the current first distance information, the target object located in the field-of-view blind spot of the target vehicle.
• the determination module 123, when determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image, is configured to: determine, based on the current frame monitoring image, the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image; and adjust the to-be-adjusted distance information based on the scale change information between the scales of the object in every two adjacent frames of the multiple historical frame monitoring images collected by the acquisition device, and on the historical first distance information between the object and the target vehicle in each of those historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
• the determination module 123, when adjusting the to-be-adjusted distance information to obtain the current first distance information between the target vehicle and the object, is configured to: adjust the to-be-adjusted distance information until the error amount of the scale change information is minimized, obtaining the adjusted distance information, where the error amount is determined based on the to-be-adjusted distance information, the scale change information, and the historical first distance information corresponding to each of the multiple historical frame monitoring images; and determine the current first distance information based on the adjusted distance information.
• the determination module 123, before determining the current first distance information based on the adjusted distance information, is further configured to: determine the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device. When determining the current first distance information based on the adjusted distance information, the determination module 123 is configured to: determine distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each of the multiple historical frame monitoring images, the historical first distance information corresponding to each of those images, and the adjusted distance information; and adjust the adjusted distance information based on the distance offset information to obtain the current first distance information.
• the determination module 123, when determining the distance offset information for the adjusted distance information, is configured to: determine, based on the current second distance information and the historical second distance information corresponding to each of the multiple historical frame monitoring images, the first linear fitting coefficient of a first fitting curve fitted from the historical second distance information of each historical frame monitoring image and the current second distance information; determine, based on the historical first distance information corresponding to each of the multiple historical frame monitoring images and the adjusted distance information, the second linear fitting coefficient of a second fitting curve fitted from the historical first distance information of each historical frame monitoring image and the adjusted distance information; and determine, based on the first linear fitting coefficient and the second linear fitting coefficient, the distance offset information for the adjusted distance information.
• the determination module 123, when determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device, is configured to: obtain, based on the position information of the detection frame of the object in the current frame monitoring image, the pixel coordinate value of a set corner point of the detection frame; and determine the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining the calibration parameters of the acquisition device.
• the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; the determination module 123, when determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining those calibration parameters, is configured to: determine, based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate value of the set corner point of the detection frame, a first pixel height value of the acquisition device relative to the ground; determine, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the current frame monitoring image relative to the ground; determine, based on the first pixel height value, the second pixel height value, and the first height value, a second height value of the object relative to the ground; and determine the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
• the determination module 123, when determining the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image, is configured to: obtain the scale change information between the scale of the object in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determine the to-be-adjusted distance information based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
• the determination module 123 determines the scale change information between the scales of the object in two adjacent frames of monitoring images in the following manner: extracting the first position information of a plurality of feature points contained in the object in the earlier of the two adjacent frames of monitoring images, and their second position information in the later frame; and determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
• the determination module 123, when determining, based on the first position information and the second position information, the scale change information of the object between its scales in two adjacent frames of monitoring images, is configured to: determine, based on the first position information, the first scale value in the earlier frame of a target line segment formed by the plurality of feature points contained in the object; determine, based on the second position information, the second scale value of the target line segment in the later frame; and determine, based on the first scale value and the second scale value, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
• an embodiment of the present disclosure further provides an electronic device 1300. As shown in its schematic structural diagram, the electronic device 1300 includes a processor 10 and a memory 20 that communicate through a bus 30, so that the processor 10 executes the following instructions: obtain the current frame monitoring image collected by the acquisition device on the target vehicle; perform object detection on the current frame monitoring image to obtain the type information and position of the objects included in the current frame monitoring image; determine, according to the position of the objects and the field-of-view blind spot of the target vehicle, the target object located in the blind spot of the target vehicle; and generate monitoring results according to the type information and position of the target object and the driving state of the target vehicle.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the blind spot monitoring method described in the above method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program product, where the computer program product carries program codes, and the instructions included in the program codes can be used to execute the steps of the blind spot monitoring method described in the foregoing method embodiments.
• For details, reference may be made to the foregoing method embodiments, which are not repeated here.
  • the above-mentioned computer program product can be specifically implemented by means of hardware, software or a combination thereof.
• In one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK) or the like.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
• The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
• The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

Provided are a blind area monitoring method and apparatus, an electronic device, and a storage medium. The blind area monitoring method comprises: obtaining a current frame monitoring image acquired by an acquisition device (2) on a target vehicle (S101); performing object detection on the current frame monitoring image to obtain type information and positions of objects comprised in the current frame monitoring image (S102); determining, according to the positions of the objects and a field of view blind area of the target vehicle, a target object located in the field of view blind area of the target vehicle (S103); and generating a monitoring result according to the type information and the position of the target object and a driving state of the target vehicle (S104), thereby improving driving safety and blind area monitoring performance.

Description

A blind spot monitoring method, apparatus, electronic device and storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202110467776.4, filed on April 28, 2021 and entitled "A blind spot monitoring method, apparatus, electronic device and storage medium", the entire contents of which are hereby incorporated into the present disclosure by reference.
Technical Field

The present disclosure relates to the technical field of image processing, and in particular to a blind spot monitoring method and apparatus, an electronic device, and a storage medium.
Background

During the driving of vehicles and robots, blind spots easily arise because the range that can be observed is limited. Because of these blind spots, drivers or autonomous driving vehicles can easily make errors of judgment and operation, reducing driving safety.
Summary of the Invention

Embodiments of the present disclosure provide at least a blind spot monitoring method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a blind spot monitoring method, including: acquiring a current frame monitoring image collected by an acquisition device on a target vehicle; performing object detection on the current frame monitoring image to obtain the type information and positions of the objects included in the current frame monitoring image; determining, according to the positions of the objects and the field-of-view blind spot of the target vehicle, a target object located in the field-of-view blind spot of the target vehicle; and generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
In the embodiments of the present disclosure, the current frame monitoring image collected by the acquisition device on the target vehicle is acquired, object detection is performed on the current frame monitoring image to determine the type information and positions of the objects it includes, the target object located in the field-of-view blind spot of the target vehicle is determined according to the positions of the objects and the blind spot, and a monitoring result is then generated according to the type information and position of the target object and the driving state of the target vehicle. In this way, different monitoring results can be generated for different types of target objects, improving driving safety and blind spot monitoring performance.
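The four steps of the method (acquire, detect, filter to the blind zone, generate a result) can be sketched as follows. This is a minimal illustration only: the detector, the blind-zone test, and the result format are hypothetical stand-ins, not the disclosure's implementation.

```python
# Hypothetical sketch of one monitoring cycle (steps S101-S104).
# `detect` and `in_blind_zone` are assumed callables supplied by the caller.

def monitor_blind_zone(frame, detect, in_blind_zone, vehicle_state):
    # S102: object detection -> list of dicts with type info and position
    objects = detect(frame)
    # S103: keep only objects whose position falls inside the blind zone
    targets = [obj for obj in objects if in_blind_zone(obj["position"])]
    # S104: generate a monitoring result per blind-zone target, combining
    # the object's type/position with the vehicle's driving state
    return [{"object": obj, "steering": vehicle_state["steering"]}
            for obj in targets]
```

For example, with a stub detector returning two objects and a blind-zone test on lateral position, only the nearby pedestrian would appear in the result.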
In a possible implementation, the monitoring result includes alarm information, and the driving state of the target vehicle includes steering information of the target vehicle; generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: determining the level of the alarm information according to the type information and position of the target object and the steering information of the target vehicle; and generating and prompting the alarm information of the determined level.
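The disclosure does not specify how alarm levels are assigned, so the following mapping is purely illustrative: vulnerable road users (pedestrians, cyclists) start at a higher base level than vehicles, and steering toward the side where the object sits raises the level by one.

```python
# Illustrative only: the level scheme, the type categories, and the
# side/steering encoding are assumptions, not the disclosure's rules.

def alarm_level(object_type, object_side, steering):
    """Return an alarm level from the target object's type and side and
    the target vehicle's steering information ("left"/"right"/None)."""
    base = 2 if object_type in ("pedestrian", "cyclist") else 1
    # Turning toward the object makes a collision more likely.
    if steering == object_side:
        base += 1
    return base
```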
In a possible implementation, the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle; generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle. The blind spot monitoring method further includes: controlling the target vehicle to travel based on the vehicle control instruction.
In this way, more targeted and accurate monitoring results can be generated according to the type information and position of the target object and the driving state of the target vehicle.
In a possible implementation, determining the target object located in the field-of-view blind spot of the target vehicle according to the positions of the objects and the blind spot of the target vehicle includes: determining, according to the position of the object in the current frame monitoring image, the current first distance information between the target vehicle and the object in the current frame monitoring image; and determining, according to the current first distance information, the target object located in the field-of-view blind spot of the target vehicle.
In this way, the target objects located in the field-of-view blind spot of the target vehicle can be accurately detected among all the objects included in the current frame monitoring image.
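A minimal sketch of the distance-based membership test described above: an object counts as a blind-zone target when its current first distance to the target vehicle falls within an assumed blind-zone range. The 0.5 m and 8.0 m bounds are illustrative values, not taken from the disclosure.

```python
# Assumed blind-zone distance bounds (illustrative, not from the disclosure).
BLIND_ZONE_NEAR_M = 0.5
BLIND_ZONE_FAR_M = 8.0

def is_blind_zone_target(first_distance_m):
    """Decide blind-zone membership from the current first distance."""
    return BLIND_ZONE_NEAR_M <= first_distance_m <= BLIND_ZONE_FAR_M
```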
In a possible implementation, determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image includes: determining, based on the current frame monitoring image, the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image; and adjusting the to-be-adjusted distance information based on the scale change information between the scales of the object in every two adjacent frames of the multiple historical frame monitoring images collected by the acquisition device, and on the historical first distance information between the object and the target vehicle in each of those historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
In a possible implementation, adjusting the to-be-adjusted distance information to obtain the current first distance information between the target vehicle and the object includes: adjusting the to-be-adjusted distance information until the error amount of the scale change information is minimized, obtaining the adjusted distance information, where the error amount is determined based on the to-be-adjusted distance information, the scale change information, and the historical first distance information corresponding to each of the multiple historical frame monitoring images; and determining the current first distance information based on the adjusted distance information.
In the embodiments of the present disclosure, by continuously optimizing the scale change information between the scale of the object in the current frame monitoring image and its scale in the adjacent historical frame monitoring image, the error of the obtained scale change information can be reduced, thereby improving the stability of the adjusted distance information.
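One plausible reading of this optimization (an assumption, not the disclosure's exact formulation): under a pinhole model the object's image scale is inversely proportional to its distance, so the scale change observed between the last historical frame and the current frame should equal `d_hist_last / d_current`. The candidate distance is then adjusted until the squared error between the predicted and observed scale change is smallest:

```python
import numpy as np

# Hedged sketch: minimize the scale-change prediction error over a grid of
# candidate distances around the initial (to-be-adjusted) estimate.
# The +-50% search window and grid resolution are arbitrary choices.

def adjust_distance(d_to_adjust, observed_scale_change, d_hist_last):
    candidates = np.linspace(0.5 * d_to_adjust, 1.5 * d_to_adjust, 2001)
    # Predicted scale change under the "scale ~ 1/distance" assumption.
    predicted = d_hist_last / candidates
    err = (predicted - observed_scale_change) ** 2
    # The adjusted distance is the candidate with the smallest error amount.
    return float(candidates[np.argmin(err)])
```

For instance, if the object was 5.0 m away in the previous frame and its apparent scale grew by a factor of 1.25, the minimizer lands near 4.0 m.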
In a possible implementation, before determining the current first distance information based on the adjusted distance information, the blind spot monitoring method further includes: performing target detection on the current frame monitoring image to determine the position information of the detection frame of the object contained in the current frame monitoring image; and determining the current second distance information based on the position information of the detection frame and the calibration parameters of the acquisition device. Determining the current first distance information based on the adjusted distance information includes: determining distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each of the multiple historical frame monitoring images, the historical first distance information corresponding to each of those images, and the adjusted distance information; and adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
In the embodiments of the present disclosure, after the distance offset information is obtained, the adjusted distance information can be further adjusted, thereby obtaining highly accurate current distance information between the target vehicle and the object.
In a possible implementation, determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each of the multiple historical frame monitoring images, the historical first distance information corresponding to each of those images, and the adjusted distance information includes: determining, based on the current second distance information and the historical second distance information corresponding to each of the multiple historical frame monitoring images, the first linear fitting coefficient of a first fitting curve fitted from the historical second distance information of each historical frame monitoring image and the current second distance information; determining, based on the historical first distance information corresponding to each of the multiple historical frame monitoring images and the adjusted distance information, the second linear fitting coefficient of a second fitting curve fitted from the historical first distance information of each historical frame monitoring image and the adjusted distance information; and determining, based on the first linear fitting coefficient and the second linear fitting coefficient, the distance offset information for the adjusted distance information.
In a possible implementation, determining the current second distance information based on the position information of the detection frame and the calibration parameters of the acquisition device includes: obtaining, based on the position information of the detection frame, the pixel coordinate value of a set corner point of the detection frame; and determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining the calibration parameters of the acquisition device.
In a possible implementation, the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; determining the current second distance information based on the pixel coordinate value of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining those calibration parameters includes: determining, based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate value of the set corner point of the detection frame, a first pixel height value of the acquisition device relative to the ground; determining, based on the pixel coordinate value of the set corner point, a second pixel height value of the object in the current frame monitoring image relative to the ground; determining, based on the first pixel height value, the second pixel height value, and the first height value, a second height value of the object relative to the ground; and determining the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
In the embodiments of the present disclosure, when the complete detection frame of the object in the current frame monitoring image can be detected, the actual height value of the object can be obtained quickly and accurately by introducing the pixel coordinate value of the lane-line vanishing point and the calibration parameters of the acquisition device, and the current second distance information between the target vehicle and the object can then be determined quickly and accurately.
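The geometry described above can be sketched with a standard pinhole-camera argument, under two assumptions not stated explicitly in the disclosure: the camera is roughly level so the lane-line vanishing point sits on the horizon row, and the bottom edge of the detection frame rests on the ground.

```python
# Hedged pinhole sketch of the current second distance computation.
# y_top / y_bottom are the image rows of the detection frame's top and
# bottom corners, y_vp the row of the lane-line vanishing point (horizon).

def second_distance(y_top, y_bottom, y_vp, cam_height_m, focal_px):
    # First pixel height value: the camera's height expressed in pixels at
    # the object's range, i.e. the gap from the box bottom to the horizon.
    h1_px = y_bottom - y_vp
    # Second pixel height value: the object's height in the image.
    h2_px = y_bottom - y_top
    # Second height value: the object's physical height, by proportionality.
    obj_height_m = cam_height_m * h2_px / h1_px
    # Similar triangles: distance Z = focal * H_object / h_pixels.
    return focal_px * obj_height_m / h2_px
```

For example, with the horizon at row 300, a box spanning rows 400 to 500, a 1.5 m camera height and a 1000 px focal length, the distance comes out to 7.5 m.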
In a possible implementation, determining the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image includes: obtaining the scale change information between the scale of the object in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determining the to-be-adjusted distance information based on the scale change information and the historical first distance information corresponding to the adjacent historical frame monitoring image.
本公开实施例中,通过与当前帧监测图像相邻的历史帧监测图像对应的准确度较高的历史第一距离信息,以及所述对象在当前帧监测图像和与当前帧监测图像相邻的历史帧监测图像中的尺度之间的尺度变化信息,可以较为准确的得到待调整距离信息,以便在后期基于该待调整距离信息确定当前第一距离信息时,能够提高调整速度。In the embodiment of the present disclosure, the historical first distance information with high accuracy corresponding to the monitoring image of the historical frame adjacent to the monitoring image of the current frame, and the monitoring image of the object in the current frame and the monitoring image of the current frame adjacent to the monitoring image of the current frame are used. The scale change information between scales in the historical frame monitoring image can more accurately obtain the distance information to be adjusted, so that the adjustment speed can be improved when the current first distance information is determined based on the distance information to be adjusted later.
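Under the pinhole model, apparent scale is inversely proportional to distance, so the adjacent-frame relation above admits a one-line sketch (the function name and the inverse-proportionality reading are illustrative assumptions, not stated verbatim in the patent):

```python
def to_be_adjusted_distance(hist_first_distance, scale_change):
    """Propagate the adjacent frame's historical first distance to the
    current frame using the object's scale change between the two frames."""
    # scale_change > 1 means the object looks larger in the current frame,
    # i.e. it has come closer; the distance shrinks by the same factor.
    return hist_first_distance / scale_change
```

For example, if the adjacent historical frame gave a first distance of 10 m and the object's scale grew by a factor of 1.25, the to-be-adjusted distance is 8 m.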
In a possible implementation, the scale change information between the scales of the object in two adjacent frames of monitoring images is determined as follows: separately extracting first position information, in the earlier of the two adjacent frames, of a plurality of feature points contained in the object, and second position information of those feature points in the later frame; and determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames.
In a possible implementation, determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images includes: determining, based on the first position information, a first scale value, in the earlier frame, of a target line segment formed by the plurality of feature points contained in the object; determining, based on the second position information, a second scale value of the target line segment in the later frame; and determining, based on the first scale value and the second scale value, the scale change information between the scales of the object in the two adjacent frames.
In the embodiments of the present disclosure, by extracting the positions, in the monitoring images, of the plurality of feature points contained in the object, the position of the object in the monitoring images can be represented more accurately, yielding more accurate scale change information, so that more accurate current first distance information can be obtained when the to-be-adjusted distance information is adjusted based on that scale change information.
In a second aspect, an embodiment of the present disclosure provides a blind spot monitoring apparatus, including:
an acquisition module, configured to acquire a current-frame monitoring image captured by an acquisition device on a target vehicle;
a detection module, configured to perform object detection on the current-frame monitoring image to obtain type information and positions of objects included in the image;
a determining module, configured to determine, according to the positions of the objects and the blind zone of the target vehicle's field of view, a target object located in that blind zone;
a generating module, configured to generate a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the blind spot monitoring method described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that, when run by a processor, performs the steps of the blind spot monitoring method described in the first aspect.
To make the above objects, features, and advantages of the present disclosure more apparent and easier to understand, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly introduced below. The drawings are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and therefore should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
FIG. 1 shows a flowchart of a blind spot monitoring method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of determining a field-of-view blind zone provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a specific method for determining current first distance information provided by an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a method for determining scale change information provided by an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method for determining to-be-adjusted distance information provided by an embodiment of the present disclosure;
FIG. 6 shows a flowchart of a method for determining current first distance information provided by an embodiment of the present disclosure;
FIG. 7 shows a flowchart of a method for determining current second distance information provided by an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of the positional relationship among a target device, an acquisition device, and a target object provided by an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a detection frame of a target object provided by an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of the principle of determining current second distance information provided by an embodiment of the present disclosure;
FIG. 11 shows a schematic diagram of a scenario for determining current second distance information provided by an embodiment of the present disclosure;
FIG. 12 shows a schematic structural diagram of a blind spot monitoring apparatus provided by an embodiment of the present disclosure;
FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not require further definition or explanation in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may denote three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein denotes any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Research has found that when radar is used to monitor target objects in a blind zone, the point cloud obtained by the radar scan covers the entire blind zone, so the monitoring picks up not only vehicles, pedestrians, and signboards but also other objects of no concern. With this kind of blind zone monitoring, an alarm is raised as soon as the radar detects any object within the blind zone of the vehicle's field of view. In reality, however, not every object located in the blind zone affects driving safety, which results in many invalid alarms; current blind zone monitoring methods therefore suffer from poor monitoring performance.
Based on the above research, embodiments of the present disclosure provide a blind spot monitoring method: a current-frame monitoring image captured by an acquisition device on a target vehicle is obtained; object detection is performed on the current-frame monitoring image to determine the type information and positions of the objects it contains; a target object located in the blind zone of the target vehicle's field of view is then determined according to the positions of the objects and the blind zone; and a monitoring result is generated according to the type information and position of the target object and the driving state of the target vehicle. In this way, different monitoring results can be generated for different types of target objects, thereby improving blind zone monitoring performance and driving safety.
To facilitate understanding of the present embodiments, a blind spot monitoring method disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the blind spot monitoring method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a computing device, a vehicle-mounted device, or the like. In some possible implementations, the blind spot monitoring method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a flowchart of a blind spot monitoring method provided by an embodiment of the present disclosure, the method includes the following steps S101 to S104:
S101: acquiring a current-frame monitoring image captured by an acquisition device on a target vehicle;
S102: performing object detection on the current-frame monitoring image to obtain type information and positions of objects included in the current-frame monitoring image;
S103: determining, according to the positions of the objects and the blind zone of the target vehicle's field of view, a target object located in that blind zone;
S104: generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
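Steps S101 to S104 can be sketched as a single per-frame function. Everything named here is illustrative: `detect`, `in_blind_zone`, the dictionary keys, and the choice of alert-worthy types are assumptions made for the sketch, not definitions from the patent:

```python
def monitor_blind_spot(frame, detect, in_blind_zone, vehicle_moving):
    """One monitoring cycle over a single frame.

    detect(frame) -> list of {"type": str, "position": tuple}   (S102)
    in_blind_zone(position) -> bool                              (S103)
    vehicle_moving: part of the target vehicle's driving state   (S104)
    """
    objects = detect(frame)                                   # S102
    targets = [o for o in objects if in_blind_zone(o["position"])]  # S103
    # S104: raise alerts only for object types that affect driving safety,
    # and only while the target vehicle is moving, so that parked or
    # irrelevant detections do not produce invalid alarms.
    return [o for o in targets
            if vehicle_moving and o["type"] in ("vehicle", "pedestrian")]
```

This illustrates the point of the method: unlike radar-based schemes, an object in the blind zone triggers an alert only when its detected type and the vehicle's driving state make it safety-relevant.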
Regarding S101 above, the corresponding target vehicle differs in different scenarios.
For example, in a scenario where a driver drives a vehicle, the target vehicle may include the vehicle driven by the driver; in an autonomous driving scenario, it may include an autonomous vehicle; in a warehousing and freight scenario, it may include a freight robot. The embodiments of the present disclosure are described by taking the target vehicle being a vehicle as an example.
The target vehicle may also carry an acquisition device, which may be a monocular camera mounted on the target vehicle for capturing images while the target vehicle is driving. For example, if the target area includes the blind zone of the vehicle's field of view, the acquisition device may be installed on a pillar of the vehicle with its lens facing the blind zone.
Different target vehicles may have different blind zones because their models differ. The embodiments of the present disclosure determine the field-of-view blind zone with reference to the national standard; see FIG. 2, which is a schematic diagram of determining a field-of-view blind zone provided by an embodiment of the present disclosure. In FIG. 2, a vehicle 1 carries an acquisition device 2, and the blind zone included in the target area captured by the acquisition device 2 includes the positions indicated by 3 and 4.
Specifically, the current-frame monitoring image captured by the acquisition device on the target vehicle may be obtained as follows: obtaining a monitoring video captured by the acquisition device on the target vehicle of a target area, and determining the current-frame monitoring image from the monitoring video; wherein the target area includes the area within the shooting field of view of the acquisition device.
When the acquisition device shoots the target area, the shooting direction may be preset; once the acquisition device is mounted on the target vehicle, the target area to be shot can be determined as the area within the device's field of view. Whether the vehicle is driving or stopped, the acquisition device can capture images of the target area, obtain a monitoring video, and determine the current-frame monitoring image from it.
When determining the current-frame monitoring image from the monitoring video, a video frame may be selected from the monitoring video; for example, the captured video frame closest to the current time is taken as the current-frame monitoring image.
Regarding S102 above, after the current-frame monitoring image is acquired in S101, object detection may be performed on it to obtain the type information and positions of the objects it includes.
In a specific implementation, when object detection is performed on the current-frame monitoring image to determine the type information of the objects it includes, a pre-trained object detection neural network may be used, for example. When performing object detection on the current-frame monitoring image, at least one of the following object detection algorithms may be adopted: a convolutional neural network (CNN), a region-based CNN (R-CNN), Fast R-CNN, and Faster R-CNN.
When an object detection algorithm is used to detect objects in the current-frame monitoring image, the detectable objects include, for example, other driving vehicles, pedestrians, road facilities, and road obstacles.
When object detection is performed on the current-frame monitoring image, the positions of the objects in the image can also be obtained. From the position of a detected object in the image, the actual position of the object during the actual driving of the target vehicle can be further determined.
Regarding S103 above, using the positions of the objects and the blind zone of the target vehicle's field of view, the target object located in the blind zone may be determined as follows: determining current first distance information between the target vehicle and an object in the current-frame monitoring image according to the position of the object in the current-frame monitoring image; and determining, according to the current first distance information, the target object located in the blind zone of the target vehicle's field of view.
At present, when images captured by a monocular camera are used to determine distance, the monocular camera mounted on a smart vehicle encounters problems such as road bumps or occlusion by obstacles as road conditions change while the vehicle is driving. In this case, when ranging is performed based on the detection frame corresponding to the object in the current-frame monitoring image, the accurate distance to the object may not be obtained; for example, because of road bumps, the size of the detection frame in the captured monitoring images is unstable, so when the distance to the object is continuously estimated from the detection frame, the resulting distance between the smart vehicle and the object has poor temporal stability.
In order to detect the distance between the target vehicle and the object as accurately and stably as possible from the images captured by the monocular camera, the embodiments of the present disclosure further propose a distance detection scheme.
Referring to FIG. 3, which is a flowchart of a specific method for determining current first distance information provided by an embodiment of the present disclosure, the method includes the following steps S301 to S302:
S301: determining, based on the current-frame monitoring image, to-be-adjusted distance information between the target vehicle and the object in the current-frame monitoring image.
For example, the object may include, but is not limited to, a vehicle, a pedestrian, a fixed obstacle, and the like. The embodiments of the present disclosure take the object being a vehicle as an example.
For example, the current-frame monitoring images provided in the embodiments of the present disclosure are all monitoring images in which the object is not detected for the first time. If the current-frame monitoring image is the one in which the object is detected for the first time, the current second distance information to the object can be determined directly from the position information of the object in the current-frame monitoring image, the parameter information of the acquisition device obtained in the above calibration process, and the pixel coordinates of the vanishing point, and the current second distance information can be used directly as the current first distance information; the process of determining the current second distance information is described in detail later.
For example, when the current-frame monitoring image is not the first in which the object is captured, the current first distance information corresponding to the current-frame monitoring image, and the historical first distance information corresponding to each historical-frame monitoring image, all denote distance information obtained after adjustment.
For example, when the to-be-adjusted distance information between the target vehicle and the object is determined based on the current-frame monitoring image, it may be determined from the historical first distance information corresponding to the historical-frame monitoring image adjacent to the current frame and the scale change information of the object between the current-frame monitoring image and that adjacent historical frame, and the to-be-adjusted distance information is then adjusted subsequently.
S302: adjusting the to-be-adjusted distance information based on the scale change information of the object between every two adjacent frames among the plurality of historical-frame monitoring images captured by the acquisition device, and the historical first distance information between the object in each historical-frame monitoring image and the target vehicle, to obtain the current first distance information between the target vehicle and the object.
For example, the scale change information of the object between two adjacent frames (for example, monitoring image i and monitoring image j) among the plurality of historical-frame monitoring images captured by the acquisition device includes the ratio of the scale of the object in the later frame j to its scale in the earlier frame i; the specific determination process is described later.
For example, the embodiments of the present disclosure determine the historical first distance information between the target vehicle and the object corresponding to each historical-frame monitoring image in the same way as the current first distance information between the target vehicle and the object; therefore, the process of determining the historical first distance information will not be repeated here.
In the embodiments of the present disclosure, the to-be-adjusted distance information obtained based on the current monitoring image can be adjusted according to the scale change information of the object between adjacent frames among the plurality of historical-frame monitoring images and the historical first distance information, already adjusted in the historical process, between the target vehicle and the object. In this way, the change of the distance between the target vehicle and the object across two adjacent monitoring frames is relatively smooth and can truly reflect the actual change of the distance between the target vehicle and the object while driving, which improves the temporal stability of the predicted distance between the target vehicle and the object.
In addition, the scale change information of the object between two adjacent monitoring frames likewise reflects the change of the distance between the target vehicle and the object, and the historical first distance information corresponding to each historical-frame monitoring image is relatively accurate distance information obtained after adjustment. Therefore, after the to-be-adjusted distance information is adjusted based on the scale change information of the object between adjacent frames among the plurality of historical-frame monitoring images and the historical first distance information between the target vehicle and the object corresponding to each historical-frame monitoring image, relatively accurate current first distance information can be obtained.
In the embodiments of the present disclosure, when the current first distance information is determined, the to-be-adjusted distance information obtained based on the current monitoring image can be adjusted according to the scale change information of the object between adjacent frames among the plurality of historical-frame monitoring images and the historical first distance information, already adjusted in the historical process, between the target vehicle and the object. This makes the change of the distance between the same object and the target vehicle across two adjacent monitoring frames relatively smooth, truly reflects the actual change of the distance between the target vehicle and the object while driving, and improves the temporal stability of the predicted distance between the target vehicle and the object.
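One plausible realization of this adjustment, offered as an illustrative assumption since the patent does not fix a formula at this point, propagates each historical first distance to the current frame through the product of the intervening scale-change ratios (distance varying inversely with scale) and averages those predictions with the raw to-be-adjusted value:

```python
def adjust_first_distance(hist_distances, scale_changes, to_adjust):
    """hist_distances[k]: historical first distance at historical frame k.
    scale_changes[k]: scale ratio from frame k to the next frame, the last
    entry being the ratio from the latest historical frame to the current
    frame, so len(scale_changes) == len(hist_distances).
    to_adjust: to-be-adjusted distance from the current-frame image."""
    predictions = [to_adjust]
    for k, d in enumerate(hist_distances):
        ratio = 1.0
        for r in scale_changes[k:]:
            ratio *= r          # cumulative scale change up to the current frame
        predictions.append(d / ratio)
    # Averaging the mutually consistent predictions smooths frame-to-frame
    # jitter caused by unstable detection frames (e.g. road bumps).
    return sum(predictions) / len(predictions)
```

When the historical distances and scale changes are all consistent with the same motion, every prediction agrees and the adjustment leaves the value unchanged; when one frame's detection is noisy, the averaging pulls the estimate back toward the stable history.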
First, regarding the scale change information mentioned above, as shown in FIG. 4, the scale change information between the scales of the object in two adjacent frames of monitoring images may be determined in the following manner, including the following steps S401 to S402:
S401: separately extracting first position information, in the earlier of the two adjacent monitoring frames, of a plurality of feature points contained in the object, and second position information of those feature points in the later frame.
For example, target detection may be performed on the monitoring image based on a pre-trained target detection model to obtain a detection frame representing the position of the object in the monitoring image, and then a plurality of feature points constituting the object may be extracted within the detection frame. These feature points are points of the object where the pixels change sharply, such as inflection points and corner points.
S402: determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
For example, the line connecting any two of the plurality of feature points in the same monitoring frame can form a line segment. Thus, from the first position information of any two feature points in the earlier frame, the scale of the line segment they form in the earlier frame can be obtained; likewise, from the second position information of any two feature points in the later frame, the scale of the line segment they form in the later frame can be obtained. In this way, the scales of multiple line segments on the object in the earlier frame and in the later frame can be obtained respectively.
Further, the scale change information of the object between the two adjacent monitoring frames can be determined from the scales of the multiple line segments in the earlier frame and their scales in the later frame.
Specifically, for S402, determining the scale change information between the scales of the object in the two adjacent frames of monitoring images based on the first position information and the second position information includes the following S4021 to S4023:

S4021, based on the first position information, determine a first scale value, in the earlier frame of monitoring image, of a target line segment formed by feature points included in the object.

S4022, based on the second position information, determine a second scale value of the target line segment in the later frame of monitoring image.

Exemplarily, there are n target line segments, where n is greater than or equal to 1 and less than a set threshold. Based on the first position information of the feature points included in each target line segment, the first scale value corresponding to that target line segment can be obtained; and based on the second position information of the feature points included in each target line segment, the second scale value corresponding to that target line segment can be obtained.

S4023, based on the first scale value and the second scale value, determine the scale change information between the scales of the object in the two adjacent frames of monitoring images.

Exemplarily, the ratio of the second scale value to the first scale value of any target line segment may represent the scale change information corresponding to that target line segment; the scale change information of the object between the two adjacent frames of monitoring images is then determined from the scale change information of the multiple target line segments. For example, the average of the scale change information corresponding to a set number of target line segments may be taken as the scale change information of the object between the two adjacent frames of monitoring images.
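As an illustration of S4021 to S4023, the per-segment ratios can be computed and averaged as follows. This is a minimal sketch, not the disclosed implementation: the function and variable names are hypothetical, and the feature points are assumed to be already extracted and matched between the two frames by an upstream detector/tracker.

```python
import itertools
import math

def scale_change(points_prev, points_curr):
    """Estimate the scale change of an object between two adjacent frames.

    points_prev / points_curr: (x, y) pixel positions of the SAME feature
    points in the earlier and later frame (assumed already matched).
    """
    ratios = []
    # Any two feature points define a target line segment; its length in a
    # frame is the segment's scale in that frame (S4021 / S4022).
    for i, j in itertools.combinations(range(len(points_prev)), 2):
        first_scale = math.dist(points_prev[i], points_prev[j])
        second_scale = math.dist(points_curr[i], points_curr[j])
        if first_scale > 0:
            # Per-segment scale change: second scale / first scale (S4023).
            ratios.append(second_scale / first_scale)
    # Average the per-segment ratios, as suggested in the example above.
    return sum(ratios) / len(ratios)
```

For instance, an object whose image uniformly doubles in size between the two frames yields a scale change of 2.0, since every segment ratio is 2.0.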
Compared with representing the scale of the object by the position information of two corner points of the detection box in the monitoring image, for example by the position information of the upper-left and lower-right corner points of the detection box, the embodiments of the present disclosure determine the scale change information of the object in two adjacent frames of monitoring images by selecting the position information of a plurality of feature points in each of the two frames. By extracting the position information, in the monitoring image, of multiple feature points included in the object, the position of the object in the monitoring image can be represented more accurately, so that more accurate scale change information is obtained.

In the embodiments of the present disclosure, by extracting the position information of the multiple feature points included in the object in the monitoring image, the position of the object in the monitoring image can be represented more accurately, so that more accurate scale change information is obtained, and more accurate current first distance information can then be obtained when the distance information to be adjusted is adjusted based on this scale change information.
For the above S302, when determining, based on the current frame of monitoring image, the distance information to be adjusted between the target vehicle and the object in the current frame, as shown in FIG. 5, the following S501 to S502 may be included:

S501, acquire the scale change information between the scale of the object in the current frame of monitoring image and its scale in the historical frame of monitoring image adjacent to the current frame.

Exemplarily, the historical frame adjacent to the current frame refers to the frame whose acquisition time immediately precedes the current frame. The scale change information of the object between the current frame and the adjacent historical frame may be represented by the ratio of the scale of the object in the current frame to its scale in the adjacent historical frame; the specific determination process is described later.

S502, determine the distance information to be adjusted based on the scale change information and the historical first distance information corresponding to the historical frame adjacent to the current frame.

Considering that, as the target vehicle and the object approach each other, the scale of the object in the acquired monitoring images gradually increases, there is a proportional relationship between the scales of the object in two adjacent frames of monitoring images and the distances between the target vehicle and the object corresponding to those two frames. On this basis, the distance information to be adjusted can be determined by the following formula (1):
d_0_scale = scale × D_1_final    (1)
where d_0_scale denotes the distance information to be adjusted; scale denotes the ratio of the scale of the object in the current frame of monitoring image to its scale in the adjacent historical frame; and D_1_final denotes the historical first distance information corresponding to the historical frame adjacent to the current frame.
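Formula (1) can be transcribed directly (hypothetical names; `scale` and `d_hist_first` correspond to scale and D_1_final above):

```python
def distance_to_adjust(scale, d_hist_first):
    """Formula (1): d_0_scale = scale * D_1_final."""
    return scale * d_hist_first

# For example, a scale ratio of 0.95 applied to a previous adjusted
# distance of 20 m gives a distance-to-adjust of 19 m.
```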
In the embodiments of the present disclosure, through the historical first distance information corresponding to the historical frame adjacent to the current frame, together with the scale change information of the object between the current frame and that adjacent historical frame, relatively accurate distance information to be adjusted can be obtained, so that the adjustment speed can be increased when the current first distance information is later determined based on this distance information to be adjusted.

Exemplarily, the acquired scale change information of the object between the current frame and the adjacent historical frame may contain errors, for example when the camera shakes while capturing the current monitoring image or when the wrong object is detected. The scale change information obtained in such cases may exhibit an abrupt change compared with the scale change information between adjacent frames among the multiple historical frames of monitoring images, and the resulting distance information to be adjusted may likewise exhibit an abrupt change compared with the adjacent historical first distance information. In this case, the distance information to be adjusted can be adjusted by using the scale change information of the object between adjacent frames among the multiple historical frames of monitoring images acquired by the acquisition device, together with the historical first distance information corresponding to each of those historical frames.
Specifically, when the distance information to be adjusted is adjusted to obtain the current first distance information between the target vehicle and the object, as shown in FIG. 6, the following S601 to S603 may be included:

S601, adjust the distance information to be adjusted until the error amount of the scale change information is minimized, to obtain adjusted distance information; where the error amount is determined based on the distance information to be adjusted, the scale change information, and the historical first distance information corresponding to each of the multiple historical frames of monitoring images.

Exemplarily, the error amount representing the scale change information of the object between the current frame of monitoring image and the adjacent historical frames may be predicted based on the following formula (2):
E = Σ_{t=1}^{T−1} L_t · ( d_0_scale − ( ∏_{i=1}^{t} scale_i ) · D_t_final )²    (2)
where E denotes the error amount of the scale change information of the object between the current frame of monitoring image and the adjacent historical frames; T is the number of frames of monitoring images containing the object, and T is less than or equal to a preset number of frames; t indexes the historical frames, denoting the t-th historical frame counted from the current frame (for example, t = 1 denotes the first historical frame counted from the current frame); L_t denotes the preset weight of the t-th historical frame when determining the error amount E; D_t_final denotes the historical first distance information corresponding to the t-th historical frame; and scale_i denotes the scale change information between the i-th and the (i+1)-th historical frames counted from the current frame.
Exemplarily, the above formula (2) can be optimized in a variety of ways; for example, d_0_scale in formula (2) can be adjusted by methods including but not limited to Newton-type gradient descent, and the adjusted distance information D_0_scale is obtained when E is minimized.

By continuously optimizing, in the above manner, the scale change information of the object between the current frame of monitoring image and the adjacent historical frames, the error in the acquired scale change information between the current frame and the adjacent historical frames can be reduced, thereby improving the stability of the determined adjusted distance information.
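S601 can be sketched with plain gradient descent over d_0_scale. This is an illustrative reconstruction under an assumption: E is taken here as a weighted sum of squared differences between the distance being adjusted and each historical first distance D_t_final propagated through the cumulative scale change, matching the variable definitions of formula (2); all function and variable names are hypothetical.

```python
def minimise_error(d0_scale, scales, d_hist_final, weights,
                   lr=0.05, steps=2000):
    """Adjust d until the assumed error amount E is numerically minimal.

    scales[t-1]       : scale change between historical frames t and t+1
                        (scales[0] links the current frame to frame 1).
    d_hist_final[t-1] : historical first distance D_t_final of frame t.
    weights[t-1]      : preset weight L_t of frame t.
    """
    # Cumulative scale change from the current frame back to frame t.
    cum, c = [], 1.0
    for s in scales:
        c *= s
        cum.append(c)
    d = d0_scale
    for _ in range(steps):
        # dE/dd for E = sum_t L_t * (d - cum_t * D_t_final)^2
        grad = sum(2.0 * w * (d - ct * dt)
                   for w, ct, dt in zip(weights, cum, d_hist_final))
        d -= lr * grad
    return d
```

With weights summing to 1, this converges to the L_t-weighted mean of the historical first distances propagated through the cumulative scale change, which smooths out an abrupt jump in the distance to be adjusted.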
S602, determine the current first distance information based on the adjusted distance information.

Exemplarily, after the adjusted distance information is obtained, in order to further improve its accuracy, the adjusted distance information may be adjusted once more to obtain the current first distance information between the target vehicle and the object.

Specifically, before the current first distance information is determined based on the adjusted distance information, as shown in FIG. 7, the blind area monitoring method provided by the embodiments of the present disclosure further includes the following S701 to S702:
S701, perform target detection on the current frame of monitoring image, and determine position information of the detection box of the object included in the current frame.

S702, determine the current second distance information based on the position information of the detection box and the calibration parameters of the acquisition device.

Exemplarily, before the target vehicle travels, the acquisition device provided on the target vehicle can be calibrated. For example, the acquisition device is mounted on the top of the target vehicle, as shown in FIG. 6, with the target vehicle positioned midway between parallel lane lines, and the optical axis of the acquisition device kept parallel to the horizontal ground and to the direction of travel of the target vehicle. In this manner, the focal lengths (f_x, f_y) of the acquisition device and its height H_c relative to the ground can be obtained.

Exemplarily, target detection can be performed on the current frame of monitoring image by a pre-trained target detection model to obtain the object included in the current frame and the detection box corresponding to that object, as shown in FIG. 8. The position information of the detection box may include the position information of its corner points in the current frame, for example the pixel coordinate values of corner points A, B, C, and D in the current frame.
Further, according to the pinhole imaging principle, the following formulas (3) and (4) can be obtained:
D_0 = f_y × H_y / h_b    (3)

D_0 = f_x × H_x / w_b    (4)
where H_x denotes the actual width of the object; H_y denotes the actual height of the object relative to the ground; w_b denotes the pixel width of the object in the current frame of monitoring image, which can be determined from the pixel width of the object's detection box ABCD; h_b denotes the pixel height of the object relative to the ground, which can be determined from the pixel height of the detection box ABCD; and D_0 denotes the current second distance information between the target vehicle and the object.

Exemplarily, in one implementation, H_x and H_y can be determined from the detected type of the object. For example, when the object is a vehicle, the actual width and actual height of the object can be determined based on the detected vehicle type and a pre-stored correspondence between vehicle types and vehicle heights and widths.

Exemplarily, the width w_b of the object in the current frame can be determined from the pixel coordinate values of corner points A and B of the detection box ABCD in FIG. 9 in the current frame, or from the pixel coordinate values of corner points C and D; the height h_b of the object in the current frame can be determined from the pixel coordinate values of corner points B and C in the current frame, or from the pixel coordinate values of corner points A and D, which will not be repeated here.
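The two pinhole relations can be transcribed directly (assumed function names; the assignment of the height relation to formula (3) follows its later use in S70224 together with H_y and h_b):

```python
def distance_from_height(f_y, H_y, h_b):
    """Formula (3): D_0 = f_y * H_y / h_b (actual vs. pixel height)."""
    return f_y * H_y / h_b

def distance_from_width(f_x, H_x, w_b):
    """Formula (4): D_0 = f_x * H_x / w_b (actual vs. pixel width)."""
    return f_x * H_x / w_b
```

For example, with a focal length f_y of 1200 pixels, an object 1.5 m tall that spans 90 pixels in the image is about 20 m away.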
Considering that the type of the object may not be recognizable, in which case the actual height or actual width of the object cannot be obtained directly, the embodiments of the present disclosure are described taking the determination of the actual height of the object as an example. For the above S702, determining the current second distance information based on the position information of the detection box and the calibration parameters of the acquisition device includes the following S7021 to S7022:

S7021, based on the position information of the detection box, acquire the pixel coordinate values of set corner points of the detection box.

S7022, determine the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining the calibration parameters of the acquisition device.
The principle of determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining those calibration parameters is explained below with reference to FIG. 10:

Exemplarily, during the initial calibration of the acquisition device, the target vehicle can be parked between parallel lane lines. The distant parallel lane lines intersect at a point when projected onto the image plane of the acquisition device, which may be called the lane-line vanishing point; it approximately coincides with point V in FIG. 9. The lane-line vanishing point can represent the projection position of the acquisition device in the monitoring image, and its pixel coordinate value can represent the pixel coordinate value of the acquisition device in the current frame of monitoring image.

As shown in FIG. 10, the distance between points E and G can represent the actual height H_c of the acquisition device relative to the ground; the distance between points F and G can represent the actual height H_y of the object relative to the ground; the distance between points M and N can represent the pixel height h_b of the object relative to the ground; and the distance between points M and V can represent the pixel height of the acquisition device relative to the ground.

Further, as shown in FIG. 10, according to the pinhole imaging principle, when the acquisition device captures the current frame of monitoring image, the ratio of the actual height H_c of the acquisition device relative to the ground to the actual height H_y of the object relative to the ground is equal to the ratio of the pixel height of the acquisition device relative to the ground to the pixel height h_b of the object relative to the ground. Thus, after the pixel coordinate values of points M, V, and N are determined, the pixel height h_b of the object relative to the ground and the pixel height of the acquisition device relative to the ground can be further determined, so that the actual height H_y of the object relative to the ground can be predicted.
Further, after the actual height of the object relative to the ground is predicted, the current second distance information can be determined in combination with the above formula (3).

The principle of determining the current second distance information has been introduced above with reference to FIG. 10; the specific process of determining the current second distance information is introduced below with reference to FIG. 11:

As shown in FIG. 11, after de-distortion processing is performed on the current frame of monitoring image, an image coordinate system is established for the current frame, in which the pixel coordinate value (x_v, y_v) of the lane-line vanishing point V is marked, along with the pixel coordinate value (x_tl, y_tl) of the upper-left corner point A and the pixel coordinate value (x_br, y_br) of the lower-right corner point C of the object's detection box. Further, the distance between points M and N shown in FIG. 9 can be determined from the pixel coordinate values of corner points A and C along the y-axis, and the distance between points M and V shown in FIG. 9 can be determined from the pixel coordinate values of corner point C and point V along the y-axis.
Specifically, the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and the focal length of the acquisition device. For the above S7022, determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used when determining those calibration parameters includes the following S70221 to S70224:

S70221, determine a first pixel height value of the acquisition device relative to the ground based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate values of the set corner points of the detection box.

With reference to FIG. 11 above, the first pixel height value can be obtained as y_br − y_v.

S70222, determine a second pixel height value, relative to the ground, of the object in the current frame of monitoring image based on the pixel coordinate values of the set corner points.

For example, the difference between the pixel coordinate values of corner points A and C along the y-axis in FIG. 11 above can be taken as the second pixel height value, which can be denoted by h_b.

S70223, determine a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value.
H_y = ( h_b / (y_br − y_v) ) × H_c
where H_c denotes the first height value, representing the actual height of the acquisition device relative to the ground, which can be obtained when the acquisition device is calibrated; and H_y denotes the second height value, representing the actual height of the object relative to the ground.

S70224, determine the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.

Exemplarily, the current second distance information can be determined by the above formula (3).

In the embodiments of the present disclosure, when the complete detection box corresponding to the object in the current frame of monitoring image can be detected, the actual height value of the object can be obtained quickly and accurately by introducing the pixel coordinate value of the lane-line vanishing point and the calibration parameters of the acquisition device, and the current second distance information between the target vehicle and the object can then be determined quickly and accurately.
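Steps S70221 to S70224 can be combined into one short sketch (hypothetical function and variable names, following the coordinate labels of FIG. 11; H_c and f_y are the calibrated parameters):

```python
def second_distance(y_tl, y_br, y_v, H_c, f_y):
    """Estimate the target-vehicle-to-object distance from one frame.

    y_tl, y_br : y pixel coordinates of the box's top-left corner A and
                 bottom-right corner C;
    y_v        : y pixel coordinate of the lane-line vanishing point V;
    H_c        : calibrated height of the acquisition device above ground;
    f_y        : calibrated focal length along the y-axis.
    """
    first_px_height = y_br - y_v        # S70221: device pixel height
    h_b = y_br - y_tl                   # S70222: object pixel height
    H_y = h_b / first_px_height * H_c   # S70223: object actual height
    return f_y * H_y / h_b              # S70224: formula (3)
```

Note that substituting the S70223 expression into formula (3) cancels h_b, so under these assumptions the result equals f_y × H_c / (y_br − y_v), depending only on the box bottom, the vanishing point, and the calibration.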
After the current second distance information between the target vehicle and the object is obtained, for the above S603, determining the current first distance information based on the adjusted distance information includes the following S6031 to S6032:

S6031, determine distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each of the multiple historical frames of monitoring images, the historical first distance information corresponding to each of those historical frames, and the adjusted distance information.

Exemplarily, the current second distance information and each piece of historical second distance information are distance information between the target vehicle and the object determined based on a single frame of monitoring image. With this approach, if an accurate and complete detection box of the object can be detected when determining second distance information, highly accurate second distance information between the target vehicle and the object can be obtained based on the position information of that detection box; conversely, if an accurate detection box cannot be detected, or the detected detection box is incomplete, the accuracy of the obtained second distance information is low. The multiple pieces of second distance information determined in this way are therefore accurate but fluctuate considerably.

Exemplarily, each piece of historical first distance information is distance information determined based on multiple frames of monitoring images, and the adjusted distance information is likewise obtained by adjustment based on multiple pieces of historical first distance information; therefore, the multiple pieces of historical first distance information and the adjusted distance information fluctuate little relative to one another. However, because the scale change information between adjacent frames is used when determining the historical first distance information and the adjusted distance information, and the determination of the scale change information depends on the position information of the object's feature points in the monitoring images, any error is accumulated. The accuracy of the multiple pieces of historical first distance information and of the adjusted distance information determined in this way is therefore lower than the accuracy of second distance information determined based on a complete detection box.

Considering that the current second distance information and the historical second distance information determined based on detection boxes have high accuracy, while the historical first distance information determined based on scale change information and the adjusted distance information have high stability, in order to obtain current first distance information that is both accurate and stable, the adjusted distance information can be further adjusted using the distance information between the target vehicle and the object determined for the multiple frames of monitoring images in each of these two ways.
S6032: Adjust the adjusted distance information based on the distance offset information to obtain the current first distance information.
Exemplarily, after the distance offset information is obtained, the adjusted distance information may be further adjusted based on the distance offset information, so that the current first distance information is more accurate.
In the embodiments of the present disclosure, after the distance offset information is obtained, the adjusted distance information can be further adjusted, thereby obtaining current distance information between the target vehicle and the object with relatively high accuracy.
In one implementation, determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information may include the following S60311 to S60313:
S60311: Based on the current second distance information and the historical second distance information corresponding to each frame of the multiple frames of historical frame monitoring images, determine the first linear fitting coefficients of a first fitting curve fitted from the historical second distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the current second distance information.
Exemplarily, the current second distance information may be denoted by D0, and the multiple pieces of historical second distance information by D1, D2, D3, …; a linear fit over D0 and D1, D2, D3, … yields a first fitting curve composed of the multiple pieces of historical second distance information and the current second distance information, which can be expressed by the following formula (6):
y1 = ax + bx² + c           (6);
During the fitting, the frame numbers 0, 1, 2, 3, … of the monitoring images used in determining the multiple pieces of second distance information are taken as the x values, and the second distance information D0, D1, D2, D3, … corresponding to those frame numbers are taken as the y values and substituted into formula (6), from which the first linear fitting coefficients a, b, and c are obtained.
S60312: Based on the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the adjusted distance information, determine the second linear fitting coefficients of a second fitting curve fitted from the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the adjusted distance information.
Exemplarily, the adjusted distance information may be denoted by D0_scale, and the multiple pieces of historical first distance information by D1_final, D2_final, D3_final, …; a linear fit over D0_scale and D1_final, D2_final, D3_final, … yields a second fitting curve composed of the multiple pieces of historical first distance information and the adjusted distance information, which can be expressed by the following formula (7):
y2 = a′x + b′x² + c′            (7);
During the fitting, the frame numbers 0, 1, 2, 3, … of the monitoring images used in determining the multiple pieces of historical first distance information and the adjusted distance information are taken as the x values, and the adjusted distance information and the multiple pieces of historical first distance information D0_scale, D1_final, D2_final, D3_final, … corresponding to those frame numbers are taken as the y values and substituted into formula (7), from which the second linear fitting coefficients a′, b′, and c′ are obtained.
S60313: Determine the distance offset information for the adjusted distance information based on the first linear fitting coefficients and the second linear fitting coefficients.
Exemplarily, the distance offset information can be determined by the following formula (8):
L = (a/a′ + b/b′ + c/c′)/3           (8);
Using the distance offset information determined in this way, the adjusted distance information can be adjusted according to the following formula (9) to obtain the current first distance information D0_final:
D0_final = D0_scale × L           (9);
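The fitting-and-offset procedure of formulas (6) to (9) can be sketched as follows. The function names and the layout of the two distance lists are illustrative assumptions, not part of the disclosure; the fit itself is the quadratic fit the formulas describe.

```python
import numpy as np

def distance_offset(detection_dists, scale_dists):
    """Fit y = b*x^2 + a*x + c to each distance sequence over frame
    numbers x = 0, 1, 2, ... (formulas (6) and (7)) and combine the
    coefficients into the offset L of formula (8)."""
    x = np.arange(len(detection_dists))
    # np.polyfit returns coefficients highest degree first: [b, a, c]
    b, a, c = np.polyfit(x, detection_dists, 2)    # first fitting curve (6)
    b2, a2, c2 = np.polyfit(x, scale_dists, 2)     # second fitting curve (7)
    return (a / a2 + b / b2 + c / c2) / 3.0        # formula (8)

def current_first_distance(d0_scale, L):
    """Formula (9): correct the adjusted distance by the offset."""
    return d0_scale * L
```

In practice the first sequence would hold the detection-frame distances D0, D1, D2, … and the second the adjusted distance and historical first distances D0_scale, D1_final, D2_final, …; for sequences that are exactly quadratic in the frame number, the fit recovers the coefficients exactly.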
In another implementation, when determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information, the determination can also be performed by means of a Kalman filtering algorithm, and the current first distance information is then further determined based on the Kalman filtering algorithm.
When determining the current first distance information based on the Kalman filtering algorithm, it can be determined by the following formula (10):
D0_final = kal(D0_scale, D0, R, Q)           (10);
Here, R denotes the variance of D0_scale and D1_final, D2_final, D3_final, …, and Q denotes the variance of D0 and D1, D2, D3, …. The distance offset information for D0_scale can be determined from R and Q, and the adjusted distance information is then corrected based on this distance offset information to obtain current first distance information of higher accuracy.
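The disclosure does not spell out the internal form of kal(·). One common scalar realization, shown here purely as an assumed illustration, treats the scale-based estimate as a prior with variance R and the detection-based value as a measurement with variance Q, and fuses them through a single Kalman-gain update:

```python
import statistics

def kal(d0_scale, d0, R, Q):
    """Scalar Kalman-style fusion: prior d0_scale (variance R) updated
    by measurement d0 (variance Q)."""
    K = R / (R + Q)                  # Kalman gain
    return d0_scale + K * (d0 - d0_scale)

# R and Q estimated from the two distance histories, as described above
scale_hist = [10.2, 10.1, 10.3, 10.2]   # D0_scale, D1_final, D2_final, ...
det_hist = [10.8, 9.6, 10.9, 9.5]       # D0, D1, D2, ...
R = statistics.pvariance(scale_hist)
Q = statistics.pvariance(det_hist)
d0_final = kal(scale_hist[0], det_hist[0], R, Q)
```

When Q is much larger than R (the detection-frame distances fluctuate strongly), the gain is small and the stable scale-based estimate dominates, matching the accuracy/stability trade-off described above.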
After the current first distance information is determined, the target object located in the blind spot of the target vehicle can be determined according to the current first distance information between each object in the current frame image and the target vehicle.
Exemplarily, the blind spot corresponding to the target vehicle can be determined in the manner of determining the blind spot of the target vehicle shown in FIG. 2 above. In addition, using the current first distance information and the determined blind spot of the target vehicle, the detected objects that fall within the blind spot, that is, the target objects, can be determined.
Here, the target object may include, for example, some of the above objects: other driven vehicles, pedestrians, road facilities, and road obstacles.
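A minimal sketch of selecting target objects by combining the current first distance information with the blind-zone extent. The data layout, the "side" encoding, and the per-side distance ranges are hypothetical; the disclosure determines the blind zone itself per FIG. 2.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "vehicle", "road_facility"
    side: str          # "left" or "right" of the target vehicle
    distance_m: float  # current first distance information

def objects_in_blind_zone(objects, zone_range_m):
    """Return the detected objects whose current first distance falls
    inside the blind-zone distance range for their side."""
    targets = []
    for obj in objects:
        lo, hi = zone_range_m[obj.side]
        if lo <= obj.distance_m <= hi:
            targets.append(obj)
    return targets
```

The objects returned here are the "target objects" that the subsequent steps classify and warn about.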
For the above S104, the monitoring result can be generated according to the type information and position of the target object and the driving state of the target vehicle.
Here, since the generated monitoring result is determined from images, when the monitoring result is used to guide an autonomous vehicle or to assist a driver, the monitoring result usually needs to be generated continuously over a continuous period of time. During that period, multiple frames of images are acquired, but the multiple frames are discrete relative to the continuous period. To monitor the target object more accurately, tracking and smoothing, for example by interpolation, can also be applied when performing object monitoring on consecutive frames, so as to further improve accuracy.
In addition, since tracking and smoothing allows the distance between the target object and the target vehicle over continuous time to be determined from multiple frames of monitoring images corresponding to discrete times, this approach can also relieve the pressure on the acquisition device to capture multiple frames of monitoring images in rapid succession, and reduce device wear.
Exemplarily, to make the obtained monitoring results less discrete and thus ensure the safety of the target vehicle while driving, a frame of monitoring image would need to be acquired every 0.1 seconds; if a frame were acquired only every 0.5 seconds, a sudden collision might occur within those 0.5 seconds in a high-speed scenario, and safety could not be guaranteed. However, acquiring a frame every 0.1 seconds requires more power from the acquisition device than acquiring a frame every 0.2 seconds; meanwhile, interpolation can be used to determine, within those 0.2 seconds, a relatively accurate predicted monitoring frame at the 0.1-second point, so that safety can still be ensured.
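The interpolation idea above can be sketched as follows: frames captured every 0.2 s, with the intermediate 0.1 s distance estimated by linear interpolation. The helper name is illustrative; the disclosure names interpolation but not a specific scheme.

```python
def interpolate_distance(d_prev, d_next, t_prev, t_next, t):
    """Linearly interpolate the object distance at time t between two
    monitoring frames captured at t_prev and t_next."""
    alpha = (t - t_prev) / (t_next - t_prev)
    return d_prev + alpha * (d_next - d_prev)

# Frames captured at 0.0 s and 0.2 s; predict the distance at 0.1 s
d_mid = interpolate_distance(10.0, 9.0, 0.0, 0.2, 0.1)  # 9.5 m
```

This is what lets the lower 0.2 s capture rate still yield a usable 0.1 s estimate, trading acquisition-device power for a small amount of computation.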
In addition, after the target object is determined, different target objects at different positions affect the target vehicle differently; therefore, a corresponding monitoring result can be determined for each target object by combining the type information and position of the target object with the driving state of the target vehicle.
Specifically, the monitoring result may include, for example, warning information. The driving state of the target vehicle may include, for example, steering information of the target vehicle.
In a specific implementation, when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the following manner may be adopted, for example: determining the level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generating and presenting the warning information of the determined level.
Here, the type information of the target object may include, for example, pedestrian. Since the target object is located within the blind spot of the target vehicle, that is, since the target vehicle may affect driving safety precisely because the target object lies in its blind spot, a monitoring result containing warning information can be generated.
In a possible implementation, when the steering information of the target vehicle indicates a left turn and the position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or when the steering information indicates a right turn and the position indicates that the target object is in the blind spot on the right side, the target vehicle is considered to pose a relatively large threat to the safety of the target object while driving; for example, the target vehicle might collide with a pedestrian while moving. The monitoring result may then include the highest-level monitoring result. Here, the monitoring results may further be divided into multiple levels, for example level 1, level 2, level 3, and level 4; the higher a level ranks, the greater the indicated impact on the driving safety of the target vehicle, and the corresponding warning information likewise indicates a greater impact on driving safety.
Taking the first monitoring result as an example, the first monitoring result corresponds to, for example, level 1, and includes a "beep" sound emitted at a relatively high frequency, or the voice prompt "currently too close to the vehicle, please drive carefully".
In addition, the first monitoring result may be further refined according to the position of the target object. Taking as an example a case where the driving state of the target device is driving to the left and the position of the target object indicates a target object on the left side of the target vehicle: if the first monitoring result indicates that the target device keeps approaching the target object, the frequency at which the "beep" sound is emitted is gradually increased, or warning information with more precise prompts is generated, such as "currently 1 meter from the pedestrian on the left" or "currently 0.5 meters from the pedestrian on the left".
In another possible implementation, when the steering information of the target vehicle indicates a left turn and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle, or when the steering information indicates a right turn and the position indicates that the target object is in the blind spot on the left side, the target vehicle is considered to have a considerable probability of affecting safety while driving; for example, a collision may occur as a pedestrian approaches the target vehicle. The monitoring result may then correspond to level 2, including a "beep" sound emitted at a lower frequency than that of the level 1 monitoring result, or the voice prompt "currently close to a pedestrian, please drive carefully".
In addition, the type information of the target object may also include, for example, vehicle; here, such a vehicle is a vehicle other than the target vehicle.
Similar to the above manner of determining the monitoring result when the type information indicates that the target object is a pedestrian, in a possible implementation, when the steering information of the target vehicle indicates a left turn and the current position of the target object indicates that the target object is in the blind spot on the left side of the target vehicle, or when the steering information indicates a right turn and the current position indicates that the target object is in the blind spot on the right side, the target vehicle is considered to have a relatively large impact on safety while driving; for example, the target vehicle may collide with another vehicle when turning. The monitoring result may then include the monitoring result corresponding to level 3.
In addition, the monitoring result may be further refined according to the driving state. Taking as an example a case where the driving state of the target vehicle indicates a left turn and the monitoring result indicates a target object on the left side of the target vehicle: if the monitoring result indicates that the target vehicle keeps approaching the target object, the frequency at which the "beep" sound is emitted is gradually increased, or warning information with more precise prompts is generated, such as "currently 1 meter from the vehicle on the left" or "currently 0.5 meters from the vehicle on the left".
In another possible implementation, when the steering information of the target vehicle indicates a left turn and the position of the target object indicates that the target object is in the blind spot on the right side of the target vehicle, or when the steering information indicates a right turn and the position indicates that the target object is in the blind spot on the left side, the target object is considered to have a considerable probability of affecting the safety of the target vehicle while driving; for example, another vehicle may collide with the target vehicle while it is moving. The monitoring result may then correspond to level 4, including a "beep" sound emitted at a lower frequency than that of the level 3 monitoring result, or the voice prompt "currently close to a vehicle, please drive carefully".
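The four-level mapping described above can be condensed into a small sketch (level 1 most urgent). The function name and string encodings are illustrative assumptions:

```python
def alert_level(object_type, object_side, turn_direction):
    """Map a blind-zone situation to warning levels 1-4:
    pedestrian on the turning side -> 1, pedestrian on the opposite
    side -> 2, vehicle on the turning side -> 3, vehicle on the
    opposite side -> 4."""
    same_side = (object_side == turn_direction)
    if object_type == "pedestrian":
        return 1 if same_side else 2
    if object_type == "vehicle":
        return 3 if same_side else 4
    raise ValueError("unsupported object type: " + object_type)
```

The returned level would then select the beep frequency or voice prompt to emit, as in the examples above.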
In this way, by generating warning information accurately and quickly, the driver of the target vehicle can be guided to drive more safely.
In addition, the monitoring result may further include, for example, a vehicle control instruction. Correspondingly, the driving state of the target vehicle may include, for example, steering information of the target vehicle.
Here, since the type information and position of the target object can be acquired efficiently and accurately, a monitoring result including a vehicle control instruction can also be generated to control the safe driving of a traveling device or the like. The traveling device is, for example but not limited to, any of the following: an autonomous vehicle, a vehicle equipped with an Advanced Driving Assistance System (ADAS), or a robot.
In a specific implementation, when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the vehicle control instruction can be generated, for example, according to the type information and position of the target object and the steering information of the target vehicle.
Here, when generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle, the vehicle control instruction to be generated can, for example, be determined according to the type information and position of the target object, so as to ensure that the target vehicle avoids colliding with the target object and drives safely.
In this way, the monitoring results are better suited to deployment in intelligent driving devices, improving the safety of the intelligent driving device during automatic driving control, and thus better meeting the needs of the autonomous driving field.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure further provide a blind spot monitoring apparatus corresponding to the blind spot monitoring method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above blind spot monitoring method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.
Referring to FIG. 12, which is a schematic diagram of a blind spot monitoring apparatus provided by an embodiment of the present disclosure, the blind spot monitoring apparatus includes: an acquisition module 121, a detection module 122, a determining module 123, and a generating module 124; wherein,
the acquisition module 121 is configured to acquire the current frame monitoring image captured by the acquisition device on the target vehicle;
the detection module 122 is configured to perform object detection on the current frame monitoring image to obtain type information and positions of the objects included in the image;
the determining module 123 is configured to determine, according to the position of the object and the blind spot of the target vehicle, the target object located in the blind spot of the target vehicle; and
the generating module 124 is configured to generate a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
In a possible implementation, the monitoring result includes warning information, and the driving state of the target vehicle includes steering information of the target vehicle; when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the generating module 124 is configured to: determine the level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generate and present the warning information of the determined level.
In a possible implementation, the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle; when generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle, the generating module 124 is configured to: generate the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle; and the blind spot monitoring apparatus further includes a control module 125, configured to: control the travel of the target vehicle based on the vehicle control instruction.
In a possible implementation, when determining the target object located in the blind spot of the target vehicle according to the position of the object and the blind spot of the target vehicle, the determining module 123 is configured to: determine current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image; and determine, according to the current first distance information, the target object located in the blind spot of the target vehicle.
In a possible implementation, when determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image, the determining module 123 is configured to: determine, based on the current frame monitoring image, to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image; and adjust the to-be-adjusted distance information based on scale change information of the object between its scales in two adjacent frames among the multiple frames of historical frame monitoring images captured by the acquisition device, and on the historical first distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
In a possible implementation, when adjusting the to-be-adjusted distance information to obtain the current first distance information between the target vehicle and the object, the determining module 123 is configured to: adjust the to-be-adjusted distance information until the error amount of the scale change information is minimized, to obtain the adjusted distance information, where the error amount is determined based on the to-be-adjusted distance information, the scale change information, and the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images; and determine the current first distance information based on the adjusted distance information.
In a possible implementation, before determining the current first distance information based on the adjusted distance information, the determining module 123 is further configured to: determine current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device; and when determining the current first distance information based on the adjusted distance information, the determining module 123 is configured to: determine distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information; and adjust the adjusted distance information based on the distance offset information to obtain the current first distance information.
In a possible implementation, when determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information, the determining module 123 is configured to: determine, based on the current second distance information and the historical second distance information corresponding to each frame of the multiple frames of historical frame monitoring images, the first linear fitting coefficients of a first fitting curve fitted from the historical second distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the current second distance information; determine, based on the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the adjusted distance information, the second linear fitting coefficients of a second fitting curve fitted from the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the adjusted distance information; and determine the distance offset information for the adjusted distance information based on the first linear fitting coefficients and the second linear fitting coefficients.
In a possible implementation, when determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device, the determining module 123 is configured to: obtain, based on position information of a detection frame of the object in the current frame monitoring image, pixel coordinate values of a set corner point of the detection frame; and determine the current second distance information based on the pixel coordinate values of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used in determining the calibration parameters of the acquisition device.
In a possible implementation, the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device. When determining the current second distance information based on the pixel coordinate values of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used in determining those calibration parameters, the determining module 123 is configured to: determine a first pixel height value of the acquisition device relative to the ground based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate values of the set corner point of the detection frame; determine a second pixel height value, relative to the ground, of the object in the current frame monitoring image based on the pixel coordinate values of the set corner point; determine a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; and determine the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
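The geometry described above follows the standard pinhole camera model: the lane-line vanishing point lies on the horizon, so the pixel distance from its row to the detection frame's ground-contact row gives the camera's height expressed in pixels at the object's depth, and the object's metric height and distance follow by similar triangles. A minimal sketch under these assumptions (the function and variable names are illustrative, not taken from the patent; image rows are assumed to grow downward):

```python
def estimate_distance(corner_v_bottom, corner_v_top, vanish_v,
                      camera_height_m, focal_length_px):
    """Estimate object distance from one monocular frame.

    corner_v_bottom / corner_v_top: row (v) pixel coordinates of the
    detection frame's bottom and top edges; vanish_v: row of the
    lane-line vanishing point (horizon); camera_height_m: first height
    value of the camera above the ground; focal_length_px: focal length.
    """
    # First pixel height value: camera height in pixels at the object's
    # depth (horizon row down to the object's ground-contact row).
    h_cam_px = corner_v_bottom - vanish_v
    # Second pixel height value: the object's own height in pixels.
    h_obj_px = corner_v_bottom - corner_v_top
    # Similar triangles: second height value (metric object height).
    obj_height_m = camera_height_m * h_obj_px / h_cam_px
    # Current second distance from object height, focal length and the
    # object's pixel height.
    return focal_length_px * obj_height_m / h_obj_px
```

Note that algebraically this reduces to `focal_length_px * camera_height_m / h_cam_px`; the intermediate object height is kept to mirror the steps recited in the implementation.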
In a possible implementation, when determining the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image, the determining module 123 is configured to: obtain scale change information between the scale of the object in the current frame monitoring image and its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and determine the to-be-adjusted distance information based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
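Under the pinhole model an object's image scale is inversely proportional to its distance, so a natural candidate for the to-be-adjusted distance combines the adjacent frame's historical first distance with the measured scale change. The inverse-proportionality mapping below is our assumption; the patent leaves the exact relation open:

```python
def distance_to_adjust(prev_first_distance, scale_change_ratio):
    """Candidate to-be-adjusted distance for the current frame.

    prev_first_distance: historical first distance for the adjacent
    historical frame; scale_change_ratio: object scale in the current
    frame divided by its scale in that adjacent frame. Assumes the
    image scale varies as 1 / distance, so a growing object (ratio > 1)
    is getting closer.
    """
    return prev_first_distance / scale_change_ratio
```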
In a possible implementation, the determining module 123 determines the scale change information between the scales of the object in two adjacent frames of monitoring images as follows: extracting first position information of a plurality of feature points contained in the object in the earlier of the two adjacent frames of monitoring images, and second position information of those feature points in the later frame; and determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
In a possible implementation, when determining the scale change information between the scales of the object in two adjacent frames of monitoring images based on the first position information and the second position information, the determining module 123 is configured to: determine, based on the first position information, a first scale value, in the earlier frame of monitoring image, of a target line segment formed by the plurality of feature points contained in the object; determine, based on the second position information, a second scale value of the target line segment in the later frame of monitoring image; and determine the scale change information between the scales of the object in the two adjacent frames of monitoring images based on the first scale value and the second scale value.
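One way to realize the computation described above is to track a pair of feature points on the object, measure the length of the line segment they form in each of the two adjacent frames, and take the length ratio as the scale change. The sketch below makes two assumptions the patent does not fix: the segment "scale value" is its Euclidean pixel length, and the scale change is expressed as later-frame length over earlier-frame length:

```python
import math

def scale_change(first_positions, second_positions):
    """Scale change of an object between two adjacent monitoring frames.

    first_positions / second_positions: the (x, y) pixel coordinates of
    the same two feature points in the earlier frame and in the later
    frame, respectively.
    """
    def segment_length(p, q):
        # Euclidean length of the target line segment between points p, q.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # First scale value: segment length in the earlier frame.
    first_scale = segment_length(*first_positions)
    # Second scale value: the same segment's length in the later frame.
    second_scale = segment_length(*second_positions)
    # Ratio > 1 means the object appears larger in the later frame.
    return second_scale / first_scale
```

In practice the feature points would come from a tracker (e.g. optical flow), and the ratio could be averaged over many point pairs for robustness.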
Corresponding to the blind spot monitoring method in FIG. 1, an embodiment of the present disclosure further provides an electronic device 1300. FIG. 13 shows a schematic structural diagram of the electronic device 1300, which includes:
a processor 10, a memory 20, and a bus 30. The memory 20 is used to store execution instructions and includes an internal memory 210 and an external memory 220. The internal memory 210 temporarily stores operation data for the processor 10 as well as data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 through the internal memory 210. When the electronic device 1300 runs, the processor 10 and the memory 20 communicate through the bus 30, causing the processor 10 to execute the following instructions: acquire a current frame monitoring image collected by an acquisition device on a target vehicle; perform object detection on the current frame monitoring image to obtain type information and positions of objects included in the current frame monitoring image; determine, according to the positions of the objects and the blind spot of the target vehicle, a target object located in the blind spot of the target vehicle; and generate a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
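The instruction sequence above amounts to a four-step pipeline: detect objects in the current frame, keep those whose positions fall inside the vehicle's blind zone, and turn their type information and the vehicle's driving state into a monitoring result. A minimal sketch of that control flow, assuming detection has already produced (type, position) pairs; the rectangular blind-zone test and the two alert levels are illustrative placeholders, not the patent's concrete scheme:

```python
def monitor_blind_zone(detections, blind_zone, turn_signal_on):
    """detections: list of (type_info, (x, y)) pairs from object detection;
    blind_zone: (x_min, x_max, y_min, y_max) region in vehicle coordinates;
    turn_signal_on: whether the target vehicle is currently steering."""
    x_min, x_max, y_min, y_max = blind_zone
    # Keep only objects whose position lies inside the blind zone.
    targets = [(kind, pos) for kind, pos in detections
               if x_min <= pos[0] <= x_max and y_min <= pos[1] <= y_max]
    # Generate a monitoring result from type info and driving state:
    # a turn toward an occupied blind zone warrants a higher alert level.
    results = []
    for kind, pos in targets:
        level = "high" if turn_signal_on else "low"
        results.append({"type": kind, "position": pos, "alert": level})
    return results
```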
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the blind spot monitoring method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the blind spot monitoring method described in the above method embodiments. For details, refer to the above method embodiments, which are not repeated here.
The above computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided by the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division; in actual implementation there may be other ways of division. As another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure in essence, the parts that contribute to the prior art, or parts of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

  1. A blind spot monitoring method, comprising:
    acquiring a current frame monitoring image collected by an acquisition device on a target vehicle;
    performing object detection on the current frame monitoring image to obtain type information and a position of an object included in the current frame monitoring image;
    determining, according to the position of the object and a blind spot of the target vehicle, a target object located in the blind spot of the target vehicle; and
    generating a monitoring result according to the type information and position of the target object and a driving state of the target vehicle.
  2. The blind spot monitoring method according to claim 1, wherein the monitoring result comprises warning information, and the driving state of the target vehicle comprises steering information of the target vehicle;
    the generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle comprises:
    determining a level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and
    generating warning information of the determined level and issuing a prompt.
  3. The blind spot monitoring method according to claim 1, wherein the monitoring result comprises a vehicle control instruction, and the driving state of the target vehicle comprises steering information of the target vehicle;
    the generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle comprises:
    generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle; and
    the blind spot monitoring method further comprises: controlling the target vehicle to drive based on the vehicle control instruction.
  4. The blind spot monitoring method according to any one of claims 1-3, wherein the determining, according to the position of the object and the blind spot of the target vehicle, a target object located in the blind spot of the target vehicle comprises:
    determining current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image; and
    determining, according to the current first distance information, the target object located in the blind spot of the target vehicle.
  5. The blind spot monitoring method according to any one of claims 1-4, wherein the determining current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image comprises:
    determining, based on the current frame monitoring image, to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image; and
    adjusting the to-be-adjusted distance information based on scale change information between scales of the object in two adjacent frames of multiple frames of historical frame monitoring images collected by the acquisition device and historical first distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, to obtain the current first distance information between the target vehicle and the object.
  6. The blind spot monitoring method according to claim 5, wherein the adjusting the to-be-adjusted distance information to obtain the current first distance information between the target vehicle and the object comprises:
    adjusting the to-be-adjusted distance information until an error amount of the scale change information is minimized, to obtain adjusted distance information, wherein the error amount is determined based on the to-be-adjusted distance information, the scale change information, and the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images; and
    determining the current first distance information based on the adjusted distance information.
  7. The blind spot monitoring method according to claim 6, wherein before the determining the current first distance information based on the adjusted distance information, the blind spot monitoring method further comprises:
    determining current second distance information based on the position of the object in the current frame monitoring image and calibration parameters of the acquisition device;
    and the determining the current first distance information based on the adjusted distance information comprises:
    determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that historical frame monitoring image, and the adjusted distance information; and
    adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
  8. The blind spot monitoring method according to claim 7, wherein the determining distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of the multiple frames of historical frame monitoring images, the historical first distance information corresponding to that historical frame monitoring image, and the adjusted distance information comprises:
    determining, based on the current second distance information and the historical second distance information corresponding to each frame of the multiple frames of historical frame monitoring images, a first linear fitting coefficient of a first fitting curve fitted from the historical second distance information corresponding to each frame of the historical frame monitoring images and the current second distance information;
    determining, based on the historical first distance information corresponding to each frame of the multiple frames of historical frame monitoring images and the adjusted distance information, a second linear fitting coefficient of a second fitting curve fitted from the historical first distance information corresponding to each frame of the historical frame monitoring images and the adjusted distance information; and
    determining the distance offset information for the adjusted distance information based on the first linear fitting coefficient and the second linear fitting coefficient.
  9. The blind spot monitoring method according to claim 7 or 8, wherein the determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device comprises:
    obtaining, based on position information of a detection frame of the object in the current frame monitoring image, pixel coordinate values of a set corner point of the detection frame; and
    determining the current second distance information based on the pixel coordinate values of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of a lane-line vanishing point used in determining the calibration parameters of the acquisition device.
  10. The blind spot monitoring method according to claim 9, wherein the calibration parameters of the acquisition device comprise a first height value of the acquisition device relative to the ground and a focal length of the acquisition device;
    and the determining the current second distance information based on the pixel coordinate values of the set corner point, the calibration parameters of the acquisition device, and the pixel coordinate value of the lane-line vanishing point used in determining the calibration parameters of the acquisition device comprises:
    determining a first pixel height value of the acquisition device relative to the ground based on the pixel coordinate value of the lane-line vanishing point and the pixel coordinate values of the set corner point of the detection frame;
    determining a second pixel height value, relative to the ground, of the object in the current frame monitoring image based on the pixel coordinate values of the set corner point;
    determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; and
    determining the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
  11. The blind spot monitoring method according to any one of claims 5 to 10, wherein the determining, based on the current frame monitoring image, the to-be-adjusted distance information between the target vehicle and the object in the current frame monitoring image comprises:
    obtaining scale change information between the scale of the object in the current frame monitoring image and its scale in a historical frame monitoring image adjacent to the current frame monitoring image; and
    determining the to-be-adjusted distance information based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
  12. The blind spot monitoring method according to any one of claims 5 to 11, wherein the scale change information between the scales of the object in two adjacent frames of monitoring images is determined as follows:
    extracting first position information of a plurality of feature points contained in the object in the earlier of the two adjacent frames of monitoring images, and second position information of the feature points in the later frame; and
    determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images.
  13. The blind spot monitoring method according to claim 12, wherein the determining, based on the first position information and the second position information, the scale change information between the scales of the object in the two adjacent frames of monitoring images comprises:
    determining, based on the first position information, a first scale value, in the earlier frame of monitoring image, of a target line segment formed by the plurality of feature points contained in the object;
    determining, based on the second position information, a second scale value of the target line segment in the later frame of monitoring image; and
    determining the scale change information between the scales of the object in the two adjacent frames of monitoring images based on the first scale value and the second scale value.
  14. A blind spot monitoring apparatus, comprising:
    an acquisition module configured to acquire a current frame monitoring image collected by an acquisition device on a target vehicle;
    a detection module configured to perform object detection on the current frame monitoring image to obtain type information and a position of an object included in the image;
    a determining module configured to determine, according to the position of the object and a blind spot of the target vehicle, a target object located in the blind spot of the target vehicle; and
    a generating module configured to generate a monitoring result according to the type information and position of the target object and a driving state of the target vehicle.
  15. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus; and when the machine-readable instructions are executed by the processor, the steps of the blind spot monitoring method according to any one of claims 1 to 13 are performed.
  16. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is run by a processor, the steps of the blind spot monitoring method according to any one of claims 1 to 13 are performed.
  17. A computer program product comprising a computer program, wherein when the computer program is run by a processor, the steps of the blind spot monitoring method according to any one of claims 1 to 13 are performed.
PCT/CN2022/084399 2021-04-28 2022-03-31 Blind area monitoring method and apparatus, electronic device, and storage medium WO2022228023A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110467776.4A CN113103957B (en) 2021-04-28 2021-04-28 Blind area monitoring method and device, electronic equipment and storage medium
CN202110467776.4 2021-04-28

Publications (1)

Publication Number Publication Date
WO2022228023A1 true WO2022228023A1 (en) 2022-11-03

Family

ID=76720492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084399 WO2022228023A1 (en) 2021-04-28 2022-03-31 Blind area monitoring method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113103957B (en)
WO (1) WO2022228023A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103957B (en) * 2021-04-28 2023-07-28 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium
CN116184992A (en) * 2021-11-29 2023-05-30 上海商汤临港智能科技有限公司 Vehicle control method, device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2004001658A (en) * 2002-06-03 2004-01-08 Nissan Motor Co Ltd Optical axis deviation detector for in-vehicle camera
CN106524922A (en) * 2016-10-28 2017-03-22 深圳地平线机器人科技有限公司 Distance measurement calibration method, device and electronic equipment
CN110386065A (en) * 2018-04-20 2019-10-29 比亚迪股份有限公司 Monitoring method, device, computer equipment and the storage medium of vehicle blind zone
CN111942282A (en) * 2019-05-17 2020-11-17 比亚迪股份有限公司 Vehicle and driving blind area early warning method, device and system thereof and storage medium
CN113103957A (en) * 2021-04-28 2021-07-13 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US8164628B2 (en) * 2006-01-04 2012-04-24 Mobileye Technologies Ltd. Estimating distance to an object using a sequence of images recorded by a monocular camera
CN105279770A (en) * 2015-10-21 2016-01-27 浪潮(北京)电子信息产业有限公司 Target tracking control method and device
CN108596116B (en) * 2018-04-27 2021-11-05 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium
CN109311425A (en) * 2018-08-23 2019-02-05 深圳市锐明技术股份有限公司 A kind of alarming method by monitoring of vehicle blind zone, device, equipment and storage medium
WO2020151560A1 (en) * 2019-01-24 2020-07-30 杭州海康汽车技术有限公司 Vehicle blind spot detection method, apparatus and system
CN111998780B (en) * 2019-05-27 2022-07-01 杭州海康威视数字技术股份有限公司 Target ranging method, device and system
CN111829484B (en) * 2020-06-03 2022-05-03 江西江铃集团新能源汽车有限公司 Target distance measuring and calculating method based on vision
CN112489136B (en) * 2020-11-30 2024-04-16 商汤集团有限公司 Calibration method, position determination device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN113103957B (en) 2023-07-28
CN113103957A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
US11318928B2 (en) Vehicular automated parking system
WO2022228023A1 (en) Blind area monitoring method and apparatus, electronic device, and storage medium
JP6833630B2 (en) Object detector, object detection method and program
EP4141737A1 (en) Target detection method and device
US11544936B2 (en) In-path obstacle detection and avoidance system
EP3436879A1 (en) An autonomous vehicle with improved visual detection ability
CN111630460A (en) Path planning for autonomous mobile devices
EP2960858B1 (en) Sensor system for determining distance information based on stereoscopic images
US11049275B2 (en) Method of predicting depth values of lines, method of outputting three-dimensional (3D) lines, and apparatus thereof
EP3349143A1 (en) Nformation processing device, information processing method, and computer-readable medium
CN111469127B (en) Cost map updating method and device, robot and storage medium
CN110667474B (en) General obstacle detection method and device and automatic driving system
US11482007B2 (en) Event-based vehicle pose estimation using monochromatic imaging
WO2022227708A1 (en) Ranging method and apparatus, electronic device, and storage medium
CN116601667A (en) System and method for 3D object detection and tracking with monocular surveillance camera
JP2008310440A (en) Pedestrian detection device
US11869253B2 (en) Vehicle environment modeling with a camera
CN110864670A (en) Method and system for acquiring position of target obstacle
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
JP4847303B2 (en) Obstacle detection method, obstacle detection program, and obstacle detection apparatus
Hartmann et al. Night time road curvature estimation based on Convolutional Neural Networks
KR102497488B1 (en) Image recognition apparatus for adjusting recognition range according to driving speed of autonomous vehicle
US20240144701A1 (en) Determining lanes from drivable area
US20220230535A1 (en) Electronic device and method for navigating pedestrian
CN117885734A (en) Lane offset distance determining method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE