CN113103957B - Blind area monitoring method and device, electronic equipment and storage medium - Google Patents

Blind area monitoring method and device, electronic equipment and storage medium

Info

Publication number
CN113103957B
CN113103957B
Authority
CN
China
Prior art keywords: frame, distance information, information, monitoring image, determining
Prior art date
Legal status: Active
Application number: CN202110467776.4A
Other languages: Chinese (zh)
Other versions: CN113103957A (en)
Inventors: 罗铨, 李弘扬, 蒋沁宏
Current Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee: Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202110467776.4A
Publication of CN113103957A
Priority to PCT/CN2022/084399 (WO2022228023A1)
Application granted
Publication of CN113103957B


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The disclosure provides a blind area monitoring method and device, an electronic device, and a storage medium. The blind area monitoring method includes: acquiring a current frame monitoring image captured by an acquisition device on a target vehicle; performing object detection on the current frame monitoring image to obtain the type information and position of each object included in the current frame monitoring image; determining a target object located in the field-of-view blind area of the target vehicle according to the position of the object and the field-of-view blind area of the target vehicle; and generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.

Description

Blind area monitoring method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular to a blind area monitoring method and device, an electronic device, and a storage medium.
Background
While a vehicle or robot is traveling, blind areas readily arise because its field of view is limited. Because of these blind areas, misjudgment and misoperation by the driver or the autonomous vehicle can easily occur, reducing driving safety.
Disclosure of Invention
The embodiment of the disclosure at least provides a blind area monitoring method, a blind area monitoring device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a blind area monitoring method, including: acquiring a current frame monitoring image captured by an acquisition device on a target vehicle; performing object detection on the current frame monitoring image to obtain the type information and position of each object included in the current frame monitoring image; determining a target object located in the field-of-view blind area of the target vehicle according to the position of the object and the field-of-view blind area of the target vehicle; and generating a monitoring result according to the type information and position of the target object and the driving state of the target vehicle.
According to the embodiments of the disclosure, a current frame monitoring image captured by an acquisition device on a target vehicle is acquired; object detection is performed on the current frame monitoring image to determine the type information and position of each object it contains; a target object located in the field-of-view blind area of the target vehicle is then identified according to the position of the object and the field-of-view blind area; and a monitoring result is generated according to the type information and position of the target object and the driving state of the target vehicle. In this way, different monitoring results can be generated for different types of target objects, improving both driving safety and blind area monitoring performance.
In one possible implementation, the monitoring result includes warning information, and the driving state of the target vehicle includes steering information of the target vehicle. Generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: determining the level of the warning information according to the type information and position of the target object and the steering information of the target vehicle; and generating and presenting warning information of the determined level.
In one possible implementation, the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle. Generating the monitoring result according to the type information and position of the target object and the driving state of the target vehicle includes: generating the vehicle control instruction according to the type information and position of the target object and the steering information of the target vehicle. The blind area monitoring method further includes: controlling the target vehicle to travel based on the vehicle control instruction.
Therefore, a more targeted and accurate monitoring result can be generated according to the type information and the position of the target object and the driving state of the target vehicle.
In one possible embodiment, determining the target object located in the blind field of view of the target vehicle according to the position of the object and the blind field of view of the target vehicle includes: determining current first distance information of the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image; and determining the target object positioned in the blind area of the visual field of the target vehicle according to the current first distance information.
Thus, the target object located in the blind area of the field of view of the target vehicle can be accurately detected from all the objects included in the current frame monitoring image.
In one possible implementation manner, the determining the current first distance information between the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image includes: determining distance information to be adjusted between the target vehicle and an object in the current frame monitoring image based on the current frame monitoring image; and adjusting the distance information to be adjusted based on the scale change information between scales in two adjacent frames of multi-frame historical frame monitoring images acquired by the acquisition equipment and the historical first distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring images to obtain the current first distance information between the target vehicle and the object.
In a possible implementation manner, the adjusting the distance information to be adjusted to obtain current first distance information between the target vehicle and the object includes: adjusting the distance information to be adjusted until the error amount of the scale change information is minimum, and obtaining adjusted distance information; the error amount is determined based on the distance information to be adjusted, the scale change information and historical first distance information corresponding to each frame of historical frame monitoring image in the multi-frame historical frame monitoring image; and determining the current first distance information based on the adjusted distance information.
In the embodiment of the disclosure, by continuously optimizing the scale change information of the object between the scale in the current frame monitoring image and the scale in the history frame monitoring image adjacent to the current frame monitoring image, the error of the obtained scale change information of the object between the scale in the current frame monitoring image and the scale in the history frame monitoring image adjacent to the current frame monitoring image can be reduced, so that the stability of the adjusted distance information is improved.
In one possible implementation manner, before the determining the current first distance information based on the adjusted distance information, the blind area monitoring method further includes: performing target detection on the current frame monitoring image, and determining the position information of a detection frame of the object contained in the current frame monitoring image; determining current second distance information based on the position information of the detection frame and calibration parameters of the acquisition equipment; the determining the current first distance information based on the adjusted distance information includes: determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring image, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information; and adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
In the embodiment of the disclosure, after the distance offset information is obtained, the adjusted distance information can be further adjusted, so that the distance information with higher current accuracy of the target vehicle and the object is obtained.
In one possible implementation manner, the determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each frame of the multi-frame historical frame monitoring image, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information includes: determining a first linear fitting coefficient of a first fitting curve which is formed by fitting the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the current second distance information based on the current second distance information and the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image; determining a second linear fitting coefficient of a second fitting curve which is formed by fitting the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information based on the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information; distance offset information for the adjusted distance information is determined based on the first linear fit coefficient and the second linear fit coefficient.
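As an illustration of the two fits just described, the sketch below uses degree-1 least-squares fits over the frame index. Note the assumptions: that "linear fitting coefficient" means the slope and intercept of such a fit, and that the offset is taken as the gap between the two trend lines at the current frame; neither detail is confirmed by the source.

```python
# A minimal sketch of the first/second fitting curves, under the assumptions
# stated above. hist_second/hist_first are per-frame historical distances;
# cur_second and adjusted are the current-frame values appended to each series.
import numpy as np

def distance_offset(hist_second, cur_second, hist_first, adjusted):
    t = np.arange(len(hist_second) + 1)
    # First fitting curve: historical second distances plus current second distance.
    k1, b1 = np.polyfit(t, np.append(hist_second, cur_second), deg=1)
    # Second fitting curve: historical first distances plus the adjusted distance.
    k2, b2 = np.polyfit(t, np.append(hist_first, adjusted), deg=1)
    # Assumed offset rule: gap between the two trend lines at the current frame.
    return (k1 * t[-1] + b1) - (k2 * t[-1] + b2)
```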
In a possible implementation manner, the determining the current second distance information based on the position information of the detection frame and the calibration parameter of the acquisition device includes: acquiring pixel coordinate values of set corner points in the detection frame based on the position information of the detection frame; and determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition equipment and the pixel coordinate values of the lane line vanishing points used in determining the calibration parameters of the acquisition equipment.
In one possible embodiment, the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; the determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the collecting device, and the pixel coordinate values of the lane line vanishing points used in determining the calibration parameters of the collecting device includes: determining a first pixel height value of the acquisition equipment relative to the ground based on the pixel coordinate value of the lane line vanishing point and the pixel coordinate value of the set corner point in the detection frame; determining a second pixel height value of the object in the current frame monitoring image relative to the ground based on the pixel coordinate values of the set corner points; determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; the current second distance information is determined based on the second height value, the focal length of the acquisition device, and the second pixel height value.
In the embodiment of the disclosure, under the condition that the complete detection frame of the object in the current frame monitoring image can be detected, the actual height value of the object can be obtained rapidly and accurately by introducing the pixel coordinate value of the lane line vanishing point and the calibration parameter of the acquisition equipment, and the current second distance information of the target vehicle and the object can be further determined rapidly and accurately.
In a possible implementation manner, the determining, based on the current frame monitoring image, distance information to be adjusted between the target vehicle and the object in the current frame monitoring image includes: acquiring scale change information between a scale of the object in the current frame monitoring image and a scale in a historical frame monitoring image adjacent to the current frame monitoring image; and determining the distance information to be adjusted based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
According to the embodiment of the disclosure, the distance information to be adjusted can be accurately obtained through the historical first distance information with higher accuracy corresponding to the historical frame monitoring image adjacent to the current frame monitoring image and the scale change information of the object between the current frame monitoring image and the scale in the historical frame monitoring image adjacent to the current frame monitoring image, so that the adjustment speed can be improved when the current first distance information is determined based on the distance information to be adjusted in the later period.
In one possible embodiment, the scale change information of the object between scales in two adjacent frames of monitoring images is determined in the following manner: respectively extracting first position information of a plurality of feature points contained in the object in a previous frame of monitoring image and second position information in a next frame of monitoring image in the two adjacent frames of monitoring images; and determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first position information and the second position information.
In one possible implementation manner, the determining, based on the first location information and the second location information, scale change information of the object between scales in two adjacent frames of monitoring images includes: determining a first scale value of a target line segment formed by a plurality of feature points contained in the object in the previous frame of monitoring image based on the first position information; determining a second scale value of the target line segment in the monitoring image of the later frame based on the second position information; and determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first scale value and the second scale value.
In the embodiment of the disclosure, the position information of the object in the monitoring image can be more accurately represented by extracting the position information of the plurality of feature points contained in the object in the monitoring image, so that more accurate scale change information is obtained, and more accurate current first distance information can be obtained conveniently when the distance information to be adjusted is adjusted based on the scale change information.
In a second aspect, an embodiment of the present disclosure provides a blind area monitoring device, including:
the acquisition module is used for acquiring a current frame monitoring image acquired by acquisition equipment on the target vehicle;
the detection module is used for carrying out object detection on the current frame monitoring image to obtain type information and position of an object included in the image;
the determining module is used for determining a target object positioned in the visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle;
and the generation module is used for generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the blind zone monitoring method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the blind zone monitoring method according to the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 shows a flow chart of a blind zone monitoring method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of determining a blind spot of a field of view provided by an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a particular method of determining current first distance information provided by embodiments of the present disclosure;
FIG. 4 illustrates a flow chart of a method of determining scale change information provided by an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart of a method for determining distance information to be adjusted according to an embodiment of the present disclosure;
FIG. 6 illustrates a flow chart of a method of determining current first distance information provided by an embodiment of the present disclosure;
FIG. 7 illustrates a flow chart of a method of determining current second distance information provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating a positional relationship among a target device, an acquisition apparatus, and a target object according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a detection frame of a target object according to an embodiment of the disclosure;
FIG. 10 illustrates a schematic diagram of one embodiment of the present disclosure for determining current second distance information;
FIG. 11 illustrates a schematic view of a scenario for determining current second distance information provided by an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a blind area monitoring device according to an embodiment of the disclosure;
Fig. 13 shows a schematic diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Research has found that when radar is used to monitor target objects in a blind area, the point cloud obtained by the radar during scanning covers the entire blind area, so that besides vehicles, pedestrians and signboards, other objects of no concern are also monitored. With this kind of blind area monitoring, an alarm is generated as soon as any object is detected in the vehicle's field-of-view blind area by the radar. In practice, however, not every object located in the blind area affects the driving safety of the vehicle, which results in many invalid alarms; the current blind area monitoring method therefore suffers from poor monitoring performance.
Based on the above research, the embodiments of the disclosure provide a blind area monitoring method: a current frame monitoring image captured by an acquisition device on a target vehicle is acquired; object detection is performed on the current frame monitoring image to determine the type information and position of each object it contains; a target object located in the field-of-view blind area of the target vehicle is determined according to the position of the object and the field-of-view blind area; and a monitoring result is then generated according to the type information and position of the target object and the driving state of the target vehicle. In this way, different monitoring results can be generated for different types of target objects, which improves blind area monitoring performance and thereby driving safety.
To facilitate understanding of the present embodiments, the blind area monitoring method disclosed in the embodiments of the present disclosure is first described in detail. The execution body of the blind area monitoring method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a computing device, a vehicle-mounted device, or the like. In some possible implementations, the blind area monitoring method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a blind area monitoring method according to an embodiment of the disclosure is shown, where the blind area monitoring method includes the following steps S101 to S104:
s101, acquiring a current frame monitoring image acquired by acquisition equipment on a target vehicle;
s102, object detection is carried out on a current frame monitoring image, and type information and position of an object included in the current frame monitoring image are obtained;
s103, determining a target object positioned in the visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle;
s104, generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle.
With respect to S101, the target vehicle differs across different scenarios.
For example, in a scenario where a driver drives a vehicle, the target vehicle may be the vehicle driven by the driver; in an autonomous driving scenario, the target vehicle may be an autonomous vehicle; in a warehouse transport scenario, the target vehicle may be a transport robot. The embodiments of the present disclosure are described below taking a vehicle as the target vehicle.
The target vehicle may further be provided with an acquisition device, which may be a monocular camera mounted on the target vehicle for capturing images while the target vehicle travels. For example, if the target area includes the field-of-view blind area of the vehicle, the acquisition device may be mounted on a pillar of the vehicle with its lens facing the blind area.
The field-of-view blind areas corresponding to different target vehicles may differ with vehicle type. The embodiments of the disclosure determine the field-of-view blind area with reference to the national standard; see fig. 2 for a schematic diagram of determining the blind area. In fig. 2, a vehicle 1 is equipped with an acquisition device 2, and the target area captured by the acquisition device 2 includes a field-of-view blind area comprising the positions indicated by 3 and 4.
Specifically, when acquiring the current frame monitoring image acquired by the acquisition device on the target vehicle, for example, the following manner may be adopted: acquiring a monitoring video obtained by image acquisition of a target area by acquisition equipment on a target vehicle; determining a current frame monitoring image from the monitoring video; wherein the target area includes: and the area is positioned in the shooting visual field range of the acquisition equipment.
When shooting a target area by using the acquisition equipment, the shooting direction of the target area can be preset; after the acquisition equipment is mounted on the target vehicle, the shot target area can be determined to be the area within the shooting visual field range of the acquisition equipment. In the running or stopping process of the vehicle, the acquisition equipment can acquire images of the target area, acquire a monitoring video and determine a current frame of monitoring image from the monitoring video.
When determining the current frame monitoring image from the monitoring video, any method of extracting video frame images from the monitoring video may be used; for example, the captured frame closest to the current time may be taken as the current frame monitoring image.
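As an illustration of grabbing the most recent frame, the sketch below uses OpenCV; the library choice and device index are illustrative assumptions, since the source does not name a specific API.

```python
# A minimal sketch of taking the frame closest to the current time as the
# current frame monitoring image, using OpenCV (an assumed library choice).
import cv2

cap = cv2.VideoCapture(0)        # the acquisition device on the target vehicle
ok, current_frame = cap.read()   # for a live camera, read() returns the newest frame
if not ok:
    raise RuntimeError("failed to read a frame from the acquisition device")
```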
For the above S102, after the current frame monitoring image is acquired by using the above S101, object detection may also be performed on the current frame monitoring image to obtain type information and a position of an object included in the current frame monitoring image.
In a specific implementation, when performing object detection on the current frame monitoring image to determine the type information of the objects it contains, a pre-trained object detection neural network may be used; for example, at least one of the following object detection algorithms may be employed: Convolutional Neural Networks (CNN), Region-based CNN (R-CNN), Fast R-CNN, and Faster R-CNN.
When the above object detection algorithms are applied to the current frame monitoring image, the detectable objects include, for example: other driving vehicles, pedestrians, road facilities, road obstacles, and the like.
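As an illustration of the Faster R-CNN option named above, the sketch below runs a pre-trained detector from torchvision; the library and model choice are assumptions for illustration, not details from the source.

```python
# A minimal sketch of object detection on the current frame monitoring image
# with a pre-trained Faster R-CNN from torchvision (an assumed implementation).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

with torch.no_grad():
    # current_frame: an HxWx3 RGB array (convert from BGR first if it came from OpenCV).
    out = model([to_tensor(current_frame)])[0]

# labels give each object's type information; boxes give its position in the image.
boxes, labels, scores = out["boxes"], out["labels"], out["scores"]
```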
When the object detection is performed on the current frame monitoring image, the position of the object included in the image can also be obtained. By detecting the position of the object in the image, the actual position of the object during the actual running of the target vehicle can be further determined.
With regard to S103 described above, using the position of the object and the blind view area of the target vehicle, the target object located in the blind view area of the target vehicle may be determined in the following manner: determining current first distance information of the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image; and determining the target object positioned in the visual field blind area of the target vehicle according to the current first distance information.
At present, when distance is determined from images captured by a monocular camera mounted on a smart vehicle, problems such as road bumps or occlusion by obstacles arise as road conditions change while the vehicle is driving. Under these conditions, measuring distance from the detection frame of an object in the current frame monitoring image may fail to yield an accurate distance to the object: road bumps, for example, prevent the size of the object's detection frame in the monitoring image from staying stable, so distances between the smart vehicle and the object obtained by continuously measuring from the detection frame lack stability over time.
In order to detect the distance between the target vehicle and the object as accurately and stably as possible using images acquired by a monocular camera, the embodiments of the present disclosure further propose a distance detection scheme.
referring to fig. 3, a flowchart of a specific method for determining current first distance information according to an embodiment of the present disclosure is shown, where the method includes the following steps S301 to S302:
s301, determining distance information to be adjusted between the target vehicle and an object in the current frame monitoring image based on the current frame monitoring image.
Illustratively, the object may include, but is not limited to, a vehicle, a pedestrian, a fixed obstacle, etc., and the present disclosure is presented with embodiments taking the object as an example of a vehicle.
The current frame monitoring image provided in the embodiments of the disclosure is not the first monitoring image in which the object is detected. If the current frame monitoring image were the first monitoring image in which the object is detected, the current second distance information between the target vehicle and the object could be determined directly based on the position information of the object in the current frame monitoring image, the parameter information of the acquisition device, and the pixel coordinate values of the vanishing point obtained during calibration, and that current second distance information could be used directly as the current first distance information; the specific process of determining the current second distance information is described in detail later.
In an exemplary embodiment, when the current frame monitoring image is not the first image in which the object is acquired, the current first distance information corresponding to the current frame monitoring image and the historical first distance information corresponding to each historical frame monitoring image all denote distance information obtained after adjustment.
For example, when determining the distance information to be adjusted of the target vehicle and the object based on the current frame monitoring image, the distance information to be adjusted may be determined based on the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image and the scale change information between the scales in the current frame monitoring image and the historical frame monitoring image adjacent to the current frame monitoring image, and then the distance information to be adjusted may be adjusted.
S302, adjusting the distance information to be adjusted based on scale change information between scales in two adjacent frames of monitoring images in multi-frame historical frame monitoring images acquired by an acquisition device and historical first distance information between the object and a target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring images to obtain current first distance information between the target vehicle and the object.
Illustratively, the scale change information of the object in two adjacent frames of monitoring images (such as including the monitoring image i and the monitoring image j) in the multi-frame historical frame monitoring image acquired by the acquisition device includes a ratio of a scale of the object in a later frame of monitoring image j to a scale of the object in a previous frame of monitoring image i, and a specific determination process will be described later.
For example, the manner of determining the historical first distance information between the target vehicle and the object corresponding to each frame of the historical frame monitoring image is the same as the manner of determining the current first distance information between the target vehicle and the object, so the process of determining the historical first distance information will not be described in detail.
In the embodiments of the disclosure, when determining the current first distance information, the distance information to be adjusted, obtained from the current monitoring image, can be adjusted according to the scale change information of the object between two adjacent frames among the multiple historical frame monitoring images and the historical first distance information between the target vehicle and the object obtained previously. This keeps the change in distance between the target vehicle and the object across two adjacent monitoring images stable, so that it truly reflects the actual change in distance during driving and improves the temporal stability of the predicted distance between the target vehicle and the object.
In addition, the scale change information of the object between two adjacent frames of monitoring images reflects the change in distance between the target vehicle and the object, and the historical first distance information corresponding to each historical frame monitoring image is the more accurate distance information obtained through adjustment. Therefore, after the distance information to be adjusted is adjusted based on the scale change information between adjacent frames of the multiple historical frame monitoring images and the historical first distance information corresponding to each historical frame monitoring image, more accurate current first distance information can be obtained.
First, as for the above-mentioned scale change information, as shown in fig. 4, the scale change information of an object between scales in two adjacent frames of monitoring images may be determined in the following manner, including the following S401 to S402:
s401, respectively extracting first position information of a plurality of feature points contained in an object in a previous frame of monitoring image and second position information in a next frame of monitoring image in two adjacent frames of monitoring images.
For example, a detection frame representing the position of the object in the monitoring image may be obtained by performing object detection on the monitoring image with a pre-trained object detection model. Within the detection frame, multiple feature points constituting the object may then be extracted; these may be points where pixel values change sharply, such as inflection points and corner points.
S402, determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first position information and the second position information.
A line segment may be formed by connecting any two feature points in the same frame of monitoring image. The scale of a line segment formed by any two feature points in the previous frame of monitoring image can therefore be obtained from the first position information of those two feature points, and the scale of the line segment formed by the same two feature points in the next frame of monitoring image can be obtained from their second position information. In this way, the scales of the line segments on the object in the previous frame of monitoring image and in the next frame of monitoring image can both be obtained.
Further, the scale change information of the object in two adjacent frames of monitoring images can be determined according to the scale of the line segments in the previous frame of monitoring image and the scale of the line segments in the next frame of monitoring image.
Specifically, for S402, when determining scale change information of an object between scales in two adjacent frames of monitoring images based on the first position information and the second position information, the following S4021 to S4023 are included:
s4021, determining, based on the first position information, a first scale value of a target line segment formed by a plurality of feature points included in the object in a previous frame of the monitored image.
S4022, determining a second scale value of the target line segment in the monitoring image of the next frame based on the second position information.
There are n target line segments, where n is greater than or equal to 1 and less than a set threshold. A first scale value corresponding to each target line segment may be obtained based on the first position information of the feature points it contains, and a second scale value corresponding to each target line segment may be obtained based on the second position information of those feature points.
S4023, determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first scale value and the second scale value.
The scale change information corresponding to any target line segment can be represented by the ratio of the second scale value to the first scale value of that line segment. The scale change information of the object between the two adjacent frames of monitoring images can then be determined from the scale change information of multiple target line segments; for example, the average of the scale change information of a set number of target line segments may be taken as the scale change information of the object between the two adjacent frames of monitoring images.
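As an illustration of S4021 to S4023, the sketch below measures per-segment scale values and averages their ratios; it assumes the feature points have already been matched one-to-one between the two frames, which the source does not spell out.

```python
# A minimal sketch of scale-change estimation from target line segments formed
# by matched feature points (pts_prev, pts_next: aligned (x, y) arrays for the
# same object in the previous and next monitoring images).
import itertools
import numpy as np

def scale_change(pts_prev: np.ndarray, pts_next: np.ndarray) -> float:
    ratios = []
    for a, b in itertools.combinations(range(len(pts_prev)), 2):
        s1 = np.linalg.norm(pts_prev[a] - pts_prev[b])  # first scale value
        s2 = np.linalg.norm(pts_next[a] - pts_next[b])  # second scale value
        if s1 > 0:
            ratios.append(s2 / s1)  # scale change of this target line segment
    # Average over the target line segments, as suggested in the text.
    return float(np.mean(ratios))
```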
Compared with representing the scale of the object by the position information of two corner points of its detection frame in the monitoring image, for example by the upper-left and lower-right corner points of the detection frame, the embodiments of the disclosure determine the scale change information of the object between two adjacent frames of monitoring images by selecting the position information of multiple feature points in each of the two frames. By extracting the position information of the multiple feature points contained in the object, this approach represents the position of the object in the monitoring image more accurately and therefore yields more accurate scale change information.
In the embodiment of the disclosure, the position information of the object in the monitoring image can be more accurately represented by extracting the position information of the plurality of feature points contained in the object in the monitoring image, so that more accurate scale change information is obtained, and more accurate current first distance information can be obtained conveniently when the distance information to be adjusted is adjusted based on the scale change information.
For S301 described above, when determining the distance information to be adjusted between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image, as shown in fig. 5, the following S501 to S502 may be included:
s501, acquiring scale change information between a scale of an object in a current frame monitoring image and a scale in a history frame monitoring image adjacent to the current frame monitoring image.
Illustratively, the historical frame monitoring image adjacent to the current frame monitoring image refers to a previous frame monitoring image whose acquisition time is before the current frame monitoring image, and the scale change information of the object between the current frame monitoring image and the historical frame monitoring image adjacent to the current frame monitoring image can be represented by the ratio of the scale of the object in the current frame monitoring image to the scale of the object in the historical frame monitoring image adjacent to the current frame monitoring image, and the specific determination process will be described later.
S502, determining distance information to be adjusted based on the scale change information and historical first distance information corresponding to a historical frame monitoring image adjacent to the current frame monitoring image.
Considering that the scale of the object in the captured monitoring images gradually increases as the target vehicle and the object approach each other, that is, the scale of the object in two adjacent frames of monitoring images and the distance between the target vehicle and the object corresponding to those two frames are in a proportional relationship, the distance information to be adjusted can be determined by the following formula (1):
d_0_scale = scale × D_1_final;    (1)

where d_0_scale denotes the distance information to be adjusted; scale denotes the ratio of the scale of the object in the current frame monitoring image to its scale in the historical frame monitoring image adjacent to the current frame monitoring image; and D_1_final denotes the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
In the embodiment of the disclosure, the more accurate distance information to be adjusted can be obtained through the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image and the scale change information of the object between the current frame monitoring image and the scale in the historical frame monitoring image adjacent to the current frame monitoring image, so that the adjustment speed can be improved when the current first distance information is determined based on the distance information to be adjusted in the later period.
For example, the obtained scale change information of the object between the current frame monitoring image and its adjacent historical frame monitoring image may contain errors: if jitter occurs while the current monitoring image is captured, or an object is detected wrongly, the resulting scale change information may change abruptly compared with the scale change information between adjacent frames of the multiple historical frame monitoring images, and the distance information to be adjusted derived from it may likewise change abruptly compared with the adjacent historical first distance information. In this case, the distance information to be adjusted can be adjusted using the scale change information between adjacent frames of the multiple historical frame monitoring images of the object and the historical first distance information corresponding to each of those historical frame monitoring images.
Specifically, when adjusting the distance information to be adjusted to obtain the current first distance information between the target vehicle and the object, as shown in fig. 6, the following S601 to S603 may be included:
s601, adjusting the distance information to be adjusted until the error amount of the scale change information is minimum, and obtaining adjusted distance information; the error amount is determined based on the distance information to be adjusted, the scale change information and historical first distance information corresponding to each frame of historical frame monitoring image in the multi-frame historical frame monitoring images.
Illustratively, the error amount of the scale change information of the object between the current frame monitoring image and the historical frame monitoring image adjacent to it may be computed based on the following formula (2):

where E denotes the error amount of the scale change information of the object between the current frame monitoring image and its adjacent historical frame monitoring image; T denotes the number of monitoring frames containing the object, and T is less than or equal to a preset frame count; t indexes the historical frame monitoring images, denoting the t-th historical frame counting back from the current frame monitoring image (for example, t = 1 denotes the first historical frame before the current frame); L_t denotes the preset weight, in determining the error amount E, of the t-th historical frame monitoring image counting back from the current frame; D_t_final denotes the historical first distance information corresponding to the t-th historical frame monitoring image counting back from the current frame; and scale_i denotes the scale change information between the i-th and (i+1)-th historical frame monitoring images counting back from the current frame monitoring image.
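The body of formula (2) is not reproduced in the text above (it appeared as an image in the original). One reconstruction consistent with the variable definitions, assuming a weighted squared-error form in which each historical first distance is propagated to the current frame through the cumulative scale change, is:

$$E = \sum_{t=1}^{T} L_t \left( d_{0\_scale} - D_{t\_final} \cdot \prod_{i=1}^{t} scale_i \right)^{2}$$

Only the variables, not this exact functional form, are confirmed by the surrounding text.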
Illustratively, the above formula (2) may be optimized in a variety of ways; for example, d_0_scale in formula (2) may be adjusted in a manner including, but not limited to, Newton's method or gradient descent, and the adjusted distance information D_0_scale is obtained when E is minimal.
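A sketch of this adjustment step follows, using the assumed squared-error form of E given above; a generic scalar minimizer stands in for the Newton or gradient-descent step mentioned in the text.

```python
# A minimal sketch of adjusting d_0_scale until the error amount E is minimal,
# under the assumed squared-error form of formula (2).
import numpy as np
from scipy.optimize import minimize_scalar

def adjust_distance(weights, d_hist, scales):
    """weights: L_t; d_hist: D_t_final; scales: scale_i, all for t, i = 1..T."""
    # Distance predicted at the current frame from each historical frame,
    # propagated through the cumulative scale change.
    predicted = np.asarray(d_hist) * np.cumprod(scales)

    def error(d):  # the assumed error amount E(d)
        return float(np.sum(np.asarray(weights) * (d - predicted) ** 2))

    return minimize_scalar(error).x  # adjusted distance information D_0_scale
```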
By the method, the scale change information of the object between the current frame monitoring image and the historical frame monitoring image adjacent to the current frame monitoring image is continuously optimized, so that the error of the obtained scale change information of the object in the current frame monitoring image and the historical frame monitoring image adjacent to the current frame monitoring image can be reduced, and the stability of the determined adjusted distance information is improved.
S602, determining current first distance information based on the adjusted distance information.
For example, after the adjusted distance information is obtained, in order to further improve the accuracy of the adjusted distance information, the adjusted distance information may be further adjusted to obtain the current first distance information between the target vehicle and the object.
Specifically, before determining the current first distance information based on the adjusted distance information, as shown in fig. 7, the blind area monitoring method provided by the embodiment of the disclosure further includes the following steps S701 to S702:
s701, performing target detection on the current frame monitoring image, and determining the position information of a detection frame of an object contained in the current frame monitoring image.
S702, determining current second distance information based on the position information of the detection frame and calibration parameters of the acquisition equipment.
For example, before the target vehicle travels, the acquisition device provided on it may be calibrated. For instance, the acquisition device may be mounted on top of the target vehicle, as shown in fig. 6, with the target vehicle located midway between parallel lane lines and the optical axis of the acquisition device kept parallel to the horizontal ground and to the forward direction of the target vehicle. In this way, the focal length (f_x, f_y) of the acquisition device and its height H_c relative to the ground can be obtained through calibration.
For example, the object included in the current frame monitoring image and its corresponding detection frame may be obtained by performing target detection on the current frame monitoring image with a pre-trained target detection model. As shown in fig. 9, the position information of the detection frame may include the position information of the corner points of the detection frame in the current frame monitoring image, for example the pixel coordinate values of corner points A, B, C and D in the current frame monitoring image.
Further, according to the pinhole imaging principle, the following formulas (3) and (4) can be obtained:

D_0 = f_y × H_y / h_b (3);

D_0 = f_x × H_x / w_b (4);

wherein H_x represents the actual width of the object; H_y represents the actual height of the object relative to the ground; w_b represents the pixel width of the object in the current frame monitoring image, which can be determined from the pixel width of the detection frame ABCD of the object; h_b represents the pixel height of the object relative to the ground, which can be determined from the pixel height of the detection frame ABCD of the object; D_0 represents the current second distance information between the target vehicle and the object.
Illustratively, in one embodiment, H_x and H_y may be determined according to the type of the detected object. For example, when the object is a vehicle, the actual width and the actual height of that vehicle may be determined based on its detected type and a pre-stored correspondence between vehicle types and the heights and widths corresponding to the vehicles.
Illustratively, the pixel width w_b of the object in the current frame monitoring image may be determined from the pixel coordinate values of corner points A and B of the detection frame ABCD in fig. 9 in the current frame monitoring image, or from the pixel coordinate values of corner points C and D in the current frame monitoring image; the pixel height h_b of the object in the current frame monitoring image may be determined from the pixel coordinate values of corner points B and C in the current frame monitoring image, or from the pixel coordinate values of corner points A and D in the current frame monitoring image, which will not be described in detail here.
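For illustration only, a minimal Python sketch of formulas (3) and (4) under the pinhole model, using the corner conventions above; the function and variable names are hypothetical:

```python
def second_distance_from_size(fx, fy, actual_w, actual_h, box):
    """Pinhole-model distance per formulas (3)/(4).

    box: (x_tl, y_tl, x_br, y_br) pixel coordinates of detection frame ABCD.
    Returns the distance estimated from the object's height and width.
    """
    x_tl, y_tl, x_br, y_br = box
    wb = x_br - x_tl                     # pixel width w_b of the object
    hb = y_br - y_tl                     # pixel height h_b of the object
    d_from_height = fy * actual_h / hb   # formula (3): D_0 = f_y * H_y / h_b
    d_from_width = fx * actual_w / wb    # formula (4): D_0 = f_x * H_x / w_b
    return d_from_height, d_from_width
```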
Considering that there are cases where the type of the object cannot be identified, so that the actual height or the actual width of the object cannot be obtained directly, the embodiment of the disclosure is described taking the determination of the actual height of the object as an example. For the above S702, determining the current second distance information based on the position information of the detection frame and the calibration parameters of the acquisition device includes the following S7021 to S7022:
S7021, based on the position information of the detection frame, pixel coordinate values of the set corner points in the detection frame are acquired.
S7022, determining current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate values of the lane line vanishing point used in determining the calibration parameters of the acquisition device.
The principle of determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate values of the lane line vanishing point used in determining the calibration parameters of the acquisition device will be described below with reference to fig. 10:
For example, in the initial calibration process of the acquisition device, the target vehicle may be parked between parallel lane lines. When projected onto the image plane of the acquisition device, the distant parallel lane lines intersect at a point, which may be referred to as the lane line vanishing point; this point approximately coincides with point V in fig. 10 and may represent the projection position of the acquisition device in the monitoring image, and the pixel coordinate value of the lane line vanishing point may represent the pixel coordinate value of the acquisition device in the current frame monitoring image.
As shown in fig. 10, the distance between points E and G may represent the actual height H_c of the acquisition device relative to the ground; the distance between points F and G may represent the actual height H_y of the object relative to the ground; the distance between points M and N may represent the pixel height h_b of the object relative to the ground; the distance between points M and V may represent the pixel height of the acquisition device relative to the ground.
Further, as shown in fig. 10, according to the pinhole imaging principle, when the acquisition device captures the current frame monitoring image, the ratio of the actual height H_c of the acquisition device relative to the ground to the actual height H_y of the object relative to the ground is equal to the ratio of the pixel height of the acquisition device relative to the ground to the pixel height h_b of the object relative to the ground. In this way, after the pixel coordinate values of points M, V and N are determined, the pixel height h_b of the object relative to the ground and the pixel height of the acquisition device relative to the ground can be further determined, and the actual height H_y of the object relative to the ground can thereby be predicted.
Further, after predicting the actual height of the object relative to the ground, the current second distance information may be determined in conjunction with the above formula (3).
The principle of determining the current second distance information is described above in conjunction with fig. 10, and a specific procedure of determining the current second distance information will be described below in conjunction with fig. 11:
As shown in fig. 11, after distortion removal processing is performed on the current frame monitoring image, an image coordinate system is established for the current frame monitoring image, in which the pixel coordinate value of the lane line vanishing point V is (x_v, y_v), the pixel coordinate value of the upper-left corner point A of the detection frame of the object is (x_tl, y_tl), and the pixel coordinate value of the lower-right corner point C is (x_br, y_br). Further, the distance between points M and N shown in fig. 10 can be determined from the difference between the pixel coordinate values of corner points A and C in the y-axis direction, and the distance between points M and V shown in fig. 10 can be determined from the difference between the pixel coordinate values of corner point C and point V in the y-axis direction.
Specifically, the calibration parameters of the acquisition device comprise a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; for the above S7022, determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition device, and the pixel coordinate values of the lane line vanishing point used in determining the calibration parameters of the acquisition device includes the following S70221 to S70224:
S70221, determining a first pixel height value of the acquisition device relative to the ground based on the pixel coordinate values of the lane line vanishing point and the pixel coordinate values of the set corner points in the detection frame.
In combination with fig. 11 described above, the first pixel height value may be obtained as: y_br − y_v.
S70222, determining a second pixel height value of the object in the current frame monitoring image with respect to the ground based on the pixel coordinate values of the set corner points.
For example, the difference between the pixel coordinate values of the two corner points A and C along the y-axis in fig. 11 can be used as the second pixel height value, which may be denoted by h_b.
S70223, a second height value of the object relative to the ground is determined based on the first pixel height value, the second pixel height value, and the first height value.
Specifically, the second height value may be determined according to the following formula (5):

H_y = H_c × h_b / (y_br − y_v) (5);

wherein H_c represents the first height value, i.e., the actual height of the acquisition device relative to the ground, which can be obtained when the acquisition device is calibrated; H_y represents the second height value, i.e., the actual height of the object relative to the ground.
S70224, determining the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
Illustratively, the current second distance information may be determined by the above formula (3).
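Putting S70221 to S70224 together, a hedged sketch under the coordinate conventions of fig. 11 (vanishing point V at (x_v, y_v), corners A at (x_tl, y_tl) and C at (x_br, y_br)); names are illustrative:

```python
def second_distance_via_vanishing_point(fy, Hc, y_v, y_tl, y_br):
    """S70221-S70224: current second distance from the vanishing point."""
    h1 = y_br - y_v        # S70221: first pixel height value (device vs. ground)
    hb = y_br - y_tl       # S70222: second pixel height value h_b (object vs. ground)
    Hy = Hc * hb / h1      # S70223: second height value, formula (5)
    return fy * Hy / hb    # S70224: current second distance, formula (3)
```

Note that the two pixel heights cancel, so the distance reduces to f_y × H_c / (y_br − y_v); this is why the object's type is not needed in this branch.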
In the embodiment of the disclosure, under the condition that the complete detection frame corresponding to the object in the current frame monitoring image can be detected, the actual height value of the object can be obtained rapidly and accurately by introducing the pixel coordinate value of the lane line vanishing point and the calibration parameter of the acquisition equipment, and the current second distance information of the target vehicle and the object can be further determined rapidly and accurately.
After the current second distance information between the target vehicle and the object is obtained, determining the current first distance information based on the adjusted distance information in the above S602 includes the following steps S6031 to S6032:
S6031, determining distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information.
In this way, when determining the second distance information, if an accurate and complete detection frame of the object can be detected, second distance information of higher accuracy between the target vehicle and the object can be obtained based on the position information of the detection frame; otherwise, if an accurate detection frame of the object cannot be detected or the detected detection frame is incomplete, the accuracy of the obtained second distance information is lower. Therefore, the plurality of pieces of second distance information determined in this way have higher accuracy overall, but fluctuate more.
For example, each piece of historical first distance information is distance information determined based on multi-frame monitoring images, and the adjusted distance information is distance information adjusted based on the plurality of pieces of historical first distance information, so the fluctuation among the plurality of pieces of historical first distance information and the adjusted distance information obtained in this way is smaller. However, since the scale change information corresponding to two adjacent frames of monitoring images is used when determining the historical first distance information and the adjusted distance information, and the scale change information depends on the position information of the identified feature points of the object in the monitoring images, any errors are accumulated; the accuracy of the determined plurality of pieces of historical first distance information and of the adjusted distance information is therefore lower than the accuracy of the second distance information determined based on a complete detection frame.
Considering that the current second distance information and the historical second distance information determined based on the detection frame have high accuracy, while the historical first distance information and the adjusted distance information determined based on the scale change information have high stability, in order to obtain current first distance information with both high accuracy and high stability, the adjusted distance information can be further adjusted by combining the distance information between the target vehicle and the object determined in the two ways for the multi-frame monitoring images.
S6032, adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
For example, after obtaining the distance offset information, the adjusted distance information may be further adjusted based on the distance offset information, so that the current first distance information is more accurate.
In the embodiment of the disclosure, after the distance offset information is obtained, the adjusted distance information can be further adjusted, so that the distance information with higher current accuracy of the target vehicle and the object is obtained.
In one embodiment, when determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each of the plurality of historical frame monitoring images, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information, the following S60311 to S60313 may be included:
s60311, determining a first linear fitting coefficient of a first fitting curve which is formed by fitting the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the current second distance information based on the current second distance information and the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image.
Illustratively, the current second distance information may be denoted by D_0, and the plurality of pieces of historical second distance information may be denoted by D_1, D_2, D_3, …; a first fitted curve composed of the plurality of pieces of historical second distance information and the current second distance information may be obtained by fitting D_0 together with D_1, D_2, D_3, …, and the first fitted curve can be represented by the following formula (6):
y_1 = a·x + b·x² + c (6);
In the fitting process, the frame numbers 0, 1, 2, 3, … of the monitoring images used in determining the plurality of pieces of second distance information can be taken as the x values, and the second distance information D_0, D_1, D_2, D_3, … respectively corresponding to those frame numbers taken as the y values and input into formula (6); the first linear fitting coefficients a, b and c can then be obtained.
S60312, determining a second linear fitting coefficient of a second fitting curve which is formed by fitting the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information based on the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information.
Illustratively, the adjusted distance information may be denoted by D_0_scale, and the plurality of pieces of historical first distance information may be denoted by D_1_final, D_2_final, D_3_final, …; a second fitted curve composed of the plurality of pieces of historical first distance information and the adjusted distance information may be obtained by fitting D_0_scale together with D_1_final, D_2_final, D_3_final, …, and the second fitted curve can be represented by the following formula (7):
y_2 = a′·x + b′·x² + c′ (7);
In the fitting process, the frame numbers 0, 1, 2, 3, … of the monitoring images used in determining the plurality of pieces of historical first distance information and the adjusted distance information can be taken as the x values, and the adjusted distance information and the plurality of pieces of historical first distance information D_0_scale, D_1_final, D_2_final, D_3_final, … respectively corresponding to those frame numbers taken as the y values and input into formula (7); the second linear fitting coefficients a′, b′ and c′ can then be obtained.
S60313, distance offset information for the adjusted distance information is determined based on the first linear fitting coefficient and the second linear fitting coefficient.
Illustratively, the distance offset information may be determined by the following formula (8):
L = (a/a′ + b/b′ + c/c′) / 3 (8);
The adjusted distance information can then be adjusted using the distance offset information determined in this way according to the following formula (9), to obtain the current first distance information D_0_final:
D_0_final = D_0_scale × L (9);
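A compact sketch of S60311 to S60313 using numpy's polynomial fitting, assuming the quadratic forms of formulas (6) and (7); the function and argument names are illustrative:

```python
import numpy as np

def current_first_distance(second_dists, adjusted_and_first_dists):
    """second_dists:            [D_0, D_1, D_2, ...] from detection frames
    adjusted_and_first_dists:   [D_0_scale, D_1_final, D_2_final, ...]
    Returns (L, D_0_final) per formulas (6)-(9)."""
    x = np.arange(len(second_dists))

    # np.polyfit returns coefficients highest power first, i.e. [b, a, c]
    # for y = a*x + b*x**2 + c, matching formulas (6) and (7).
    b, a, c = np.polyfit(x, second_dists, 2)                  # first linear fitting coefficients
    b2, a2, c2 = np.polyfit(x, adjusted_and_first_dists, 2)   # second linear fitting coefficients

    L = (a / a2 + b / b2 + c / c2) / 3.0                      # formula (8)
    d0_final = adjusted_and_first_dists[0] * L                # formula (9)
    return L, d0_final
```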
In another embodiment, when determining the distance offset information for the adjusted distance information based on the current second distance information, the historical second distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring images, the historical first distance information corresponding to that frame of historical frame monitoring image, and the adjusted distance information, the distance offset information may also be determined by a Kalman filtering algorithm, and the current first distance information may then be determined based on the Kalman filtering algorithm.
In determining the current first distance information based on the Kalman filtering algorithm, the determination may be made by the following formula (10):
D_0_final = kal(D_0_scale, D_0, R, Q) (10);
wherein R represents the variance of D_0_scale and D_1_final, D_2_final, D_3_final, …; Q represents the variance of D_0 and D_1, D_2, D_3, …; the distance offset information for D_0_scale can be determined from R and Q, and the adjusted distance information can then be corrected based on the distance offset information to obtain current first distance information with higher accuracy.
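As a hedged one-dimensional illustration of formula (10): the patent does not spell out the filter internals here, so the gain below is a standard Kalman-style fusion rather than necessarily the exact kal used; all values are illustrative.

```python
import numpy as np

def kal(d_scale, d_det, R, Q):
    """Fuse the two distance estimates with a Kalman-style gain.

    d_scale : D_0_scale, smooth but drift-prone (scale-change based)
    d_det   : D_0, accurate but fluctuating (detection-frame based)
    R, Q    : variances of the two distance series
    """
    K = R / (R + Q)              # trust the detection more when R is large
    return d_scale + K * (d_det - d_scale)

# Variances taken over the two series, as described above (illustrative values):
first_series = [10.2, 10.5, 10.9, 11.3]    # D_0_scale, D_1_final, D_2_final, ...
second_series = [10.0, 10.8, 10.7, 11.6]   # D_0, D_1, D_2, ...
R, Q = np.var(first_series), np.var(second_series)
d0_final = kal(first_series[0], second_series[0], R, Q)
```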
After the current first distance information is determined, the target object in the visual field blind area of the target vehicle can be determined according to each object in the current frame image and the current first distance information of the target vehicle.
For example, the blind area of the field of view of the target vehicle may be determined according to the manner in which the blind area of the field of view of the target vehicle is determined as shown in fig. 2 described above. In addition, using the current first distance information and the determined blind area of the field of view of the target vehicle, it is possible to determine an object, that is, a target object, which falls in the blind area of the field of view among the detected objects.
Here, the target object may include, for example, the other driving vehicles, pedestrians, road facilities, and road obstacles among the above objects.
For S104, the monitoring result may be generated according to the type information and the position of the target object, and the driving state of the target vehicle.
Here, since the generated monitoring result is determined based on images, it is generally necessary to generate monitoring results continuously over a continuous period of time when using the monitoring results to guide an autonomous vehicle or to assist a driver. During this continuous period, multiple images are acquired, but they are discrete with respect to the continuous period. In order to monitor the target object more accurately, tracking smoothing processing, for example interpolation, can be performed when object monitoring is performed on the continuous frame images, so that accuracy is further improved.
In addition, the tracking smoothing processing can determine the distance between the target object and the target vehicle over continuous time from the multiple frames of monitoring images corresponding to discrete times, so this approach can also relieve the pressure on the acquisition device of having to acquire multiple frames of monitoring images rapidly and continuously, and reduce equipment wear.
For example, in order to make the obtained monitoring results sufficiently dense in time to ensure the safety of the target vehicle during running, one frame of monitoring image may need to be acquired every 0.1 seconds; if one frame of monitoring image is acquired every 0.5 seconds, a sudden impact may occur within those 0.5 seconds when the vehicle speed is high, i.e., safety cannot be ensured. However, acquiring one frame of monitoring image every 0.1 seconds requires greater power consumption from the acquisition device than acquiring one frame every 0.2 seconds; by acquiring a frame every 0.2 seconds and using interpolation to predict an accurate monitoring result at the intermediate 0.1-second instants, safety can still be guaranteed at lower cost.
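A minimal sketch of that interpolation idea, with illustrative numbers: frames are acquired every 0.2 seconds and the intermediate 0.1-second distances are predicted rather than measured.

```python
import numpy as np

# Distances measured from frames acquired every 0.2 s (illustrative values).
t_frames = np.array([0.0, 0.2, 0.4, 0.6])
d_frames = np.array([12.0, 11.1, 10.3, 9.4])

# Predict the distance at the intermediate 0.1 s instants by linear
# interpolation instead of acquiring extra frames.
t_query = np.array([0.1, 0.3, 0.5])
d_pred = np.interp(t_query, t_frames, d_frames)   # -> [11.55, 10.7, 9.85]
```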
In addition, after the target object is determined, different target objects at different positions have different influences on the target vehicle, so that corresponding monitoring results can be determined for different target objects by combining type information, positions and driving states of the target object.
Specifically, the monitoring result may include, for example, alarm information. The driving state of the target vehicle may include, for example, steering information of the target vehicle.
In a specific implementation, when the monitoring result is generated according to the type information and the position of the target object and the driving state of the target vehicle, for example, the following manner may be adopted: determining the level of alarm information according to the type information and the position of the target object and the steering information of the target vehicle; and generating and prompting the alarm information of the determined level.
The type information of the target object may include, for example, pedestrians. Since the target object is located in the blind area of the field of view of the target vehicle, that is, the target vehicle may affect the driving safety due to the blind area of the field of view of the target vehicle, the monitoring result including the warning information may be generated.
In one possible embodiment, when the steering information of the target vehicle indicates that the target vehicle is turning left and the position of the target object indicates that the target object is in the blind area on the left side of the target vehicle, or when the steering information of the target vehicle indicates that the target vehicle is turning right and the position of the target object indicates that the target object is in the blind area on the right side of the target vehicle, the target vehicle is considered to have a larger influence on the safety of the target object during driving; for example, the target vehicle may collide with a pedestrian while turning, and the monitoring result may correspond to the highest level. Here, for example, the monitoring results may be divided into a plurality of levels, such as a first level, a second level, a third level, and a fourth level; the higher the level (the first level being the highest), the larger the indicated influence on the driving safety of the target vehicle, and the more urgent the corresponding warning information.
Taking the first monitoring result as an example, the first monitoring result corresponds to the first level and includes a "beep" alert tone emitted at a higher frequency, or a voice prompt message such as "currently too close to the vehicle, please drive carefully".
In addition, the first monitoring result can be further refined according to the position of the target object. Taking as an example that the driving state of the target vehicle indicates turning to the left and the position of the target object indicates that the target object is on the left side of the target vehicle, if the target vehicle keeps approaching the target object, the frequency of the "beep" sound is gradually increased, or warning information with more precise prompts is generated, such as "currently 1 meter from the pedestrian on the left" and "currently 0.5 meters from the pedestrian on the left".
In another possible embodiment, when the steering information of the target vehicle indicates that the target vehicle is turning left and the position of the target object indicates that the target object is in the blind area on the right side of the target vehicle, or when the steering information of the target vehicle indicates that the target vehicle is turning right and the position of the target object indicates that the target object is in the blind area on the left side of the target vehicle, the target vehicle is considered to have a certain influence on safety during driving; for example, a pedestrian may be collided with when approaching the target vehicle. The monitoring result may correspond to the second level and include a "beep" tone emitted at a lower frequency than that of the monitoring result corresponding to the first level, or a voice prompt message such as "currently close to a pedestrian, please drive carefully".
In addition, the type information of the target object may also include, for example, a vehicle; here, the vehicle is another vehicle other than the target vehicle.
Similar to the above manner of determining the monitoring result when the type information indicates that the target object is a pedestrian, in one possible implementation, when the steering information of the target vehicle indicates that the target vehicle is turning left and the position of the current target object indicates that the target object is in the blind area on the left side of the target vehicle, or when the steering information of the target vehicle indicates that the target vehicle is turning right and the position of the current target object indicates that the target object is in the blind area on the right side of the target vehicle, the influence of the target vehicle on safety during driving is considered to be large; for example, the target vehicle may collide with another vehicle while turning, and the monitoring result may correspond to the third level.
In addition, the monitoring result can be further refined according to the driving state. Taking as an example that the driving state of the target vehicle indicates a left turn and the monitoring result indicates that the target object is on the left side of the target vehicle, if the monitoring result indicates that the target vehicle keeps approaching the target object, the frequency of the "beep" sound is gradually increased, or warning information with more precise prompts is generated, such as "currently 1 meter from the vehicle on the left" and "currently 0.5 meters from the vehicle on the left".
In another possible embodiment, when the steering information of the target vehicle indicates that the target vehicle is turning left and the position of the target object indicates that the target object is in the blind area on the right side of the target vehicle, or when the steering information of the target vehicle indicates that the target vehicle is turning right and the position of the target object indicates that the target object is in the blind area on the left side of the target vehicle, the target object is considered to have a certain influence on the safety of the target vehicle during driving; for example, another vehicle may collide with the target vehicle while it is driving. The monitoring result may correspond to the fourth level and include a "beep" tone emitted at a lower frequency than that of the monitoring result corresponding to the third level, or a voice prompt message such as "currently close to a vehicle, please drive carefully".
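The four cases above can be summarized as a small decision rule. The sketch below follows the level assignment of the examples (first level = most urgent), while the function and argument names are purely illustrative:

```python
def alarm_level(object_type, object_side, turn_direction):
    """Map target-object type/position and steering information to the
    alarm levels used in the examples above."""
    same_side = (object_side == turn_direction)   # e.g. left turn, left blind area
    if object_type == "pedestrian":
        return 1 if same_side else 2
    if object_type == "vehicle":
        return 3 if same_side else 4
    return None                                   # no rule given above for other types

assert alarm_level("pedestrian", "left", "left") == 1   # highest level
assert alarm_level("vehicle", "right", "left") == 4
```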
In this way, by generating the warning information more accurately and rapidly, the driver of the target vehicle can be guided to drive more safely.
In addition, the monitoring result may also include, for example, a vehicle control instruction. Correspondingly, the driving state of the target vehicle may include, for example, steering information of the target vehicle.
Here, since the type information and the position of the target object can be acquired more efficiently and accurately, the monitoring result including the vehicle control instruction can also be generated to control safe driving of the running apparatus and the like. Wherein the driving means is for example, but not limited to, any of the following: an autonomous vehicle, a vehicle equipped with an advanced driving assistance system (Advanced Driving Assistance System, ADAS), or a robot, etc.
In a specific implementation, when the monitoring result is generated according to the type information and the position of the target object and the driving state of the target vehicle, for example, the vehicle control instruction may be generated according to the type information and the position of the target object and the steering information of the target vehicle.
Here, when the vehicle control instruction is generated according to the type information and the position of the target object and the steering information of the target vehicle, the vehicle control instruction to be generated can, for example, be determined from the type information and the position of the target object, so that the target vehicle can be controlled to avoid colliding with the target object and safe running is ensured.
Therefore, the monitoring result is more beneficial to being deployed in the intelligent running device, the safety of the intelligent running device in the automatic driving control process is improved, and the requirements in the automatic driving field can be better met.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same technical concept, the embodiment of the disclosure further provides a blind area monitoring device corresponding to the blind area monitoring method, and since the principle of solving the problem by the device in the embodiment of the disclosure is similar to that of the blind area monitoring method in the embodiment of the disclosure, the implementation of the device can be referred to the implementation of the method, and the repetition is omitted.
Referring to fig. 12, a schematic diagram of a blind area monitoring device according to an embodiment of the disclosure is provided, where the blind area monitoring device includes: an acquisition module 121, a detection module 122, a determination module 123, and a generation module 124; wherein:
an acquisition module 121, configured to acquire a current frame monitoring image acquired by an acquisition device on a target vehicle;
the detection module 122 is configured to perform object detection on the current frame monitoring image to obtain type information and a position of an object included in the image;
a determining module 123, configured to determine a target object located in a blind field of view of the target vehicle according to a position of the object and the blind field of view of the target vehicle;
and the generating module 124 is configured to generate a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle.
In one possible implementation manner, the monitoring result includes warning information, and the driving state of the target vehicle includes steering information of the target vehicle; the generating module 124 is configured to, when generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle: determining the level of alarm information according to the type information and the position of the target object and the steering information of the target vehicle; and generating and prompting the alarm information of the determined level.
In one possible implementation, the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle; the generating module 124 is configured to, when generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle: generating the vehicle control instruction according to the type information and the position of the target object and the steering information of the target vehicle; the blind zone monitoring device further includes a control module 125 for: and controlling the target vehicle to run based on the vehicle control instruction.
In one possible implementation, the determining module 123 is configured to, when determining the target object located in the blind field of view of the target vehicle according to the position of the object and the blind field of view of the target vehicle: determining current first distance information of the target vehicle and the object in the current frame monitoring image according to the position of the object in the current frame monitoring image; and determining the target object positioned in the blind area of the visual field of the target vehicle according to the current first distance information.
In a possible implementation manner, the determining module 123 is configured to, when determining, according to the position of the object in the current frame monitoring image, current first distance information between the target vehicle and the object in the current frame monitoring image: determining distance information to be adjusted between the target vehicle and the object in the current frame monitoring image based on the current frame monitoring image; and adjusting the distance information to be adjusted based on the scale change information between scales in two adjacent frames of multi-frame historical frame monitoring images acquired by the acquisition equipment and the historical first distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring images to obtain the current first distance information between the target vehicle and the object.
In a possible implementation manner, the determining module 123 is configured to, when adjusting the distance information to be adjusted to obtain current first distance information between the target vehicle and the object: adjusting the distance information to be adjusted until the error amount of the scale change information is minimum, and obtaining adjusted distance information; the error amount is determined based on the distance information to be adjusted, the scale change information and historical first distance information corresponding to each frame of historical frame monitoring image in the multi-frame historical frame monitoring image; and determining the current first distance information based on the adjusted distance information.
In a possible implementation, before determining the current first distance information based on the adjusted distance information, the determining module 123 is further configured to: determining current second distance information based on the position of the object in the current frame monitoring image and calibration parameters of the acquisition equipment; the determining module 123 is configured to, when determining the current first distance information based on the adjusted distance information: determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring image, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information; and adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
In one possible implementation, the determining module 123 is configured to, when determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each of the plurality of frames of historical frame monitoring images, the historical first distance information corresponding to the frame of historical frame monitoring images, and the adjusted distance information: determining a first linear fitting coefficient of a first fitting curve which is formed by fitting the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the current second distance information based on the current second distance information and the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image; determining a second linear fitting coefficient of a second fitting curve which is formed by fitting the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information based on the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information; distance offset information for the adjusted distance information is determined based on the first linear fit coefficient and the second linear fit coefficient.
In a possible implementation manner, the determining module 123 is configured to, when determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameter of the acquisition device: acquiring pixel coordinate values of set corner points in a detection frame based on the position information of the object in the current frame monitoring image; and determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition equipment and the pixel coordinate values of the lane line vanishing points used in determining the calibration parameters of the acquisition equipment.
In one possible embodiment, the calibration parameters of the acquisition device include a first height value of the acquisition device relative to the ground and a focal length of the acquisition device; the determining module 123 is configured to, when determining the current second distance information based on the pixel coordinate values of the set corner point, the calibration parameters of the collecting device, and the pixel coordinate values of the lane line vanishing point used when determining the calibration parameters of the collecting device: determining a first pixel height value of the acquisition equipment relative to the ground based on the pixel coordinate value of the lane line vanishing point and the pixel coordinate value of the set corner point in the detection frame; determining a second pixel height value of the object in the current frame monitoring image relative to the ground based on the pixel coordinate values of the set corner points; determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value; the current second distance information is determined based on the second height value, the focal length of the acquisition device, and the second pixel height value.
In a possible implementation manner, the determining module 123 is configured, when determining, based on the current frame monitoring image, information on a distance to be adjusted between the target vehicle and the object in the current frame monitoring image, to: acquiring scale change information between a scale of the object in the current frame monitoring image and a scale in a historical frame monitoring image adjacent to the current frame monitoring image; and determining the distance information to be adjusted based on the scale change information and the historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image.
In one possible implementation, the determining module 123 determines the scale change information of the object between scales in two adjacent frames of monitoring images in the following manner: respectively extracting first position information of a plurality of feature points contained in the object in a previous frame of monitoring image and second position information in a next frame of monitoring image in the two adjacent frames of monitoring images; and determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first position information and the second position information.
In a possible implementation manner, the determining module 123 is configured to, when determining, based on the first location information and the second location information, scale change information of the object between scales in two adjacent frames of monitoring images: determining a first scale value of a target line segment formed by a plurality of feature points contained in the object in the previous frame of monitoring image based on the first position information; determining a second scale value of the target line segment in the monitoring image of the later frame based on the second position information; and determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first scale value and the second scale value.
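As a hedged sketch of this scale-change computation: measure the lengths of target line segments between the object's feature points in each of the two adjacent frames, and take the length ratio as the scale change; the function and variable names are illustrative.

```python
import numpy as np

def scale_change(pts_prev, pts_next):
    """pts_prev, pts_next: (N, 2) pixel coordinates of the same N feature
    points of the object in the previous and the next monitoring image."""
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_next = np.asarray(pts_next, dtype=float)
    i, j = np.triu_indices(len(pts_prev), k=1)    # every feature-point pair
    first = np.linalg.norm(pts_prev[i] - pts_prev[j], axis=1)   # first scale values
    second = np.linalg.norm(pts_next[i] - pts_next[j], axis=1)  # second scale values
    return float(np.mean(second / first))         # scale change between the frames
```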
Corresponding to the blind area monitoring method in fig. 1, the embodiment of the disclosure further provides an electronic device 1300, as shown in fig. 13, which is a schematic structural diagram of the electronic device 1300 provided in the embodiment of the disclosure, including:
a processor 10, a memory 20, and a bus 30; memory 20 is used to store execution instructions, including memory 210 and external memory 220; the memory 210 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 10 and data exchanged with the external memory 220 such as a hard disk, and the processor 10 exchanges data with the external memory 220 through the memory 210, and when the electronic device 1300 is running, the processor 10 and the memory 20 communicate with each other through the bus 30, so that the processor 10 executes the following instructions: acquiring a current frame monitoring image acquired by acquisition equipment on a target vehicle; performing object detection on the current frame monitoring image to obtain type information and position of an object included in the current frame monitoring image; determining a target object positioned in a visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle; and generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the blind zone monitoring method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and instructions included in the program code may be used to perform the steps of the blind area monitoring method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. The blind area monitoring method is characterized by comprising the following steps of:
acquiring a current frame monitoring image acquired by acquisition equipment on a target vehicle;
performing object detection on the current frame monitoring image to obtain type information and position of an object included in the current frame monitoring image;
Determining a target object positioned in a visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle;
generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle;
wherein the determining the target object located in the blind field of view of the target vehicle according to the position of the object and the blind field of view of the target vehicle comprises: acquiring scale change information between a scale of the object in the current frame monitoring image and a scale in a historical frame monitoring image adjacent to the current frame monitoring image; determining distance information to be adjusted based on the scale change information and historical first distance information corresponding to a historical frame monitoring image adjacent to the current frame monitoring image; adjusting the distance information to be adjusted until the error amount of the scale change information is minimum, and obtaining adjusted distance information; the error amount is determined based on the distance information to be adjusted, the scale change information and historical first distance information corresponding to each frame of historical frame monitoring image in the multi-frame historical frame monitoring images; determining current first distance information based on the adjusted distance information; and determining the target object positioned in the blind area of the visual field of the target vehicle according to the current first distance information.
2. The blind area monitoring method according to claim 1, wherein the monitoring result includes warning information, and the driving state of the target vehicle includes steering information of the target vehicle;
and generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle, wherein the monitoring result comprises the following steps:
determining the level of alarm information according to the type information and the position of the target object and the steering information of the target vehicle;
and generating and prompting the alarm information of the determined level.
3. The blind area monitoring method according to claim 1, wherein the monitoring result includes a vehicle control instruction, and the driving state of the target vehicle includes steering information of the target vehicle;
and generating a monitoring result according to the type information and the position of the target object and the driving state of the target vehicle, wherein the monitoring result comprises the following steps:
generating the vehicle control instruction according to the type information and the position of the target object and the steering information of the target vehicle;
the blind area monitoring method further comprises the following steps: and controlling the target vehicle to run based on the vehicle control instruction.
4. The blind zone monitoring method according to claim 1, characterized in that the blind zone monitoring method further comprises, before determining the current first distance information based on the adjusted distance information:
determining current second distance information based on the position of the object in the current frame monitoring image and calibration parameters of the acquisition equipment;
the determining the current first distance information based on the adjusted distance information includes:
determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each frame of historical frame monitoring image in the multi-frame historical frame monitoring image, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information;
and adjusting the adjusted distance information based on the distance offset information to obtain the current first distance information.
5. The blind zone monitoring method according to claim 4, wherein the determining distance offset information for the adjusted distance information based on the current second distance information, historical second distance information between the object and the target vehicle in each of the plurality of frames of historical frame monitoring images, the historical first distance information corresponding to the frame of historical frame monitoring image, and the adjusted distance information includes:
Determining a first linear fitting coefficient of a first fitting curve which is formed by fitting the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the current second distance information based on the current second distance information and the historical second distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image;
determining a second linear fitting coefficient of a second fitting curve which is formed by fitting the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information based on the historical first distance information corresponding to each frame of history frame monitoring image in the multi-frame history frame monitoring image and the adjusted distance information;
distance offset information for the adjusted distance information is determined based on the first linear fit coefficient and the second linear fit coefficient.
6. The blind zone monitoring method according to claim 4 or 5, characterized in that the determining the current second distance information based on the position of the object in the current frame monitoring image and the calibration parameters of the acquisition device includes:
Acquiring pixel coordinate values of set corner points in a detection frame based on the position information of the object in the current frame monitoring image;
and determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the acquisition equipment and the pixel coordinate values of the lane line vanishing points used in determining the calibration parameters of the acquisition equipment.
7. The blind zone monitoring method of claim 6 wherein the calibration parameters of the acquisition device include a first height value of the acquisition device relative to ground and a focal length of the acquisition device;
the determining the current second distance information based on the pixel coordinate values of the set corner points, the calibration parameters of the collecting device, and the pixel coordinate values of the lane line vanishing points used in determining the calibration parameters of the collecting device includes:
determining a first pixel height value of the acquisition equipment relative to the ground based on the pixel coordinate value of the lane line vanishing point and the pixel coordinate value of the set corner point in the detection frame;
determining a second pixel height value of the object in the current frame monitoring image relative to the ground based on the pixel coordinate values of the set corner points;
determining a second height value of the object relative to the ground based on the first pixel height value, the second pixel height value, and the first height value;
and determining the current second distance information based on the second height value, the focal length of the acquisition device, and the second pixel height value.
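Claims 6 and 7 describe classical monocular ranging against the ground plane. A minimal sketch, assuming the set corner points are the top and bottom corners of the detection frame and that the lane line vanishing point lies on the horizon; these assumptions and all names are illustrative, not taken from the patent:

```python
def current_second_distance(v_vanish, v_top, v_bottom, cam_height_m, focal_px):
    """Hypothetical realization of claims 6-7 (flat-ground assumption).

    v_vanish        -- y pixel coordinate of the lane line vanishing point
    v_top, v_bottom -- y pixel coordinates of the set corner points, taken
                       here as the top and bottom of the detection frame
    cam_height_m    -- first height value: camera height above the ground (m)
    focal_px        -- focal length of the acquisition device, in pixels
    """
    # First pixel height value: the camera's height in pixels at the
    # object's depth (the vanishing point marks the horizon in the image).
    h_cam_px = v_bottom - v_vanish
    # Second pixel height value: the object's height in the image.
    h_obj_px = v_bottom - v_top
    # Second height value: the object's metric height, by similar triangles.
    obj_height_m = cam_height_m * h_obj_px / h_cam_px
    # Current second distance information, from the pinhole model.
    return focal_px * obj_height_m / h_obj_px
```

Because obj_height_m carries a factor of h_obj_px, the result reduces to focal_px * cam_height_m / h_cam_px: on a flat road the range depends only on the bottom corner point and the vanishing point, which is consistent with claim 6 requiring no more than the set corner points, the calibration parameters, and the vanishing point.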
8. The blind area monitoring method according to claim 1, characterized in that the scale change information of the object between scales in two adjacent frames of monitoring images is determined in the following manner:
extracting, from the two adjacent frames of monitoring images, first position information of a plurality of feature points contained in the object in the previous frame of monitoring image and second position information of the plurality of feature points in the subsequent frame of monitoring image, respectively;
and determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first position information and the second position information.
9. The blind area monitoring method according to claim 8, wherein the determining scale change information of the object between scales in two adjacent frames of monitoring images based on the first position information and the second position information includes:
determining, based on the first position information, a first scale value of a target line segment formed by the plurality of feature points contained in the object in the previous frame of monitoring image;
determining, based on the second position information, a second scale value of the target line segment in the subsequent frame of monitoring image;
and determining the scale change information of the object between scales in the two adjacent frames of monitoring images based on the first scale value and the second scale value.
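A minimal sketch of claims 8 and 9, assuming the scale value of the target line segment is the mean pairwise distance between the object's feature points; the claims require only some scale value per frame, so this choice, like the names below, is an assumption:

```python
import math
from itertools import combinations

def segment_scale(points):
    """Scale value of the target line segment(s): here, the mean pairwise
    distance between the object's feature points (at least two required)."""
    pairs = list(combinations(points, 2))
    if not pairs:
        raise ValueError("need at least two feature points")
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in pairs) / len(pairs)

def scale_change(prev_points, next_points):
    """Hypothetical reading of claims 8-9: ratio of the second scale value
    (subsequent frame) to the first scale value (previous frame); the two
    point lists must correspond one-to-one."""
    return segment_scale(next_points) / segment_scale(prev_points)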
10. A blind area monitoring device, comprising:
an acquisition module, configured to acquire a current frame monitoring image captured by an acquisition device on a target vehicle;
a detection module, configured to perform object detection on the current frame monitoring image to obtain type information and a position of an object included in the image;
a determining module, configured to determine a target object located in a visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle;
a generation module, configured to generate a monitoring result according to the type information and the position of the target object and a driving state of the target vehicle;
wherein the determining module is specifically configured to, when determining the target object located in the visual field blind area of the target vehicle according to the position of the object and the visual field blind area of the target vehicle: acquire scale change information between a scale of the object in the current frame monitoring image and a scale in a historical frame monitoring image adjacent to the current frame monitoring image; determine distance information to be adjusted based on the scale change information and historical first distance information corresponding to the historical frame monitoring image adjacent to the current frame monitoring image; adjust the distance information to be adjusted until an error amount of the scale change information is minimized, to obtain adjusted distance information, wherein the error amount is determined based on the distance information to be adjusted, the scale change information, and the historical first distance information corresponding to each frame of historical frame monitoring image in the multiple frames of historical frame monitoring images; determine current first distance information based on the adjusted distance information; and determine the target object located in the visual field blind area of the target vehicle according to the current first distance information.
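The error amount in the adjustment step is not given in closed form. A minimal sketch, assuming the common monocular relation that the scale change s_i between historical frame i and the current frame approximately equals d_i / d (historical first distance over current distance), in which case minimizing E(d) = sum_i (s_i - d_i / d)^2 over u = 1/d is ordinary least squares and needs no iterative loop; all names are illustrative:

```python
import numpy as np

def adjust_distance(hist_first, scale_changes, dist_to_adjust):
    """Hypothetical closed form for the determining module's adjustment.

    hist_first     -- historical first distance information d_i per frame
    scale_changes  -- scale change s_i between each historical frame and
                      the current frame
    dist_to_adjust -- the distance information to be adjusted (fallback)

    Assumes s_i ~= d_i / d; minimizing E(d) = sum_i (s_i - d_i / d)^2
    over u = 1/d is ordinary least squares in u.
    """
    d = np.asarray(hist_first, dtype=float)
    s = np.asarray(scale_changes, dtype=float)
    u = (s @ d) / (d @ d)            # least-squares minimizer for u = 1/d
    return 1.0 / u if u > 0 else dist_to_adjust
```

With dist_to_adjust taken from the adjacent frame's historical first distance and the adjacent scale change, as the claim recites, the closed form plays the role of the "adjust until the error amount is minimal" step.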
11. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate with each other via the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the blind area monitoring method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, performs the steps of the blind area monitoring method according to any one of claims 1 to 9.
CN202110467776.4A 2021-04-28 2021-04-28 Blind area monitoring method and device, electronic equipment and storage medium Active CN113103957B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110467776.4A CN113103957B (en) 2021-04-28 2021-04-28 Blind area monitoring method and device, electronic equipment and storage medium
PCT/CN2022/084399 WO2022228023A1 (en) 2021-04-28 2022-03-31 Blind area monitoring method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110467776.4A CN113103957B (en) 2021-04-28 2021-04-28 Blind area monitoring method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113103957A CN113103957A (en) 2021-07-13
CN113103957B true CN113103957B (en) 2023-07-28

Family

ID=76720492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110467776.4A Active CN113103957B (en) 2021-04-28 2021-04-28 Blind area monitoring method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113103957B (en)
WO (1) WO2022228023A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113103957B (en) * 2021-04-28 2023-07-28 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium
CN116184992A (en) * 2021-11-29 2023-05-30 上海商汤临港智能科技有限公司 Vehicle control method, device, electronic equipment and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4045862B2 (en) * 2002-06-03 2008-02-13 日産自動車株式会社 Optical axis deviation detection device for in-vehicle camera
US8164628B2 (en) * 2006-01-04 2012-04-24 Mobileye Technologies Ltd. Estimating distance to an object using a sequence of images recorded by a monocular camera
CN105279770A (en) * 2015-10-21 2016-01-27 浪潮(北京)电子信息产业有限公司 Target tracking control method and device
CN106524922B (en) * 2016-10-28 2019-01-15 深圳地平线机器人科技有限公司 Ranging calibration method, device and electronic equipment
CN110386065B (en) * 2018-04-20 2021-09-21 比亚迪股份有限公司 Vehicle blind area monitoring method and device, computer equipment and storage medium
CN108596116B (en) * 2018-04-27 2021-11-05 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium
WO2020037604A1 (en) * 2018-08-23 2020-02-27 深圳市锐明技术股份有限公司 Automobile blind area monitoring and alarming method and apparatus, device and storage medium
WO2020151560A1 (en) * 2019-01-24 2020-07-30 杭州海康汽车技术有限公司 Vehicle blind spot detection method, apparatus and system
CN111942282B (en) * 2019-05-17 2022-09-06 比亚迪股份有限公司 Vehicle and driving blind area early warning method, device and system thereof and storage medium
CN111998780B (en) * 2019-05-27 2022-07-01 杭州海康威视数字技术股份有限公司 Target ranging method, device and system
CN111829484B (en) * 2020-06-03 2022-05-03 江西江铃集团新能源汽车有限公司 Target distance measuring and calculating method based on vision
CN112489136B (en) * 2020-11-30 2024-04-16 商汤集团有限公司 Calibration method, position determination device, electronic equipment and storage medium
CN113103957B (en) * 2021-04-28 2023-07-28 上海商汤临港智能科技有限公司 Blind area monitoring method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113103957A (en) 2021-07-13
WO2022228023A1 (en) 2022-11-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40049824; Country of ref document: HK

GR01 Patent grant