CN112937446A - Blind area video acquisition method and system - Google Patents
Blind area video acquisition method and system
- Publication number
- CN112937446A (application number CN202110401703.5A)
- Authority
- CN
- China
- Prior art keywords
- blind area
- threshold
- automobile
- video
- detecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/023—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
- B60R16/0231—Circuits relating to the driving or the functioning of the vehicle
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
- B60R2300/202—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiment of the application provides a blind area video acquisition method and system, wherein the method comprises the following steps: detecting an obstruction in front of the automobile after the automobile is detected to be started; calculating the area of the obstruction when an obstruction is detected in front of the automobile; starting blind area video acquisition if the length of time for which the area of the obstruction is greater than or equal to a first threshold reaches a second threshold; and displaying the acquired blind area video. By detecting the obstruction in front of the automobile and calculating its area, the acquisition and display of the blind area video are started automatically. When the obstruction is large enough to possibly affect the driver's safe driving, the driver can see objects within the blind area of his or her line of sight and can make reasonable driving decisions based on the blind area video, thereby avoiding accidents.
Description
Technical Field
The present application relates to the field of safe driving of vehicles, and more particularly, to a blind area video capturing method and system.
Background
With the development of society and the economy, automobile travel has become an important part of people's daily transportation. There are more and more vehicles on the road. When a small vehicle traveling on the road encounters a large vehicle ahead, the driver of the small vehicle often wants to change lanes and/or overtake because the large vehicle travels relatively slowly. Illustratively, in China the driver's seat is in the left front row of the vehicle, and according to domestic driving habits the driver generally changes lanes and/or overtakes from the left side. However, when the driver cannot change lanes and/or overtake from the left side and has to do so from the right side, the right-side line of sight is blocked by the vehicle in front because the driver sits on the left side of the vehicle; the blind area is therefore large, and accidents easily occur.
Disclosure of Invention
The embodiment of the application provides a blind area video acquisition method and system.
According to the blind area video acquisition method, the acquisition and display of the blind area video are started automatically by detecting the obstruction in front of the automobile and calculating its area, so that the driver can see objects within the blind area of his or her line of sight and accidents are avoided.
The method may be performed by a blind spot video acquisition system.
Specifically, the method comprises the following steps: detecting an obstruction in front of the automobile after the automobile is detected to be started; calculating the area of the obstruction when an obstruction is detected in front of the automobile; starting blind area video acquisition if the length of time for which the area of the obstruction is greater than or equal to a first threshold reaches a second threshold; and displaying the acquired blind area video.
Based on the above scheme, by detecting the obstruction in front of the automobile and calculating its area, the acquisition and display of the blind area video are started automatically when the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold. When the obstruction is large enough to possibly affect the driver's safe driving, the driver can see objects within the blind area of his or her line of sight and can make reasonable driving decisions based on the blind area video, thereby avoiding accidents.
Optionally, the method further comprises: if the length of time for which the area of the obstruction is smaller than the first threshold reaches a third threshold, ending the display of the blind area video, or ending both the acquisition and the display of the blind area video.
Optionally, detecting the obstruction in front of the automobile after detecting that the automobile is started includes: detecting the speed of the automobile after detecting that the automobile is started; and detecting the obstruction in front of the automobile when the speed of the automobile is greater than zero.
Optionally, detecting the obstruction in front of the automobile comprises: detecting the obstruction in front of the automobile by using one of a monocular camera, a panoramic camera, a binocular camera, a multi-view camera, a laser radar, a millimeter-wave radar, or an ultrasonic radar.
Optionally, starting blind area video acquisition if the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold includes: acquiring the blind area video by using a single-side camera or a panoramic camera if the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold.
Optionally, a camera for collecting the blind area video is mounted on the back of the rearview mirror of the automobile.
Optionally, displaying the acquired blind area video includes: displaying the acquired blind area video on the display screen in any one of a floating-window, full-screen, or split-screen mode.
Optionally, the method further comprises: detecting the operating state of an engine or a motor; and determining that the automobile is started when the engine or the motor is detected to have started.
In a second aspect, a blind area video acquisition system is provided, which includes modules or units for implementing the blind area video acquisition method described in the first aspect or any possible implementation of the first aspect.
In a third aspect, there is provided a computer-readable storage medium comprising a computer program which, when run on a computer, causes the computer to carry out the method of the first aspect or any possible implementation of the first aspect.
In a fourth aspect, there is provided a computer program product comprising a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method of the first aspect or any possible implementation of the first aspect.
It should be understood that the second aspect to the fourth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the beneficial effects achieved by the aspects and the corresponding possible implementations are similar and will not be described again.
Drawings
Fig. 1 and fig. 2 are schematic scene diagrams of a blind area video capture method provided in an embodiment of the present application;
FIG. 3 is a system diagram of a blind area video capture method suitable for use in embodiments of the present application;
FIGS. 4 and 5 are schematic views of the positions of the front monocular camera and the right blind spot camera mounted on the vehicle;
fig. 6 is a schematic flow chart of a blind area video acquisition method provided in an embodiment of the present application;
fig. 7 is a schematic block diagram of a blind area video capture device provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
In order to better understand the blind area video acquisition method provided in the embodiment of the present application, a scene suitable for the blind area video acquisition method provided in the embodiment of the present application is briefly described below with reference to fig. 1 and fig. 2.
Fig. 1 and fig. 2 are scene schematic diagrams suitable for the blind area video acquisition method provided in the embodiment of the present application. When a small vehicle travels on the road and encounters a large vehicle ahead, the driver of the small vehicle may wish to change lanes and/or overtake because the large vehicle travels relatively slowly. Taking the driving situation in China as an example, the driver's seat is in the left front row of the vehicle, and according to domestic driving habits the driver generally changes lanes and/or overtakes from the left side. As shown in fig. 1, when the large vehicle ahead travels relatively slowly, the driver's left-side blind area is small, as can be seen from fig. 1, so the driver of the small vehicle can change lanes and/or overtake from the left. In an actual driving scenario, however, there may be special cases (for example, the left lane ahead is closed for construction), so that the driver cannot change lanes and/or overtake from the left side and must do so from the right side. As shown in fig. 2, the driver chooses to change lanes and/or overtake from the right side, but because the driver sits in the left front row of the vehicle, the right-side line of sight is blocked by the large vehicle in front, as can be seen from fig. 2; the driver's right-side blind area is large, and accidents easily occur.
Similarly, in some countries the driver's seat is in the right front row of the vehicle; when the driver changes lanes and/or overtakes from the left side, there is likewise a large left-side blind area, and accidents also easily occur.
Therefore, it is desirable to provide a method for displaying the scene in the driver's blind area as a video, so that the driver can decide whether to change lanes and/or overtake according to the video, thereby avoiding accidents.
Taking the driving situation in China as an example, fig. 3 is a schematic diagram of a blind area video acquisition system provided by the embodiment of the application. It should be understood that the blind area video acquisition system shown in fig. 3 may be deployed on an automobile and may be used to execute the blind area video acquisition method provided in the embodiment of the present application.
As shown in fig. 3, the blind area video acquisition system 300 includes a vehicle body controller 301, a video processing controller 302, a multimedia controller 303, a front monocular camera 304, a blind area camera 305, a central control display screen 306, and a gateway 307. These parts may be connected through Controller Area Network (CAN) buses, and the CAN buses may be used to transmit the corresponding information. It will be appreciated that the form or format of the signals transmitted over the CAN buses connected to the different parts may differ, and the gateway may translate signals between these forms or formats for transmission to the respective parts.
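Illustratively, the role of the gateway 307 can be sketched as follows. This is a minimal sketch only; the class names, message fields, and converter interface are illustrative assumptions and are not part of the described embodiment.

```python
from dataclasses import dataclass

@dataclass
class CanMessage:
    source: str    # e.g. "body_controller"
    target: str    # e.g. "video_processing_controller"
    payload: dict  # instruction or data carried on the bus

class Gateway:
    """Translates messages between buses whose signal formats differ."""
    def __init__(self, converters):
        # converters maps (source, target) -> function rewriting the payload
        self.converters = converters

    def forward(self, msg: CanMessage) -> CanMessage:
        convert = self.converters.get((msg.source, msg.target), lambda p: p)
        return CanMessage(msg.source, msg.target, convert(msg.payload))
```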
It should be understood that in China and other countries where the driver's seat is in the left front row of the vehicle, a large right-side blind area exists when the driver changes lanes and/or overtakes from the right side, and accidents are prone to happen; the blind area camera in fig. 3 can therefore be a right-side blind area camera that captures right-side blind area video.
In some countries where the driver's seat is in the right front row of the vehicle, a large left-side blind area exists when the driver changes lanes and/or overtakes from the left side, and accidents are likewise prone to happen; the blind area camera in fig. 3 can then be a left-side blind area camera that captures left-side blind area video.
It should also be understood that the blind area camera 305 described above may be a single-side camera, that is, either the right-side blind area camera or the left-side blind area camera described above.
In addition, the front monocular camera 304 may be replaced with any one of a panoramic camera, a binocular camera, a multi-view camera, a laser radar, a millimeter wave radar, an ultrasonic radar, and the like; the blind area camera 305 may be replaced with a panoramic camera; the central control display screen 306 can be replaced by a liquid crystal instrument or other dedicated display screen; the CAN bus may also be replaced by a Local Interconnect Network (LIN) bus, a FlexRay bus, a Media Oriented Systems Transport (MOST) bus, or another bus or hard wire; the gateway may be replaced by another processor capable of converting the signals of the above parts into one another.
It should be appreciated that when the front monocular camera 304 is replaced with a lidar, the video processing controller 302 may be replaced with a lidar signal processor accordingly; when the front monocular camera 304 is replaced with a millimeter wave radar, the video processing controller 302 may be replaced with a millimeter wave radar signal processor accordingly; when the front monocular camera 304 is replaced with an ultrasonic radar, the video processing controller 302 may be replaced with an ultrasonic radar signal processor accordingly. Hereinafter, the description of the same or similar cases will be omitted for the sake of brevity.
It should be noted that in the blind area video capturing system shown in fig. 3, the vehicle body controller 301, the video processing controller 302 and the multimedia controller 303 may be three independent controllers, but in some possible implementations, the three controllers may also be integrated into one controller, and the integrated controller may implement the functions that can be implemented by the three independent controllers, which is not limited in this application.
It should be further understood that the blind area video capturing system shown in fig. 3 is only exemplary, and the blind area video capturing system may be different for vehicles with different configurations, as long as the blind area video capturing method provided by the present application can be implemented, and the present application does not limit the blind area video capturing system at all.
As an example, fig. 4 shows, taking the driving situation in China as an example, the front monocular camera and the right blind area camera of the blind area video acquisition system in fig. 3 mounted on an automobile. In fig. 4, the right blind area camera is mounted at the right front of the automobile.
As another example, fig. 5 shows, again taking the driving situation in China as an example, another way the front monocular camera and the right blind area camera of the blind area video acquisition system in fig. 3 can be mounted on an automobile. In fig. 5, the right blind area camera is mounted on the back of the right rearview mirror of the automobile. From this position the right blind area camera can capture a wider area than from the position shown in fig. 4.
It should be understood that for some countries where the driver's seat is in the right front row of the vehicle, a front monocular camera and a left blind area camera may be deployed on the vehicle, and the left blind area camera may be mounted at the left front of the vehicle or on the back of the left rearview mirror. It should also be understood that the positions of the monocular camera and the blind area camera on the vehicle are not limited to the positions shown in fig. 4 and 5; they may be installed in other positions capable of capturing the front video and the blind area video, which is not limited in the present application.
For convenience of understanding, the blind area video acquisition method provided by the present application is described in detail below with reference to fig. 6, taking the driving situation in China as an example and the blind area video acquisition system 300 shown in fig. 3 as the execution subject.
Fig. 6 is a schematic flowchart of a blind area video capture method according to an embodiment of the present application. As shown in fig. 6, the method 600 may include steps 610 through 640. The individual steps in method 600 are described in detail below.
In step 610, upon detecting that the automobile is started, an obstruction in front of the automobile is detected.
Specifically, the body controller 301 may cyclically detect whether the automobile is started after the automobile is powered on. After the automobile is detected to be started, the obstruction in front of the automobile is detected automatically, so that road condition information in front of the automobile is acquired automatically.
Optionally, before step 610, the method further comprises: the body controller 301 may cyclically detect whether the automobile is started. Currently, the energy source of an automobile may include fuel, natural gas, a battery, and the like. For an automobile powered by fuel, natural gas, or the like, the body controller 301 may cyclically detect the operating state of the engine and determine that the automobile is started when the engine is detected to have started. For a purely battery-powered or fuel-electric hybrid vehicle, the body controller 301 may cyclically detect the operating state of the motor and determine that the automobile is started when the motor is detected to have started.
For example, after detecting that the automobile is started, the body controller 301 may send an occlusion detection instruction to the gateway 307 through the CAN bus connected between the gateway 307 and the body controller 301. After receiving the occlusion detection instruction sent by the body controller 301, the gateway 307 may convert the occlusion detection instruction into a signal form or format that can be interpreted by the video processing controller 302, and the gateway 307 may then send the occlusion detection instruction to the video processing controller 302 through the CAN bus connected between the video processing controller 302 and the gateway 307.
After receiving the occlusion detection instruction sent by the gateway 307, the video processing controller 302 may control the front monocular camera 304 to shoot video through the CAN bus. The front monocular camera 304 starts capturing the front video signal in response to the control of the video processing controller 302 and may transmit the captured front video signal to the video processing controller 302 in real time. The video processing controller 302 can then perform image recognition on the front video signal sent by the front monocular camera 304 through an image recognition algorithm and detect whether an obstruction exists in front of the automobile. In this way, the obstruction in front of the automobile can be detected.
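Illustratively, the detection loop performed on the front video signal can be sketched as follows. This is a minimal sketch in which front_camera, read_frame(), and detect_objects() are hypothetical names for the camera interface and the image recognition routine, not interfaces defined by the embodiment.

```python
def obstruction_stream(front_camera, detect_objects):
    # Iterate over frames of the front video signal; for each frame, run the
    # image recognition routine and yield the contour of the detected
    # obstruction, or None when no obstruction is present in front of the car.
    while True:
        frame = front_camera.read_frame()
        yield detect_objects(frame)
```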
As previously described, the front monocular camera 304 may be replaced with any of a panoramic camera, a binocular camera, a multi-view camera, and the like.
It should be understood that, after the front monocular camera 304 is replaced by any one of the panoramic camera, the binocular camera, the multi-view camera, and the like, the process of detecting the sheltering object in front of the automobile may refer to the above processing process, and for brevity, the description is omitted here.
The front monocular camera 304 may be replaced with any one of a laser radar, a millimeter wave radar, an ultrasonic radar, and the like.
It should be understood that the above-mentioned laser radar, millimeter wave radar, and ultrasonic radar may acquire point cloud data (which may include position information and/or contour information) of an obstruction and may transmit the data to a corresponding signal processor. The laser radar corresponds to the laser radar signal processor; the millimeter wave radar corresponds to the millimeter wave radar signal processor; the ultrasonic radar corresponds to an ultrasonic radar signal processor.
Optionally, after the automobile is detected to be started, the speed of the automobile is detected, and the obstruction in front of the automobile is detected when the speed of the automobile is greater than zero.
For example, the body controller 301 may further detect the vehicle speed after detecting that the automobile is started. The vehicle speed may be detected by a vehicle speed sensor, and the current vehicle speed may be acquired by a wheel odometer, for example. When the vehicle speed is greater than zero, the obstruction in front of the automobile can be detected. It should be understood that the present application does not limit the specific implementation of obtaining the vehicle speed.
It should be understood that the two conditions for deciding whether to detect the obstruction in front of the automobile, namely whether the automobile is started and whether the vehicle speed is greater than zero, can be used separately or in combination.
For example, the body controller 301 may read the vehicle speed from the vehicle speed sensor after detecting that the automobile is started, and detect the obstruction in front of the automobile when the vehicle speed is greater than zero. That is, the body controller 301 may send the occlusion detection instruction to the gateway 307 through the CAN bus when the automobile is started and the vehicle speed is greater than zero.
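Illustratively, this combined condition can be sketched as follows; the vehicle interface and the send_instruction() callback are illustrative assumptions rather than interfaces of the system.

```python
def body_controller_loop(vehicle, send_instruction):
    # Poll the vehicle state; once the automobile is started (engine or motor
    # running) and the vehicle speed is greater than zero, issue the occlusion
    # detection instruction toward the video processing controller.
    while vehicle.powered_on():
        if vehicle.engine_or_motor_running() and vehicle.speed() > 0.0:
            send_instruction("detect_occlusion")
            return
```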
In step 620, if an obstruction is detected in front of the automobile, the area of the obstruction is calculated.
For example, when the video processing controller 302 detects that there is an obstruction in front of the automobile, the area of the obstruction may be calculated according to the position information and/or contour information of the obstruction obtained by the image recognition algorithm.
When the front monocular camera 304 is replaced with any one of a laser radar, a millimeter wave radar, an ultrasonic radar, and the like, the corresponding signal processor can calculate the area of the obstruction according to the position information and/or contour information of the obstruction sent by the radar.
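Illustratively, one possible way to obtain an area from contour information is the shoelace formula over the contour polygon. This is only a sketch, and the contour format (a list of (x, y) points) is an assumption rather than a format defined by the embodiment.

```python
def contour_area(contour):
    # Shoelace formula over a closed polygon given as a list of (x, y) points.
    area = 0.0
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```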
In step 630, if the length of time that the area of the obstruction is greater than or equal to the first threshold reaches a second threshold, blind area video capture is started.
It should be understood that the larger the area of the obstruction, the larger the blind area it causes for the driver. Therefore, when the area of the obstruction exceeds a preset threshold, it can be considered that the blind area caused by the obstruction may exceed a certain safety range and may affect the driver's safety. Accordingly, if the area of the obstruction detected in front of the automobile remains large for a sufficiently long period of time, blind area video acquisition can be started, for example by using a single-side camera or a panoramic camera to acquire the blind area video, so that the driver can conveniently observe the road conditions in the blind area.
In order to facilitate the judgment of whether blind area video acquisition needs to be started, a time threshold and an area threshold for the obstruction can be preset. For convenience of description, the area threshold is referred to as the first threshold and the time threshold is referred to as the second threshold.
For example, if the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold, the video processing controller 302 may send a blind area detection instruction to the gateway 307 through the CAN bus connected between the gateway 307 and the video processing controller 302; after receiving the blind area detection instruction sent by the video processing controller 302, the gateway 307 may convert the blind area detection instruction into a signal form or format that can be interpreted by the multimedia controller 303 and then send it to the multimedia controller 303 through the CAN bus connected between the multimedia controller 303 and the gateway 307.
After receiving the blind area detection instruction, the multimedia controller 303 may send a blind area camera start instruction to the blind area camera 305 through the CAN bus connected between the blind area camera 305 and the multimedia controller 303; after receiving the start instruction sent by the multimedia controller 303, the blind area camera 305 may start acquiring the video signal of the blind area and may send the acquired video signal of the blind area to the multimedia controller 303 in real time.
If the area of the obstruction is smaller than the first threshold, or the length of time for which the area of the obstruction is greater than or equal to the first threshold does not reach the second threshold, blind area video acquisition is not started; the obstruction in front of the automobile continues to be detected, and its area continues to be calculated whenever an obstruction is present.
It should be understood that the time length reaching the second threshold means, specifically, that the time length is greater than or equal to the second threshold; the time length not reaching the second threshold means that the time length is smaller than the second threshold.
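Illustratively, the decision of step 630 can be sketched as follows; the sampling interval dt and the function and parameter names are illustrative assumptions, and the threshold values would be calibrated for a concrete vehicle.

```python
def should_start_capture(area_samples, first_threshold, second_threshold, dt):
    # area_samples yields one obstruction-area measurement per frame; capture
    # is started once the area has stayed >= first_threshold for a length of
    # time reaching second_threshold.
    elapsed = 0.0
    for area in area_samples:
        if area >= first_threshold:
            elapsed += dt
            if elapsed >= second_threshold:
                return True            # start blind area video acquisition
        else:
            elapsed = 0.0              # condition broken, reset the timer
    return False
```

The timer is reset whenever the area falls below the first threshold, so only a continuous period at or above the first threshold triggers acquisition, matching the description of the second threshold above.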
It should be understood that determining whether to start blind area video capture according to the area of the obstruction is only one possible implementation. Based on the same concept, a person skilled in the art can make simple transformations. Such variations are intended to fall within the scope of the present application.
For example, step 620 may be replaced by: calculating the horizontal width and the vertical height of the obstruction when an obstruction is detected in front of the automobile. Step 630 may be replaced by: starting blind area video acquisition if the length of time for which the horizontal width of the obstruction is greater than or equal to a fourth threshold and the vertical height of the obstruction is greater than or equal to a fifth threshold reaches the second threshold.
In other words, blind area video acquisition is started when the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold reaches the second threshold and the length of time for which the vertical height of the obstruction is greater than or equal to the fifth threshold also reaches the second threshold.
Conversely, if the horizontal width of the obstruction is greater than or equal to the fourth threshold but its vertical height is less than the fifth threshold; or if the horizontal width of the obstruction is less than the fourth threshold but its vertical height is greater than or equal to the fifth threshold; or if the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold and the vertical height of the obstruction is greater than or equal to the fifth threshold does not reach the second threshold, the blind area caused by the obstruction is considered not to exceed a certain safety range and is unlikely to affect the driver's safety, so blind area video acquisition does not need to be started.
Still alternatively, in step 630, if the length of time that the horizontal width of the obstruction is greater than or equal to the fourth threshold reaches the sixth threshold and the length of time that the vertical height of the obstruction is greater than or equal to the fifth threshold reaches the seventh threshold, the blind area video capture is started.
Conversely, if the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold reaches the sixth threshold, but its vertical height is less than the fifth threshold; or if the horizontal width of the obstruction is less than the fourth threshold, but the length of time for which its vertical height is greater than or equal to the fifth threshold reaches the seventh threshold; or if the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold does not reach the sixth threshold, but the length of time for which its vertical height is greater than or equal to the fifth threshold reaches the seventh threshold; or if the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold reaches the sixth threshold, but the length of time for which its vertical height is greater than or equal to the fifth threshold does not reach the seventh threshold; or if the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold does not reach the sixth threshold and the length of time for which its vertical height is greater than or equal to the fifth threshold does not reach the seventh threshold, the blind area caused by the obstruction is considered not to exceed a certain safety range and is unlikely to affect the driver's safety, so blind area video acquisition does not need to be started.
It should be understood that, similar to the process of calculating the area of the obstruction described above, when the video processing controller 302 detects that an obstruction exists in front of the automobile, the horizontal width and the vertical height of the obstruction can be calculated according to the position information and/or contour information of the obstruction obtained by the image recognition algorithm; or, when the front monocular camera 304 is replaced with any one of a laser radar, a millimeter wave radar, an ultrasonic radar, and the like, the corresponding signal processor may calculate the horizontal width and the vertical height of the obstruction according to the position information and/or contour information of the obstruction sent by the radar.
It should be understood that the time length reaching the sixth threshold or the seventh threshold means, specifically, that the time length is greater than or equal to the sixth threshold or the seventh threshold; the time length not reaching the sixth threshold or the seventh threshold means that the time length is smaller than the sixth threshold or the seventh threshold.
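Illustratively, the variant with separate duration thresholds for the width and height conditions can be sketched as follows; as before, the names and the sampling interval dt are illustrative assumptions.

```python
def should_start_capture_wh(samples, fourth, fifth, sixth, seventh, dt):
    # samples yields (horizontal_width, vertical_height) per frame; capture is
    # started once the width has stayed >= fourth for a time reaching sixth
    # and the height has stayed >= fifth for a time reaching seventh.
    width_time = 0.0
    height_time = 0.0
    for width, height in samples:
        width_time = width_time + dt if width >= fourth else 0.0
        height_time = height_time + dt if height >= fifth else 0.0
        if width_time >= sixth and height_time >= seventh:
            return True                # start blind area video acquisition
    return False
```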
It should be understood that the time length threshold, the area threshold, the horizontal width threshold, and the vertical height threshold listed herein are named only for convenience of distinction, and should not constitute any limitation on the embodiments of the present application.
In step 640, the acquired blind area video is displayed.
Illustratively, after receiving the video signal of the blind area sent by the blind area camera 305, the multimedia controller 303 may display the acquired blind area video on the central control display screen 306 in any one of a floating-window, full-screen, or split-screen mode.
Alternatively, the central control display screen 306 may be replaced by a liquid crystal instrument or other dedicated display screen. The way the acquired blind area video is displayed may be the same as in the above process, and for brevity, the description is not repeated here.
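Illustratively, the choice of display mode can be sketched as follows; the DisplayMode names mirror the options above, while the display.render() call is a hypothetical interface of the display screen, not one defined by the embodiment.

```python
from enum import Enum

class DisplayMode(Enum):
    FLOATING_WINDOW = "floating_window"
    FULL_SCREEN = "full_screen"
    SPLIT_SCREEN = "split_screen"

def show_blind_area_frame(display, frame, mode=DisplayMode.FLOATING_WINDOW):
    # display stands for the central control display screen (or an instrument
    # cluster); frame is one frame of the blind area video from the camera.
    display.render(frame, layout=mode.value)
```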
Optionally, if the length of time for which the area of the obstruction is smaller than the first threshold reaches a third threshold, the display of the acquired blind area video is ended, or both the acquisition and the display of the blind area video are ended.
It will be appreciated that the smaller the area of the obstruction, the smaller the blind area it causes for the driver. Therefore, when the area of the obstruction is smaller than the first threshold, the blind area caused by the obstruction may be considered unlikely to have a significant impact on the driver's safety. Accordingly, if the area of the obstruction detected in front of the automobile remains small for a sufficiently long period of time, the display of the blind area video may be ended; for example, the multimedia controller 303 controls the central control display screen 306 to end the display of the acquired blind area video.
It should also be appreciated that in this manner the blind area camera 305 may continue to capture the video signal of the blind area and send it to the multimedia controller 303 in real time, but the multimedia controller 303 no longer displays the acquired blind area video on the central control display screen 306. This can reduce the influence of the display of the blind area video on the display of other content on the central control display screen 306.
In another possible implementation, when the length of time that the area of the blocking object is smaller than the first threshold reaches the third threshold, the blind area camera 305 may end the capturing of the video signal of the blind area and no longer send the video signal to the multimedia controller 303 in real time, so that the multimedia controller 303 may end the displaying of the captured blind area video on the central control display screen 306.
It should be understood that the time length reaching the third threshold means, specifically, that the time length is greater than or equal to the third threshold; the time length not reaching the third threshold means that the time length is smaller than the third threshold.
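Illustratively, the end condition can be sketched as follows; whether capture is stopped together with the display corresponds to the two alternatives above, and the names are again illustrative assumptions.

```python
def end_actions(area_samples, first_threshold, third_threshold, dt,
                stop_capture_too=False):
    # Once the blind area video is being shown, end the display (and optionally
    # the capture as well) when the obstruction area has stayed below the first
    # threshold for a length of time reaching the third threshold.
    below_time = 0.0
    for area in area_samples:
        below_time = below_time + dt if area < first_threshold else 0.0
        if below_time >= third_threshold:
            return ["stop_display", "stop_capture"] if stop_capture_too else ["stop_display"]
    return []                          # keep displaying the blind area video
```

Resetting below_time whenever the area rises back to the first threshold gives the same continuous-duration behaviour as the start condition, only in the opposite direction.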
Optionally, corresponding to the above-mentioned alternative implementation of step 620 and step 630, the method further comprises:
If the length of time for which the horizontal width of the obstruction is smaller than the fourth threshold and/or the vertical height of the obstruction is smaller than the fifth threshold reaches an eighth threshold, the acquisition of the blind area video is ended.
Similar to the above processing, the description is omitted here for brevity.
It should be understood that the time length reaching the eighth threshold means, specifically, that the time length is greater than or equal to the eighth threshold; the time length not reaching the eighth threshold means that the time length is smaller than the eighth threshold.
Based on the above scheme, by detecting the obstruction in front of the automobile and calculating its area, the acquisition and display of the blind area video are started automatically when the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold, or when the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold and the vertical height of the obstruction is greater than or equal to the fifth threshold reaches the second threshold, or when the length of time for which the horizontal width of the obstruction is greater than or equal to the fourth threshold reaches the sixth threshold and the length of time for which the vertical height of the obstruction is greater than or equal to the fifth threshold reaches the seventh threshold. In this way, when the obstruction is large enough to possibly affect the driver's safe driving, the driver can see objects within the blind area of his or her line of sight and can make reasonable driving decisions based on the blind area video, thereby avoiding accidents. When the driver has completed the lane change and/or overtaking, or other circumstances cause the area of the obstruction to change from being greater than or equal to the first threshold to being smaller than the first threshold, or cause the horizontal width of the obstruction to change from being greater than or equal to the fourth threshold to being smaller than the fourth threshold, or cause the vertical height of the obstruction to change from being greater than or equal to the fifth threshold to being smaller than the fifth threshold, the acquisition and/or display of the blind area video can be ended automatically, thereby reducing the influence of the display of the blind area video on the display of other content on the display screen.
Fig. 7 is a schematic block diagram of a blind area video capture device provided in an embodiment of the present application. The apparatus 700 shown in fig. 7 may be deployed in a blind spot video capture system 300 as shown in fig. 3 to implement the functions of the body controller 301, the video processing controller 302 and the multimedia controller 303.
Illustratively, as shown in fig. 7, the apparatus 700 may include: an obstruction detection unit 710, an acquisition unit 720, and a vehicle state detection unit 730. The obstruction detection unit 710 may correspond to the video processing controller 302, the acquisition unit 720 may correspond to the multimedia controller 303, and the vehicle state detection unit 730 may correspond to the body controller 301. As described above, the body controller 301, the video processing controller 302, and the multimedia controller 303 may be independent of each other or may be integrated together. In other words, the apparatus 700 may be distributed over a plurality of different devices to perform the corresponding functions, or may be implemented by a single device.
Specifically, the obstruction detection unit 710 can be used for detecting an obstruction in front of the automobile after the automobile is started, and for calculating the area of the obstruction when an obstruction is detected in front of the automobile; the acquisition unit 720 may be configured to start blind area video acquisition when the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold.
Optionally, the acquisition unit 720 may be further configured to end the display of the blind area video, or end both the acquisition and the display of the blind area video, when the length of time for which the area of the obstruction is smaller than the first threshold reaches a third threshold.
Optionally, the obstruction detection unit 710 may be further configured to detect the speed of the automobile after detecting that the automobile is started, and to detect the obstruction in front of the automobile when the speed of the automobile is greater than zero.
Optionally, the obstruction detection unit 710 may be specifically configured to control one of a monocular camera, a panoramic camera, a binocular camera, a multi-view camera, a laser radar, a millimeter-wave radar, or an ultrasonic radar to detect the obstruction in front of the automobile.
Optionally, the acquisition unit 720 may be specifically configured to control a single-side camera or a panoramic camera to acquire the blind area video.
Optionally, the acquisition unit 720 may be configured to output the acquired blind area video to a display screen, so as to display the acquired blind area video through the display screen.
Optionally, the vehicle state detection unit 730 may be used to detect the operating state of the engine or the motor, and to determine that the automobile is started when the engine or the motor is detected to have started.
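Illustratively, the division into units can be sketched as the following interfaces; the class and method names are illustrative assumptions used only to show the division of responsibilities, not an API defined by the embodiment.

```python
class VehicleStateDetectionUnit:      # corresponds to the body controller 301
    def is_started(self) -> bool: ...
    def speed(self) -> float: ...

class ObstructionDetectionUnit:       # corresponds to the video processing controller 302
    def detect_obstruction(self): ... # returns a contour, or None if nothing is found
    def obstruction_area(self, contour) -> float: ...

class AcquisitionUnit:                # corresponds to the multimedia controller 303
    def start_capture(self) -> None: ...
    def stop_capture(self) -> None: ...
    def display(self, frame) -> None: ...
```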
It should be understood that the division of the units in the embodiments of the present application is illustrative, and is only one logical function division, and there may be other division manners in actual implementation. In addition, functional units in the embodiments of the present application may be integrated into one processor, may exist alone physically, or may be integrated into one module by two or more units. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The present application further provides a computer program product, the computer program product comprising: a computer program (also referred to as code, or instructions), which when executed, causes a computer to perform the method of the embodiment shown in fig. 6.
The present application also provides a computer-readable storage medium having stored thereon a computer program (also referred to as code, or instructions). Which when executed, causes a computer to perform the method of the embodiment shown in fig. 6.
It should be understood that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
It will also be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate SDRAM, enhanced SDRAM, Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
As used in this specification, the terms "unit," "module," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps (step) described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the functions of the functional units may be fully or partially implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions (programs). The procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part when the computer program instructions (programs) are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A blind area video acquisition method is characterized by comprising the following steps:
detecting an obstruction in front of the automobile after the automobile is detected to be started;
calculating the area of the obstruction when an obstruction is detected in front of the automobile;
starting blind area video acquisition if the length of time for which the area of the obstruction is greater than or equal to a first threshold reaches a second threshold;
and displaying the acquired blind area video.
2. The method of claim 1, wherein the method further comprises:
if the length of time for which the area of the obstruction is smaller than the first threshold reaches a third threshold, ending the display of the blind area video, or ending both the acquisition and the display of the blind area video.
3. The method of claim 1 or 2, wherein detecting the obstruction in front of the vehicle after detecting the vehicle is started comprises:
detecting the speed of the automobile after detecting that the automobile is started;
and detecting the obstruction in front of the automobile when the speed of the automobile is greater than zero.
4. The method of claim 1, wherein detecting the obstruction in front of the automobile comprises:
detecting the obstruction in front of the automobile by using one of a monocular camera, a panoramic camera, a binocular camera, a multi-view camera, a laser radar, a millimeter-wave radar, or an ultrasonic radar.
5. The method of claim 1, wherein starting blind area video acquisition if the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold comprises:
acquiring the blind area video by using a single-side camera or a panoramic camera if the length of time for which the area of the obstruction is greater than or equal to the first threshold reaches the second threshold.
6. The method of claim 1, wherein displaying the acquired blind area video comprises:
displaying the acquired blind area video on the display screen in any one of a floating-window, full-screen, or split-screen mode.
7. The method of claim 1, wherein the method further comprises:
detecting the operating state of an engine or a motor;
and determining that the automobile is started when the engine or the motor is detected to have started.
8. A blind area video capture system for implementing the method of any one of claims 1 to 7.
9. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.
10. A computer program product, comprising a computer program which, when executed, causes a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110401703.5A CN112937446A (en) | 2021-04-14 | 2021-04-14 | Blind area video acquisition method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110401703.5A CN112937446A (en) | 2021-04-14 | 2021-04-14 | Blind area video acquisition method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112937446A true CN112937446A (en) | 2021-06-11 |
Family
ID=76232610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110401703.5A Pending CN112937446A (en) | 2021-04-14 | 2021-04-14 | Blind area video acquisition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112937446A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202686191U (en) * | 2012-06-11 | 2013-01-23 | 浙江吉利汽车研究院有限公司杭州分公司 | Detection system of blind zone in front of automobile |
CN103895588A (en) * | 2014-03-31 | 2014-07-02 | 长城汽车股份有限公司 | Dead zone detecting feedback method and system |
CN107554430A (en) * | 2017-09-20 | 2018-01-09 | 京东方科技集团股份有限公司 | Vehicle blind zone view method, apparatus, terminal, system and vehicle |
CN110775028A (en) * | 2019-10-29 | 2020-02-11 | 长安大学 | System and method for detecting automobile windshield shelters and assisting in driving |
WO2021004077A1 (en) * | 2019-07-09 | 2021-01-14 | 华为技术有限公司 | Method and apparatus for detecting blind areas of vehicle |
- 2021-04-14: CN application CN202110401703.5A filed in China; patent CN112937446A (en); status: active, pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202686191U (en) * | 2012-06-11 | 2013-01-23 | 浙江吉利汽车研究院有限公司杭州分公司 | Detection system of blind zone in front of automobile |
CN103895588A (en) * | 2014-03-31 | 2014-07-02 | 长城汽车股份有限公司 | Dead zone detecting feedback method and system |
CN107554430A (en) * | 2017-09-20 | 2018-01-09 | 京东方科技集团股份有限公司 | Vehicle blind zone view method, apparatus, terminal, system and vehicle |
WO2021004077A1 (en) * | 2019-07-09 | 2021-01-14 | 华为技术有限公司 | Method and apparatus for detecting blind areas of vehicle |
CN110775028A (en) * | 2019-10-29 | 2020-02-11 | 长安大学 | System and method for detecting automobile windshield shelters and assisting in driving |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9569970B2 (en) | Back-sideways alarming system for vehicle and alarming control method thereof | |
US20180167551A1 (en) | Vehicle control system utilizing multi-camera module | |
US20120239269A1 (en) | Method for recognizing a turn-off maneuver | |
JP2009081666A (en) | Vehicle periphery monitoring apparatus and image displaying method | |
JPWO2018235274A1 (en) | Parking control method and parking control device | |
EP3992046A1 (en) | Vehicle driving control method and device | |
EP3503531B1 (en) | Image display apparatus | |
CN111942282A (en) | Vehicle and driving blind area early warning method, device and system thereof and storage medium | |
WO2014155954A1 (en) | Vehicle display control device | |
CN210062820U (en) | Vehicle electronic image display equipment | |
KR20160051321A (en) | Active side view system of vehicle | |
TWI789523B (en) | Solid-state imaging device, imaging device, and control method for solid-state imaging device | |
CN112214026A (en) | Driving obstacle detection method and device, vehicle and readable medium | |
CN111497745A (en) | Display system, travel control device, display control method, and storage medium | |
CN112937446A (en) | Blind area video acquisition method and system | |
JP2020120268A (en) | Display system, travel control device, display control method, and program | |
US11100353B2 (en) | Apparatus of controlling region of interest of image and method for controlling the same | |
KR102010407B1 (en) | Smart Rear-view System | |
CN112758099B (en) | Driving assistance method and device, computer equipment and readable storage medium | |
US20230001922A1 (en) | System providing blind spot safety warning to driver, method, and vehicle with system | |
CN112700658B (en) | System for image sharing of a vehicle, corresponding method and storage medium | |
CN212086348U (en) | Driving auxiliary assembly and vehicle | |
CN116419072A (en) | Vehicle camera dynamics | |
CN115393980A (en) | Recording method and device for automobile data recorder, vehicle and storage medium | |
CN115123209A (en) | Driving support device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | |
Application publication date: 20210611 |