CN111845557A - Safety early warning method and system for vehicle driving and related device

Safety early warning method and system for vehicle driving and related device

Info

Publication number
CN111845557A
Authority
CN
China
Prior art keywords
vehicle
image data
blind area
moving
current
Prior art date
Legal status
Granted
Application number
CN202010716968.XA
Other languages
Chinese (zh)
Other versions
CN111845557B (en)
Inventor
林基业
Current Assignee
Shenzhen Jianchuang Electronics Co Ltd
Original Assignee
Shenzhen Jianchuang Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jianchuang Electronics Co Ltd
Priority to CN202010716968.XA
Publication of CN111845557A
Application granted
Publication of CN111845557B
Legal status: Active (current)


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60Q: ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q 9/00: Arrangements or adaptations of signal devices not provided for in one of the preceding main groups, e.g. haptic signalling
    • B60Q 9/008: Arrangements or adaptations of signal devices not provided for in one of the preceding main groups, e.g. haptic signalling for anti-collision purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/223: Analysis of motion using block-matching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed circuit television systems, i.e. systems in which the signal is not broadcast
    • H04N 7/181: Closed circuit television systems for receiving images from a plurality of remote sources
    • H04N 7/188: Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Abstract

The present application relates to the technical field of vehicle applications and discloses a safety early warning method, system, and related device for vehicle driving. The method comprises the following steps: an image processing device acquires image data collected in a vehicle blind area; acquires a first speed of a moving object in the image data; and, if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sends the current image data collected in the vehicle blind area to the vehicle or to terminal equipment associated with the vehicle for safety early warning. In this way, the driving safety of the vehicle can be improved.

Description

Safety early warning method and system for vehicle driving and related device
Technical Field
The application relates to the technical field of vehicle application, in particular to a safety early warning method and system for vehicle driving and a related device.
Background
For a vehicle to be driven safely on the road, the driver's operating experience is not the only factor: another major hidden danger is the blind area of the vehicle's rearview mirrors and the difficulty of accurately judging, through the mirrors, the distance between the vehicle and whatever is behind it. To make driving safer, present-day cars are fitted with three rearview mirrors: an interior rearview mirror, a left exterior rearview mirror, and a right exterior rearview mirror. For the driver, however, blind areas always exist because of factors such as the limits of the human field of view, the mirror angles, and changes in the speed and direction of the vehicle while driving, which creates potential safety hazards for the driver and for other vehicles on the road.
At present, some vehicles carry a rearview-mirror blind-area early warning system that uses radar or microwaves to detect whether another vehicle is approaching from the rear side and reminds the driver with a warning sound or a warning lamp. Such a system gives only a vague reminder: to take in the situation, the driver has to turn his or her line of sight toward one side mirror and still cannot accurately judge the position of the vehicle behind. This easily distracts the driver's attention, leaves the road conditions on the other side of the vehicle unattended, and thus introduces new safety hazards.
Disclosure of Invention
The technical problem mainly addressed by the present application is to provide a safety early warning method, system, and related device for vehicle driving that can improve the safety of vehicle driving.
The technical scheme adopted by the application is to provide a safety early warning method for vehicle driving, and the method comprises the following steps: the image processing device acquires image data collected in a vehicle blind area; acquiring a first speed of a moving object in image data; and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected in the vehicle blind area to the vehicle or terminal equipment associated with the vehicle to perform safety early warning.
Wherein the vehicle blind area comprises a first blind area; wherein the first blind area is located at the rear side of the vehicle; acquiring image data collected in a vehicle blind area, comprising: acquiring first image data collected in a first blind area; acquiring a first velocity of a moving object in image data, comprising: acquiring a first distance between a first moving object in the first image data and the vehicle at a previous moment, and acquiring a second distance between the first moving object and the vehicle at a current moment; and calculating a first speed of the first moving object according to the first distance and the second distance.
Wherein, acquiring a first distance between the first moving object in the first image data and the vehicle at a previous moment and acquiring a second distance between the first moving object and the vehicle at a current moment comprises: detecting whether a first moving object exists in the first image data; if yes, a first distance between the first moving object in the first image data and the vehicle at the previous moment is acquired, and a second distance between the first moving object and the vehicle at the current moment is acquired.
Wherein detecting whether the first moving object exists in the first image data comprises: acquiring a plurality of consecutive image frames in first image data; determining whether a target moving object exists in a reference region in a plurality of consecutive image frames; and if so, confirming that the first moving object exists in the first image data.
Wherein determining whether a target moving object exists in a reference region in a plurality of consecutive image frames comprises: determining a current image frame of the plurality of consecutive image frames and determining a plurality of first target areas in the current image frame; acquiring an overlapping area of the first target area and the reference area as a first effective area; acquiring a first pixel number, being the number of pixels in the first effective area whose gray value is greater than a preset gray value, and acquiring a second pixel number of the first effective area; and if a first ratio between the first pixel number and the second pixel number is larger than a first reference threshold value, determining that the target moving object exists in the reference area.
Wherein, if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, determining that a target moving object exists in the reference region, includes: and if the first ratio between the first pixel number and the second pixel number is larger than the first reference threshold value, determining a next image frame in the plurality of continuous image frames, determining a plurality of first target areas in the next image frame, and performing the step of acquiring the overlapping area of the first target areas and the reference area again to serve as the first effective area.
Wherein, if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, determining that a target moving object exists in the reference region, includes: if a first ratio between the first pixel quantity and the second pixel quantity is larger than a first reference threshold value, acquiring first position information of a first target area in a current image frame; acquiring second position information of a first target area in a next image frame; determining a first direction of the first target area relative to the vehicle based on the first location information and the second location information; and if the first direction is the same as the second direction of the vehicle, determining that the target moving object exists in the reference area.
Wherein determining that the target moving object exists in the reference region if the first direction is the same as the second direction of the vehicle comprises: if the first direction is the same as the second direction of the vehicle, determining a plurality of second target areas in the current image frame; the second target area is an area formed by pixel points representing unnatural edges in the current image frame; acquiring overlapping areas of a plurality of second target areas and the reference area to serve as second effective areas; acquiring a third pixel number of the second effective area in a plurality of second target areas, and acquiring a fourth pixel number of the plurality of second target areas; and if a second ratio between the third pixel quantity and the fourth pixel quantity is larger than a second reference threshold value, determining that the target moving object exists in the reference area.
The vehicle blind areas further comprise a second blind area and a third blind area; the second blind area and the third blind area are respectively positioned at the left side and the right side of the vehicle; the reference area comprises a first reference area, a second reference area and a third reference area; the first reference area is located right behind the vehicle, and the second reference area and the third reference area are located on two sides of the first reference area respectively; determining whether a target moving object is present in a reference region in a plurality of consecutive image frames, comprising: judging whether a first target moving object exists in a first reference area in a plurality of continuous image frames, and if so, confirming that a first moving object exists in first image data; and/or, determining whether a second target moving object exists in a second reference region in a plurality of consecutive image frames; if yes, confirming that a first moving object exists in the first image data, and acquiring second image data of a second blind area; and/or, determining whether a third target moving object exists in a third reference region in a plurality of consecutive image frames; and if so, confirming that the first moving object exists in the first image data, and acquiring third image data of a third blind area.
The first reference area comprises a first reference sub-area, a second reference sub-area and a third reference sub-area, and the first reference sub-area, the second reference sub-area and the third reference sub-area sequentially correspond to the vehicle rear side area; judging whether a first target moving object exists in a first reference area in a plurality of continuous image frames, if so, confirming that a first moving object exists in first image data, and the method comprises the following steps: judging whether a first target moving object exists in a first reference subarea, a second reference subarea and a third reference subarea in a plurality of continuous image frames, and if the first target moving object exists in the third reference subarea, sending a first prompt tone and current image data collected in a first blind area to a vehicle or terminal equipment associated with the vehicle for safety early warning; if the first target moving object exists in the second reference sub-area, sending a second prompt tone and current image data collected in the first blind area to the vehicle or terminal equipment associated with the vehicle for safety early warning; if the first target moving object exists in the first reference sub-area, sending a third prompt tone and current image data collected in the first blind area to a vehicle or terminal equipment associated with the vehicle for safety early warning; the early warning grade of the third prompt tone is higher than that of the second prompt tone, and the early warning grade of the second prompt tone is higher than that of the first prompt tone.
After confirming that the first moving object exists in the first image data and acquiring the second image data of the second blind area, the method further comprises the following steps: when first steering data of the vehicle are acquired and the first steering data correspond to a second blind area, acquiring a second speed of a second moving object in second image data; if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording; if the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, sending the current image data acquired in the second blind area and the third blind area to the vehicle or terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle can display the current image data acquired in the second blind area and the third blind area in a two-division mode on a display screen, and respectively record the current image data of the two-division mode; or sending the current image data collected in the first blind area, the second blind area and the third blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle divides the current image data collected in the first blind area, the second blind area and the third blind area into three parts on the display screen and records the current image data respectively.
If the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind area to the vehicle or the terminal device associated with the vehicle, so that the vehicle or the terminal device associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording, wherein the screen recording comprises: and if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, determining whether the distance between the second moving object and the vehicle is less than a preset distance, and if so, sending the current image data collected in the second blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording.
After confirming that the first moving object exists in the first image data and acquiring third image data of a third blind area, the method comprises the following steps: when second steering data of the vehicle is acquired and the second steering data corresponds to a third blind area, acquiring a third speed of a third moving object in third image data; and if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind area to the vehicle or the terminal equipment associated with the vehicle.
If the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind area to the vehicle or the terminal device associated with the vehicle, including: and if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, determining whether the distance between the third moving object and the vehicle is less than a preset distance, and if so, sending the current image data collected in the third blind area to the vehicle or the terminal equipment associated with the vehicle.
The vehicle blind areas comprise a first blind area, a second blind area and a third blind area; the first blind area is located on the rear side of the vehicle, and the second blind area and the third blind area are located on the left side and the right side of the vehicle respectively; the method further comprises the following steps: when a door opening signal of the vehicle is acquired, whether a fourth moving object exists in a second blind area or a third blind area corresponding to the door is confirmed, if yes, a fourth prompt tone and current image data collected by the corresponding second blind area or the third blind area are sent to the vehicle or terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment displays the current image data on a corresponding display screen and controls a loudspeaker of the vehicle or the terminal equipment to play the fourth prompt tone, and early warning prompt is performed on personnel in the vehicle.
Wherein, the method also comprises: and when the current gear of the vehicle is a preset gear, determining whether a fifth moving object exists in the second blind area and/or the third blind area, if so, sending a fifth prompt tone and current image data collected by the corresponding second blind area and/or the third blind area to the vehicle or terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment displays the current image data on a corresponding display screen and controls a loudspeaker of the vehicle or the terminal equipment to play the fifth prompt tone, and early warning prompt is performed on personnel in the vehicle.
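A hedged sketch of how these two event-driven checks (a door being opened, or the vehicle being placed in a preset gear such as reverse) might dispatch their warnings; the moving-object detection and the messaging layer are abstracted behind callbacks, and every name below is illustrative rather than part of the disclosure:

```python
from typing import Callable, List, Optional

def on_vehicle_event(event: str,
                     door_side: Optional[str],
                     has_moving_object: Callable[[str], bool],
                     send_warning: Callable[[str, List[str]], None]) -> None:
    """event: 'door_open' or 'preset_gear'; door_side: 'second blind area' or
    'third blind area' for the opened door, None for a gear event.
    send_warning delivers a prompt tone plus the current image data of the
    listed blind areas to the vehicle or its associated terminal equipment."""
    if event == "door_open" and door_side is not None:
        if has_moving_object(door_side):
            send_warning("fourth prompt tone", [door_side])
    elif event == "preset_gear":
        areas = [a for a in ("second blind area", "third blind area")
                 if has_moving_object(a)]
        if areas:
            send_warning("fifth prompt tone", areas)
```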
Another technical scheme adopted by the application is to provide a safety early warning method for vehicle driving, which comprises the following steps: the vehicle-mounted device receives current image data, sent by the image processing device, that is collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area, and the current image data collected in the vehicle blind area is sent by the image processing device when it acquires the image data collected in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device; and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on the display screen of the vehicle-mounted device.
Wherein, the method also comprises: and if the current image data acquired in the vehicle blind area is the current image data acquired in the second blind area, displaying the current image data on a display screen of the vehicle-mounted device.
Wherein, the method further comprises: if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, displaying the current image data on a display screen of the vehicle-mounted device; if the current image data collected in the vehicle blind areas are the current image data collected in the first blind area, the second blind area and the third blind area, the current image data collected in the first blind area, the second blind area and the third blind area are simultaneously displayed on the display screen of the vehicle-mounted device.
Wherein, the method also comprises: when a door opening signal of the vehicle is acquired, the opening signal is sent to an image processing device, so that the image processing device confirms whether a sixth moving object exists in a second blind area or a third blind area corresponding to the door, and if so, a sixth prompt tone and current image data collected by the corresponding second blind area or the third blind area are sent to a vehicle-mounted device; and after receiving the sixth prompt tone and the current image data collected by the corresponding second blind area or third blind area, displaying the current image data on a display screen of the vehicle-mounted device and controlling the vehicle or a loudspeaker of the vehicle-mounted device to play the sixth prompt tone so as to perform early warning prompt on personnel in the vehicle.
Wherein, the method also comprises: receiving a seventh prompt tone sent by the image processing device and current image data collected by the second blind area and/or the third blind area, wherein the seventh prompt tone is generated by the image processing device when the current gear of the vehicle is a preset gear and whether a seventh moving object exists in the second blind area and/or the third blind area is confirmed; and displaying the current image data acquired by the second blind area and/or the third blind area on a display screen of the vehicle-mounted device and controlling the vehicle or a loudspeaker of the vehicle-mounted device to play a seventh prompt tone so as to give an early warning prompt to personnel in the vehicle.
The vehicle-mounted device receives the current image data collected in the vehicle blind area and sent by the image processing device, and the method further comprises the following steps: the vehicle-mounted device receives current image data and preset parameters which are sent by the image processing device and collected in a vehicle blind area; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying on a display screen of the vehicle-mounted device, wherein the displaying comprises: and if the current image data acquired in the vehicle blind area is the current image data acquired in the first blind area, displaying the current image data acquired in the first blind area on a display screen of the vehicle-mounted device based on preset parameters, recording the current image data, and storing the recorded current image data to a server.
The vehicle-mounted device comprises a sound pickup for collecting environmental sounds of a vehicle;
Displaying the current image data collected in the first blind area on the display screen of the vehicle-mounted device, recording the current image data, and storing the recorded current image data to the server comprises: displaying the current image data collected in the first blind area on the display screen of the vehicle-mounted device, performing screen recording of the display screen, collecting the current environmental sound of the vehicle through the sound pickup, and storing the recorded image data and the current environmental sound to the server.
Another technical scheme adopted by the application is to provide a safety early warning method for vehicle driving, which comprises the following steps: the mobile terminal receives current image data, sent by the image processing device, that is collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area, and the current image data collected in the vehicle blind area is sent by the image processing device when it acquires the image data collected in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal; and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on the display screen of the mobile terminal.
Wherein, the method also comprises: and if the current image data collected in the vehicle blind area is the current image data collected in the second blind area, displaying the current image data on a display screen of the mobile terminal.
Wherein, the method also comprises: receiving an eighth prompt tone sent by the image processing device and current image data collected in the second blind area and/or the third blind area; and displaying the current image data collected in the second blind area or the third blind area on a display screen of the mobile terminal and controlling a loudspeaker to play the eighth prompt tone so as to give an early warning prompt to personnel in the vehicle; wherein the eighth prompt tone is generated when the vehicle-mounted device acquires a door opening signal of the vehicle and sends the opening signal to the image processing device, or when the current gear of the vehicle is a preset gear, and the image processing device confirms that an eighth moving object exists in the current image data collected in the second blind area or the third blind area.
Wherein, the method also comprises: if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, displaying the current image data on a display screen; if the current image data collected in the vehicle blind areas are the current image data collected in the first blind area, the second blind area and the third blind area, the current image data collected in the first blind area, the second blind area and the third blind area are simultaneously displayed on a display screen of the mobile terminal.
The mobile terminal receives the current image data collected in the blind area of the vehicle sent by the image processing device, and further comprises: the mobile terminal receives current image data and preset parameters which are sent by the image processing device and collected in a vehicle blind area; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal, wherein the displaying comprises the following steps: and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data collected in the first blind area on a display screen of the mobile terminal based on preset parameters, recording the current image data, and storing the recorded current image data to a server.
The method for displaying the current image data collected by the first blind area on the display screen of the mobile terminal, recording the current image data and storing the recorded current image data to the server includes: the method comprises the steps of displaying current image data collected by a first blind area on a display screen of the mobile terminal, carrying out screen recording on the display screen, collecting current environment sound of a vehicle through a sound pickup of vehicle-mounted equipment, and storing the recorded current image data and the current environment sound to a server.
Wherein, the method also comprises: and responding to the first touch instruction, and sending first setting parameters to the vehicle-mounted device and/or the image processing device so that the vehicle-mounted device and/or the image processing device can carry out setting based on the first setting parameters.
Wherein, the method also comprises: responding to a second touch instruction, and acquiring historical image data from a local storage or a server; and displaying the historical image data.
Another technical solution adopted by the present application is to provide an image processing apparatus, including a processor and a memory connected to the processor; wherein the memory is used for storing program data and the processor is used for executing the program data so as to realize the method implemented by the image processing device.
Another technical scheme adopted by the application is to provide a vehicle-mounted device, which comprises a processor and a memory connected with the processor; the memory is used for storing program data, and the processor is used for executing the program data so as to realize the method implemented by the vehicle-mounted device.
Another technical solution adopted by the present application is to provide a mobile terminal, which includes a processor and a memory connected to the processor; wherein the memory is configured to store program data and the processor is configured to execute the program data to implement the method implemented by the mobile terminal provided above.
Another technical solution adopted by the present application is to provide a readable storage medium, which is used for storing program data, and when the program data is executed by a processor, the program data is used for implementing the method provided by any one of the above aspects.
Another technical scheme adopted by the application is to provide a safety early warning system for vehicle driving, which comprises an image processing device, a vehicle-mounted device and a mobile terminal; wherein the image processing device is the image processing device described above, the vehicle-mounted device is the vehicle-mounted device described above, and the mobile terminal is the mobile terminal described above.
The beneficial effects of this application are as follows. Different from the prior art, in the safety early warning method for vehicle driving the image processing device acquires image data collected in a vehicle blind area; acquires a first speed of a moving object in the image data; and, if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sends the current image data collected in the vehicle blind area to the vehicle or to terminal equipment associated with the vehicle for safety early warning. In this way, on the one hand, the problem that existing blind-area early warning schemes cannot give accurate, multi-side coordinated warnings is solved; on the other hand, vehicle driving safety can be improved and the driver's experience enhanced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating a first embodiment of a safety warning method for vehicle driving provided by the present application;
FIG. 2 is a schematic flow chart diagram showing details of step 12 in FIG. 1 provided herein;
FIG. 3 is a schematic flow chart diagram illustrating a second embodiment of the safety early warning method for vehicle driving provided by the present application;
FIG. 4 is a schematic flow chart diagram illustrating the step 32 of FIG. 3 provided herein;
FIG. 5 is a detailed flow chart of step 322 of FIG. 4 provided herein;
FIG. 6 is a schematic flow chart diagram illustrating a third embodiment of a safety precaution method for vehicle driving provided by the present application;
FIG. 7 is a schematic flowchart of a fourth embodiment of a safety precaution method for vehicle driving provided by the present application;
FIG. 8 is a schematic flow chart diagram illustrating a fifth embodiment of a safety precaution method for vehicle driving provided by the present application;
FIG. 9 is a display schematic of a display device provided herein;
FIG. 10 is a further display schematic of a display device provided herein;
FIG. 11 is a schematic flow chart diagram illustrating a sixth embodiment of a safety precaution method for vehicle driving provided by the present application;
FIG. 12 is a first comparative illustration of a lane marking provided herein;
FIG. 13 is a second comparative illustration of a lane marking provided herein;
fig. 14 is a schematic view of an application scenario of the safety warning method for vehicle driving provided in the present application;
FIG. 15 is a schematic flow chart diagram illustrating a seventh embodiment of the safety early warning method for vehicle driving provided by the present application;
fig. 16 is a schematic flowchart of an eighth embodiment of a safety warning method for vehicle driving according to the present application;
fig. 17 is a schematic view of a first display interface of a mobile terminal in the safety warning method for vehicle driving provided by the present application;
fig. 18 is a schematic view of a second display interface of the mobile terminal in the safety warning method for vehicle driving provided by the present application;
fig. 19 is a schematic view of a third display interface of the mobile terminal in the safety warning method for vehicle driving provided by the present application;
FIG. 20 is a schematic structural diagram of an embodiment of an image processing apparatus provided in the present application;
FIG. 21 is a schematic structural diagram of an embodiment of an in-vehicle device provided by the present application;
FIG. 22 is a block diagram illustrating an embodiment of a mobile terminal provided herein;
FIG. 23 is a schematic diagram of an embodiment of a readable storage medium provided herein;
fig. 24 is a schematic structural diagram of an embodiment of a safety warning system for vehicle driving according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of a safety warning method for vehicle driving according to the present application. The present embodiment is applied to an image processing apparatus provided in a vehicle. The method comprises the following steps:
step 11: and acquiring image data collected in the vehicle blind area.
In some embodiments, a vehicle driven by a driver includes a first blind area, a second blind area, and a third blind area. Wherein the first blind area is located behind the vehicle. The second blind area and the third blind area are respectively located at the left and right sides of the vehicle and along the rear area of the vehicle.
The image processing device comprises an image acquisition device: the image data is collected by the image acquisition device and transmitted to the image processing device. For example, a first image acquisition device is arranged at the rear of the vehicle, e.g. near the rear license plate, to collect the image data of the first blind area; a second image acquisition device is mounted on the left side of the vehicle, e.g. on the left rearview mirror, to collect the image data of the second blind area; and a third image acquisition device is mounted on the right side of the vehicle, e.g. on the right rearview mirror, to collect the image data of the third blind area.
Step 12: a first velocity of a moving object in image data is acquired.
In some embodiments, feature recognition is performed on the image data to identify the different classes of objects in it. For example, when the vehicle is driven on a road, the image data may contain roadside plants, street lamps, pedestrians, other vehicles on the road, and the like. Among these, vehicles and pedestrians can be identified as moving objects.
In some embodiments, the moving object is located behind the vehicle if the image data is image data from a first blind area, the moving object is located behind the left side of the vehicle if the image data is image data from a second blind area, and the moving object is located behind the right side of the vehicle if the image data is image data from a third blind area.
Referring to fig. 2, the first dead zone is taken as an example for explanation:
step 121: a first distance between the first moving object in the first image data and the vehicle at a previous moment is acquired, and a second distance between the first moving object and the vehicle at a current moment is acquired.
Prior to step 121, first image data acquired at a first blind spot needs to be acquired.
Optionally, the first image data includes a plurality of image frames; whether a first moving object exists in each image frame is detected, and if so, that image frame is determined to be an effective image frame. A plurality of effective image frames is thus obtained; the second distance between the first moving object and the vehicle is taken from the current effective image frame, and the first distance is taken from the effective image frame preceding the current one.
Step 122: and calculating a first speed of the first moving object according to the first distance and the second distance.
It is understood that the first speed of the first moving object can be calculated from the first distance, the second distance, the current speed of the vehicle, and the time difference between the two effective image frames. For example, if the current speed of the vehicle is V1, the first distance is L1, the second distance is L2, and the time difference between the two effective image frames is t, then the first speed is V = V1 + (L1 - L2)/t.
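As a minimal illustrative sketch of this calculation (the function and variable names below are not part of the disclosure; they are assumptions made for the example), the first speed follows directly from the two distances, the frame time difference, and the vehicle's own speed:

```python
def first_speed(vehicle_speed_mps: float,
                first_distance_m: float,
                second_distance_m: float,
                frame_dt_s: float) -> float:
    """Estimate the speed of a moving object behind the vehicle.

    The object closes (first_distance_m - second_distance_m) metres on the
    vehicle during frame_dt_s seconds, so its speed over the road is the
    vehicle's own speed plus that closing rate.
    """
    closing_rate_mps = (first_distance_m - second_distance_m) / frame_dt_s
    return vehicle_speed_mps + closing_rate_mps


# Example: vehicle at 20 m/s, gap shrinks from 15 m to 13 m over 0.5 s
# -> 20 + (15 - 13) / 0.5 = 24 m/s, faster than the vehicle, so the
# early warning condition of step 13 would be met.
```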
It is to be appreciated that in some embodiments, the first velocity of the moving object in the image data acquired in the first and/or second and/or third blind zones may be obtained according to the above-described procedure.
Step 13: and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected by the vehicle blind area to the vehicle or terminal equipment associated with the vehicle to perform safety early warning.
In some embodiments, the terminal device may be a vehicle-mounted device or a mobile terminal.
It is understood that if the first speed of the moving object in the image data is greater than the current speed of the vehicle, the moving object may be overtaking the vehicle, a collision is more likely to occur, and driving safety is reduced. Therefore, the current image data collected in the vehicle blind area is sent to the vehicle or to the terminal equipment associated with the vehicle so as to carry out safety early warning.
Further, in some embodiments, the terminal device is a vehicle-mounted device and is connected with the vehicle through a wireless or Bluetooth module built into the vehicle or through the CAN bus of the vehicle. The terminal device also provides a voice reminder function: a voice reminder can be played when the current image data collected in the vehicle blind area is received, so as to give a safety early warning. For example, the reminder content may be "A dangerous moving object is in the XX blind area, please take care to avoid it."
Further, the vehicle-mounted device comprises a display screen. When the current image data collected in the blind area of the vehicle is received, it is displayed on the display screen; on hearing the voice prompt, the driver can watch the current image data on the display screen, check the relative position of the vehicle and the moving object in real time, and take countermeasures in time.
In some embodiments, the terminal device is a mobile terminal and is connected with the vehicle through the wireless or Bluetooth module built into the vehicle. When the current image data collected in the vehicle blind area is received, a voice reminder is played, for example "A dangerous moving object is in the XX blind area, please take care to avoid it", and the current image data is played on the display screen. On hearing the voice prompt, the driver can watch the current image data on the display screen, check the relative position of the vehicle and the moving object in real time, and take countermeasures in time.
Furthermore, the current image data can be displayed, and the voice prompt played, simultaneously on the display screen of the vehicle-mounted device and on the display screen of the mobile terminal.
In this embodiment, image data collected in a vehicle blind area is acquired; a first speed of a moving object in the image data is acquired; and, if the first speed of the moving object in the image data is greater than the current speed of the vehicle, the current image data collected in the vehicle blind area is sent to the vehicle or to terminal equipment associated with the vehicle for safety early warning. In this way, on the one hand, the problem that existing blind-area early warning schemes cannot give accurate, multi-side coordinated warnings is solved; on the other hand, vehicle driving safety can be improved and the driver's experience enhanced.
Referring to fig. 3, fig. 3 is a schematic flowchart of a second embodiment of a safety warning method for vehicle driving according to the present application. The present embodiment is applied to an image processing apparatus provided in a vehicle. The method comprises the following steps:
step 31: first image data collected in the first blind area is acquired.
Step 32: whether a first moving object exists in the first image data is detected.
In some embodiments, referring to fig. 4, step 32 may be the following step:
step 321: a plurality of successive image frames in the first image data is acquired.
It is understood that the first image data is a video of a preset time length, and thus the first image data includes a plurality of consecutive image frames.
In some embodiments, the plurality of consecutive image frames in the first image data are color images, and the color images are converted into grayscale images to facilitate the subsequent operations.
Step 322: it is determined whether a target moving object exists in a reference region in a plurality of consecutive image frames.
In some embodiments, the reference region is identified by:
the image processing device comprises an image acquisition device. After the image acquisition device for acquiring the image data in the first blind area is installed behind the vehicle, debugging is carried out based on the installation position of the image acquisition device so as to obtain the distribution of the reference area of the image acquisition device when the image acquisition device acquires the image.
For example, the reference region is calculated from the feature points of a calibration checkerboard cloth; the feature points can be determined from the mounting position of the image acquisition device (e.g. the distance from the image acquisition device to the right side of the vehicle body), the distance from the calibration cloth to the right side of the vehicle body, the height of the vehicle body, and the width of the vehicle body. The calibration cloth is generated from a standard 4 x 3 pattern of black and white squares, each 20 cm x 20 cm, with strips of calibration cloth laid along the sides.
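Purely as an illustration of how such a calibration might be implemented (the patent does not disclose the exact procedure; the use of OpenCV, the corner ordering, and every name below are assumptions), the checkerboard corners can be used to fit a ground-to-image homography, from which a chosen ground rectangle behind the vehicle is projected into pixel coordinates to form the reference region:

```python
import cv2
import numpy as np

SQUARE_M = 0.20      # 20 cm x 20 cm squares on the calibration cloth
PATTERN = (3, 2)     # inner corners of a 4 x 3 array of squares

def reference_region_pixels(gray_image, ground_rect_m):
    """Project a ground-plane rectangle (metres, cloth coordinates) into
    image pixels using a homography fitted to the calibration cloth.
    Measured offsets of the cloth from the vehicle body would be added to
    ground_rect_m before calling this."""
    found, corners = cv2.findChessboardCorners(gray_image, PATTERN)
    if not found:
        raise RuntimeError("calibration cloth not detected")

    # Assumed ground-plane coordinates of the inner corners, row by row.
    obj = np.array([[c * SQUARE_M, r * SQUARE_M]
                    for r in range(PATTERN[1]) for c in range(PATTERN[0])],
                   dtype=np.float32)
    img = corners.reshape(-1, 2).astype(np.float32)

    H, _ = cv2.findHomography(obj, img)               # ground (m) -> image (px)
    rect = np.array(ground_rect_m, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(rect, H).reshape(-1, 2)
```

Because the acquisition angle is fixed, this projection would only need to be run once at installation time and the resulting reference region reused for every frame.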
It will be appreciated that the acquisition angle of the image acquisition means is fixed and therefore the size of each image frame is the same, so the reference area can be set as a common feature for each image frame.
In some embodiments, referring to fig. 5, step 322 may be the following step:
step 3221: a current image frame of a plurality of consecutive image frames is determined, and a plurality of first target regions are determined in the current image frame.
It is understood that step 3221 selects one image from the plurality of image frames as the current image frame on which the subsequent steps are performed. A first target region is a region of arbitrary size formed by pixel points in the current image frame.
Step 3222: acquiring an overlapping area of the first target area and the reference area as a first effective area.
In some embodiments, if the first target area overlaps the reference area, the overlapping area is used as the first effective area. It is understood that a moving object may exist in the first target region located in the reference region.
Step 3223: Acquiring a first pixel number, being the number of pixels in the first effective area whose gray value is greater than the preset gray value, and acquiring a second pixel number of the first effective area.
It can be understood that the color of a moving object rarely matches the color of the road. If no moving object exists in the reference area, the reference area consists mostly of road surface. Therefore, the gray value of the road pixels in the reference area can be used as the preset gray value.
In some embodiments, a preset gray value may also be set based on the gray value of the pixel point of the road.
Step 3224: and if the first ratio between the first pixel quantity and the second pixel quantity is larger than a first reference threshold value, determining that the target moving object exists in the reference area.
In some embodiments, the first reference threshold may be sixty percent, seventy percent, eighty percent, ninety percent, and so on. The larger the first reference threshold value is, the smaller the distance between the target moving object and the vehicle is indicated when the first ratio is larger than the first reference threshold value.
It is understood that if the first ratio between the first number of pixels and the second number of pixels is greater than the first reference threshold, it may be determined that an object exists on the road corresponding to the reference area. It is thus determined that the target moving object exists in the reference region.
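The following sketch condenses steps 3221 to 3224 under the assumption that the reference region and a first target region are available as boolean masks over a grayscale frame (all names and the default thresholds are illustrative, not taken from the patent):

```python
import numpy as np

def target_present_in_reference(gray_frame: np.ndarray,
                                target_mask: np.ndarray,
                                reference_mask: np.ndarray,
                                preset_gray_value: int = 90,
                                first_reference_threshold: float = 0.7) -> bool:
    """True when the overlap of a first target region with the reference
    region is dominated by pixels brighter than the road, suggesting that
    a target moving object is present."""
    effective = target_mask & reference_mask           # first effective region
    second_pixel_number = int(effective.sum())         # total pixels in it
    if second_pixel_number == 0:
        return False
    non_road = gray_frame > preset_gray_value          # pixels unlike the road gray value
    first_pixel_number = int((non_road & effective).sum())
    return first_pixel_number / second_pixel_number > first_reference_threshold
```

The default of 0.7 corresponds to the "seventy percent" example above; the road gray value would come from the calibration step.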
Further, in some embodiments, if the first ratio between the first pixel number and the second pixel number is greater than the first reference threshold value, first position information of the first target area in the current image frame is acquired; second position information of the first target area in the next image frame is acquired; a first direction of the first target area relative to the vehicle is determined based on the first position information and the second position information; and if the first direction is the same as the second direction of the vehicle, it is determined that the target moving object exists in the reference area. It can be understood that on some roads a first vehicle and a second vehicle may meet and then travel away from each other; in such a case the first ratio between the first pixel number and the second pixel number can still be greater than the first reference threshold even though there is no corresponding driving-safety problem. Therefore, when the first ratio between the first pixel number and the second pixel number is greater than the first reference threshold, the first position information of the target area in the current image frame and the second position information of the target area in the next image frame are further acquired; the first direction of the first target area relative to the vehicle is determined based on the first position information and the second position information; and if the first direction is the same as the second direction of the vehicle, it is determined that the target moving object is advancing in the same direction as the vehicle and that the target moving object exists in the reference area.
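A minimal sketch of this same-direction check (the patent only speaks of "position information"; representing positions as region centres and the heading as a unit vector in image coordinates is an assumption):

```python
def moves_with_vehicle(first_position, second_position, vehicle_direction,
                       min_shift_px: float = 1.0) -> bool:
    """first_position / second_position: (x, y) centre of the first target
    region in the current and next image frame; vehicle_direction: unit
    (dx, dy) of the vehicle's heading expressed in image coordinates."""
    dx = second_position[0] - first_position[0]
    dy = second_position[1] - first_position[1]
    if (dx * dx + dy * dy) ** 0.5 < min_shift_px:
        return False                        # no measurable movement between frames
    # Same direction when the displacement projects positively onto the heading.
    return dx * vehicle_direction[0] + dy * vehicle_direction[1] > 0
```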
Further, in some embodiments, if the first direction is the same as the second direction of the vehicle, a plurality of second target areas is determined in the current image frame, a second target area being an area formed by pixel points representing unnatural edges in the current image frame; overlapping areas of the plurality of second target areas and the reference area are acquired as second effective areas; a third pixel number of the second effective areas within the plurality of second target areas is acquired, and a fourth pixel number of the plurality of second target areas is acquired; and if a second ratio between the third pixel number and the fourth pixel number is greater than a second reference threshold value, it is determined that the target moving object exists in the reference area. It can be understood that a moving object produces edges against the road: pixel points belonging to such unnatural edges, or pixel points whose sharpness stands out from the smooth road surface, can be obtained by an edge detection technique and grouped to form the second target areas. If the moving object is an automobile, these include tire edges, bumper edges, chassis edges, and the like. Through edge detection, the presence of the target moving object in the reference area can therefore be confirmed more accurately.
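For the edge-based confirmation, a hedged sketch follows; Canny is used only as one example of "an edge detection technique" (the patent does not name a specific detector), the thresholds are placeholders, and reference_mask is a boolean mask as in the previous sketch:

```python
import cv2
import numpy as np

def edge_evidence_confirms(gray_frame: np.ndarray,
                           reference_mask: np.ndarray,
                           second_reference_threshold: float = 0.6) -> bool:
    """Treat edge pixels as the second target regions and check how much of
    that edge evidence falls inside the reference region."""
    edges = cv2.Canny(gray_frame, 80, 160) > 0        # candidate unnatural edges
    fourth_pixel_number = int(edges.sum())            # pixels of all second target regions
    if fourth_pixel_number == 0:
        return False
    third_pixel_number = int((edges & reference_mask).sum())   # second effective regions
    return third_pixel_number / fourth_pixel_number > second_reference_threshold
```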
In some embodiments, step 3224 further includes determining a next image frame of the plurality of consecutive image frames and determining a plurality of first target regions in the next image frame if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, and performing step 3222 again. In this way, a plurality of image frames can be processed, and the identification accuracy is improved.
In some embodiments, if it is determined that the target moving object exists in the reference region in the plurality of consecutive image frames, step 323 is performed.
Step 323: it is confirmed that the first moving object exists in the first image data.
In some embodiments, step 32 may be applied as follows:
the vehicle blind areas comprise a first blind area, a second blind area and a third blind area; wherein, second blind area and third blind area are located the left and right sides of vehicle respectively. The reference regions include a first reference region, a second reference region and a third reference region; the first reference area is located right behind the vehicle, and the second reference area and the third reference area are located on two sides of the first reference area respectively.
It can be understood that the first blind area, the second blind area and the third blind area correspond to a first reference area, a second reference area and a third reference area.
In some embodiments, taking the image data of the first blind area as an example, determining whether the target moving object exists in the reference region in the plurality of consecutive image frames may be: determining whether a first target moving object exists in the first reference region in the plurality of consecutive image frames, and if so, confirming that the first moving object exists in the first image data. This manner applies to the case where the moving object appears in the first reference region.
In some embodiments, still taking the image data of the first blind area as an example, the determination may be: determining whether a second target moving object exists in the second reference region in the plurality of consecutive image frames; if so, confirming that the first moving object exists in the first image data, and acquiring second image data of the second blind area. This manner applies to the case where the moving object appears in the second reference region.
In some embodiments, the determination may likewise be: determining whether a third target moving object exists in the third reference region in the plurality of consecutive image frames; if so, confirming that the first moving object exists in the first image data, and acquiring third image data of the third blind area. This manner applies to the case where the moving object appears in the third reference region.
In some embodiments, the three determinations may also be combined: determining whether a first target moving object exists in the first reference region, and if so, confirming that the first moving object exists in the first image data; determining whether a second target moving object exists in the second reference region, and if so, confirming that the first moving object exists in the first image data and acquiring second image data of the second blind area; and determining whether a third target moving object exists in the third reference region, and if so, confirming that the first moving object exists in the first image data and acquiring third image data of the third blind area. This manner applies to the case where the moving object appears in each of the first, second and third reference regions.
Further, in some embodiments, the first reference region includes a first reference sub-region, a second reference sub-region and a third reference sub-region, which correspond to successive distance bands of the region behind the vehicle.
It is determined whether the first target moving object exists in the first reference sub-region, the second reference sub-region and the third reference sub-region in the plurality of consecutive image frames. If the first target moving object exists in the third reference sub-region, a first prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal device associated with the vehicle for safety early warning; if the first target moving object exists in the second reference sub-region, a second prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal device associated with the vehicle for safety early warning; and if the first target moving object exists in the first reference sub-region, a third prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal device associated with the vehicle for safety early warning.
The early warning grade of the third prompt tone is higher than that of the second prompt tone, and the early warning grade of the second prompt tone is higher than that of the first prompt tone.
For example, the image acquisition device can acquire image data within 22 meters behind the vehicle, and then divides the 22 meters behind the vehicle into a first reference sub-region, a second reference sub-region and a third reference sub-region, wherein the first reference sub-region corresponds to a region within 7 meters behind the vehicle, the second reference sub-region corresponds to a region within 7-14 meters behind the vehicle, and the third reference sub-region corresponds to a region within 14-22 meters behind the vehicle. When the moving object enters the first blind area, the moving object firstly enters the third reference sub-area, and then the first prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal equipment associated with the vehicle, so that safety early warning is carried out. And if the user does not respond in time and the speed of the moving object is too high, the moving object enters a second reference sub-area, and a second prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal equipment associated with the vehicle so as to perform safety early warning. And if the user does not respond in time and the speed of the moving object is too high, the moving object enters the first reference sub-area, and the third prompt tone and the current image data collected in the first blind area are sent to the vehicle or the terminal equipment associated with the vehicle.
It is to be appreciated that, while the first prompt tone is playing, the user can react after viewing the current image data of the first blind area displayed by the vehicle or the terminal device associated with the vehicle; the same applies while the second prompt tone is playing. When the third prompt tone is played, the vehicle can be automatically controlled to turn or accelerate to avoid the moving object, so that a traffic accident is avoided.
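As a non-limiting illustration of this graded warning, the sketch below maps the rear distance of the moving object to a prompt tone using the example bands given above (0-7 m, 7-14 m, 14-22 m); the tone identifiers are placeholders.

```python
def select_prompt_tone(distance_behind_m):
    """Map the rear distance of the moving object to a graded prompt tone.
    The third prompt tone carries the highest early-warning grade."""
    if distance_behind_m <= 7:
        return "third_prompt_tone"    # first reference sub-region: nearest, highest grade
    if distance_behind_m <= 14:
        return "second_prompt_tone"   # second reference sub-region
    if distance_behind_m <= 22:
        return "first_prompt_tone"    # third reference sub-region: farthest, lowest grade
    return None                       # outside the first blind area's 22 m capture range
```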
In some embodiments, taking the image data of the first blind area as an example, the determining whether the target moving object exists in the reference area in the plurality of continuous image frames may be determining whether a first target moving object exists in the first reference area in the plurality of continuous image frames, and if yes, confirming that the first moving object exists in the first image data; and determining whether a second target moving object exists in a second reference region in the plurality of consecutive image frames; and if so, confirming that the first moving object exists in the first image data, and acquiring second image data of a second blind area. It is understood that this approach applies to the phenomenon that the first target moving object is present in both the first reference region and the second reference region.
In some embodiments, taking the image data of the first blind area as an example, the determining whether the target moving object exists in the reference area in the plurality of continuous image frames may be determining whether a first target moving object exists in the first reference area in the plurality of continuous image frames, and if yes, confirming that the first moving object exists in the first image data; and determining whether a third target moving object exists in a third reference region in the plurality of consecutive image frames; and if so, confirming that the first moving object exists in the first image data, and acquiring third image data of a third blind area. It is understood that this manner is applicable to a phenomenon in which the first target moving object exists in both the first reference region and the third reference region.
In some embodiments, taking the image data of the first blind area as an example, the determining whether the target moving object exists in the reference area in the plurality of consecutive image frames may be determining whether a second target moving object exists in a second reference area in the plurality of consecutive image frames; if yes, confirming that a first moving object exists in the first image data, and acquiring second image data of a second blind area; and determining whether a third target moving object exists in a third reference region in the plurality of consecutive image frames; and if so, confirming that the first moving object exists in the first image data, and acquiring third image data of a third blind area. It is understood that this manner is applicable to a phenomenon in which the first target moving object exists in both the second reference region and the third reference region.
Upon confirming that the first moving object exists in the first image data, step 33 is executed.
Step 33: a first distance between the first moving object in the first image data and the vehicle at a previous moment is acquired, and a second distance between the first moving object and the vehicle at a current moment is acquired.
Step 34: and calculating a first speed of the first moving object according to the first distance and the second distance.
Step 35: and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected by the vehicle blind area to the vehicle or terminal equipment associated with the vehicle to perform safety early warning.
It is understood that steps 33-35 are the same or similar to the above embodiments and are not described herein again.
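For illustration only, the following sketch covers steps 33-35, assuming the two distances are sampled at a known frame interval; treating the first speed as an absolute speed (vehicle speed plus the closing rate derived from the two distances) is one plausible reading of the specification, not the only one.

```python
def first_speed_of_moving_object(first_distance_m, second_distance_m,
                                 frame_interval_s, vehicle_speed_mps):
    """Steps 33-34: estimate the moving object's speed from its distance to the vehicle
    at the previous moment (first_distance_m) and at the current moment (second_distance_m)."""
    closing_rate = (first_distance_m - second_distance_m) / frame_interval_s
    return vehicle_speed_mps + closing_rate

def needs_safety_warning(object_speed_mps, vehicle_speed_mps):
    """Step 35: push the blind area's current image data for early warning when the
    moving object is faster than the vehicle."""
    return object_speed_mps > vehicle_speed_mps
```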
By the above method, the accuracy of judging moving objects in blind-area images is improved. It also addresses the inability of existing blind-area early warning schemes to provide accurate reminders and multi-party cooperation, so that the driving safety of the vehicle and the driver's experience can both be improved.
Referring to fig. 6, fig. 6 is a schematic flowchart of a third embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 61: when first steering data of the vehicle is acquired and the first steering data corresponds to the second blind area, a second speed of the second moving object in the second image data is acquired.
In connection with the above embodiments, it can be understood that, before step 61, it may already have been confirmed that the first moving object exists in the first image data captured in the first blind area and that the first moving object is located in the second reference region; during traveling, the first moving object may therefore move into the image capture area of the second blind area, so the second image data captured in the second blind area is acquired. When the first steering data of the vehicle is acquired and the first steering data corresponds to the second blind area, it is determined whether a second moving object exists in the second image data, and if so, a second speed of the second moving object is acquired.
Alternatively, the first steering data may be a turn signal operated manually by the driver of the vehicle, or may be the steering angle of the steering wheel.
Step 62: and if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind area to the vehicle or the terminal equipment associated with the vehicle.
The vehicle or the terminal device associated with the vehicle displays the received current image data collected in the second blind area and performs screen recording. Further, the environmental sound may be collected while the screen is being recorded, and the recorded data and the collected environmental sound may be uploaded to a server. It can be understood that if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the second moving object may overtake the vehicle, so performing a steering operation at this time is unsafe; the current image data collected in the second blind area is therefore sent to the display device.
In some embodiments, if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, it is determined whether the distance between the second moving object and the vehicle is less than a preset distance, and if so, the current image data collected in the second blind area is sent to the vehicle or a terminal device associated with the vehicle, so that the vehicle or the terminal device associated with the vehicle displays the current image data collected in the second blind area, and screen recording is performed.
It can be understood that if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the second moving object may overtake the vehicle, and if the distance between the vehicle and the moving object is less than the preset distance, performing a steering operation at this time is very likely to cause a traffic accident and increases the safety risk to the vehicle and its occupants. A safety early warning can therefore be issued to discourage the driver from steering, and the current image data collected in the second blind area is sent to the display device, so that the driver can respond in time and the vehicle can keep a safe distance from the moving object.
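A minimal sketch of this variant of steps 61-62 follows; the 10 m default for the preset distance is an assumption, not a value from this specification.

```python
def warn_before_steering_to_second_blind_area(second_speed, vehicle_speed,
                                              object_distance_m, preset_distance_m=10.0):
    """When the vehicle steers toward the second blind area, issue the warning (and push
    the second blind area's image data) only if the object is faster than the vehicle
    and closer than the preset distance."""
    if second_speed <= vehicle_speed:
        return False
    return object_distance_m < preset_distance_m
```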
And step 63: and if the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, sending the current image data collected in the second blind area and the third blind area to the vehicle or the terminal equipment associated with the vehicle.
Optionally, the vehicle or the terminal device associated with the vehicle displays the current image data collected in the second blind area and the third blind area in a two-way split screen and records each of the two displayed streams. Further, the environmental sound may be collected while the screen is being recorded, and the recorded data and the collected environmental sound may be uploaded to a server.
In some embodiments, if the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, the current image data collected in the first blind area, the second blind area and the third blind area are sent to the vehicle or the terminal device associated with the vehicle, so that the vehicle or the terminal device displays them in a three-way split screen and records each of the three displayed streams. Further, the environmental sound may be collected while the screen is being recorded, and the recorded data and the collected environmental sound may be uploaded to a server.
In some embodiments, if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the continuous time of the first turning data is obtained, and if the continuous time of the first turning data is greater than a preset time, the current image data collected in the second blind area and the first preset parameter are sent to the display device, so that the display device is configured according to the first preset parameter.
It can be understood that if the continuous time of the first steering data is greater than the preset time, it can be confirmed that the vehicle will be steered, and then the current image data collected in the second blind area and the first preset parameter are sent to the display device, so that the display device is configured according to the first preset parameter. If the first preset parameter comprises voice broadcasting, recording current image data, amplifying and playing the current image data on the display equipment, and uploading the recorded image data. When receiving the first preset parameter and the current image data, the display device responds to the first preset parameter, performs amplification display on the current image data in a corresponding proportion, records the current image data, and uploads the recorded data to the server. Through the mode, the display equipment can be correspondingly set through the preset parameters sent by the image processing device, so that the self-adaptive configuration of the display equipment is realized, and manual adjustment is not needed.
In some embodiments, if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the continuous time of the first steering data is obtained. If the continuous time of the first steering data is not greater than the preset time, a first difference between the second speed and the current speed is calculated; if the first difference is greater than a preset difference, the current image data collected in the second blind area and a second preset parameter are sent to the display device, so that the display device is configured according to the second preset parameter, the second preset parameter being the same as the first preset parameter. If the first difference is not greater than the preset difference, the current image data collected in the second blind area and a third preset parameter are sent to the display device, so that the display device is configured according to the third preset parameter. The third preset parameter may include, for example, recording the current image data, playing the current image data enlarged on the display device, and uploading the recorded image data: on receiving the third preset parameter and the current image data, the display device responds to the third preset parameter, displays the current image data enlarged in the corresponding proportion, records it, and uploads the recorded data to the server. In this way, the display device can be configured through the preset parameters sent by the image processing device, achieving adaptive configuration of the display device without manual adjustment.
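The selection among the three preset parameters described in the two embodiments above can be summarised as follows; the numeric defaults and return labels are assumptions made only for illustration.

```python
def choose_preset_parameter(steering_duration_s, second_speed, vehicle_speed,
                            preset_time_s=1.0, preset_speed_diff=2.0):
    """Pick the preset parameter sent with the second blind area's image data."""
    if second_speed <= vehicle_speed:
        return None                        # handled by the split-screen branch instead
    if steering_duration_s > preset_time_s:
        return "first_preset_parameter"    # steering confirmed: broadcast, record, enlarge, upload
    if (second_speed - vehicle_speed) > preset_speed_diff:
        return "second_preset_parameter"   # same behaviour as the first preset parameter
    return "third_preset_parameter"        # record, enlarge, upload
```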
Referring to fig. 7, fig. 7 is a schematic flowchart of a fourth embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 71: and when the second steering data of the vehicle is acquired and the second steering data corresponds to a third blind area, acquiring a third speed of a third moving object in the third image data.
In connection with the above embodiments, it can be understood that, before step 71, it may already have been confirmed that the first moving object exists in the first image data captured in the first blind area and that the first moving object is located in the third reference region; during traveling, the first moving object may therefore move into the image capture area of the third blind area, so the third image data captured in the third blind area is acquired. When the second steering data of the vehicle is acquired and the second steering data corresponds to the third blind area, it is determined whether a third moving object exists in the third image data, and if so, a third speed of the third moving object is acquired.
Step 72: and if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind area to the vehicle or the terminal equipment associated with the vehicle.
It can be understood that if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, the third moving object may overtake the vehicle, so performing a steering operation at this time is unsafe; the current image data collected in the third blind area is therefore sent to the display device.
In some embodiments, if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, it is determined whether the distance between the third moving object and the vehicle is less than a preset distance, and if so, the current image data collected in the third blind area is sent to the vehicle or a terminal device associated with the vehicle, so that the vehicle or the terminal device associated with the vehicle displays the current image data collected in the third blind area, and a screen recording is performed.
It can be understood that if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, the third moving object may overtake the vehicle, and if the distance between the vehicle and the moving object is less than the preset distance, performing a steering operation at this time is very likely to cause a traffic accident and increases the safety risk to the vehicle and its occupants. A safety early warning can therefore be issued to discourage the driver from steering, and the current image data collected in the third blind area is sent to the display device, so that the driver can respond in time and the vehicle can keep a safe distance from the moving object.
In some embodiments, the vehicle blind areas include a first blind area, a second blind area and a third blind area; the first blind area is located at the rear side of the vehicle, and the second blind area and the third blind area are located on the left and right sides of the vehicle, respectively. When the image processing device obtains a door opening signal of the vehicle, it confirms whether a fourth moving object exists in the second blind area or the third blind area corresponding to that door; if so, a fourth prompt tone and the current image data collected in the corresponding second or third blind area are sent to the vehicle or the terminal device associated with the vehicle, so that the vehicle or the terminal device displays the current image data on the corresponding display screen and controls its speaker to play the fourth prompt tone, giving an early warning to the people in the vehicle. In this way, door-opening situations are warned about during vehicle operation, improving the safety of the vehicle, the people in it and the moving objects, and reducing traffic accidents.
In some embodiments, when the current gear of the vehicle is a preset gear, the image processing device determines whether a fifth moving object exists in the second blind area and/or the third blind area, and if so, sends a fifth prompt tone and the current image data collected in the corresponding second blind area and/or third blind area to the vehicle or the terminal device associated with the vehicle, so that the vehicle or the terminal device displays the current image data on the corresponding display screen and controls its speaker to play the fifth prompt tone, giving an early warning to the people in the vehicle. The preset gear may be, for example, the P gear, which typically means the vehicle is parked at the roadside and the occupants are about to get out; if a moving object is passing at that moment, a potential safety hazard arises. In this way, the moving objects in the corresponding blind areas are confirmed in real time, improving the safety of the vehicle, the people in it and the moving objects, and reducing traffic accidents.
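As a non-limiting sketch of the door-opening and parking-gear checks described in the two paragraphs above: the `moving_object_in` callback stands in for the blind-area detection described earlier, the left-door-to-second-blind-area mapping follows the example given later for the vehicle-mounted device, and all names are assumptions.

```python
def blind_area_for_door(door_side):
    """Left door corresponds to the second blind area, right door to the third."""
    return "second_blind_area" if door_side == "left" else "third_blind_area"

def door_open_warning(door_side, moving_object_in):
    """On a door-opening signal, return the fourth prompt tone together with the
    corresponding blind area when a moving object is present there, else None."""
    area = blind_area_for_door(door_side)
    return ("fourth_prompt_tone", area) if moving_object_in(area) else None

def parking_gear_warning(current_gear, moving_object_in, preset_gear="P"):
    """In the preset gear (e.g. P), check both side blind areas and return a fifth
    prompt tone entry for every area that contains a moving object."""
    if current_gear != preset_gear:
        return []
    return [("fifth_prompt_tone", area)
            for area in ("second_blind_area", "third_blind_area")
            if moving_object_in(area)]
```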
Referring to fig. 8, fig. 8 is a schematic flowchart of a fifth embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 81: first image data collected in the first blind area is acquired.
In this embodiment, the vehicle blind areas further include a first blind area, which is arranged at the rear side of the vehicle.
Step 82: a first velocity of a first moving object in first image data is acquired.
Step 83: and if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, sending the current image data collected in the first blind area to the display equipment.
In some embodiments, if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, then obtaining current steering data of the vehicle; and if the current steering data of the vehicle is not acquired, sending the current image data acquired in the first blind area to the display equipment.
It can be understood that if the current steering data of the vehicle is not acquired, it can be confirmed that the vehicle is traveling in a straight line and is not about to steer; the current image data collected in the first blind area is sent to the display device to remind the driver to confirm whether steering is needed to avoid the first moving object, thereby improving driving safety.
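A minimal sketch of this variant of steps 82-83, assuming "no steering data" is represented as None:

```python
def rear_blind_area_push(first_speed, vehicle_speed, current_steering_data):
    """When a faster object is detected in the first (rear) blind area and no current
    steering data is acquired, the vehicle is taken to be driving straight and the rear
    image data is pushed to the display device."""
    if first_speed <= vehicle_speed:
        return False
    return current_steering_data is None
```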
In one application scenario, while the vehicle is driving, the image processing device acquires the image data of the first blind area, the second blind area and the third blind area in real time and sends them to the display device; the display device divides its display screen into three display areas in the manner shown in fig. 10 and displays the image data of the first, second and third blind areas.

When the image processing device acquires the first steering data of the vehicle, if the first steering data corresponds to the second blind area, the second image data collected in the second blind area is acquired, and the second speed of the second moving object in the second image data is acquired. If the second speed of the second moving object is greater than the current speed of the vehicle, the current image data collected in the second blind area is sent to the display device; on receiving it, the display device switches out of the display mode of fig. 10, displays only the current image data of the second blind area received this time, and gives a voice prompt. During this display, the image processing device continues to obtain the real-time speed of the second moving object in the current image data of the second blind area; when that speed becomes smaller than the current speed of the vehicle, or the second moving object disappears from the second blind area (for example, it has overtaken the vehicle, or its speed is much lower and it has left the collecting range of the second blind area), the display device exits the current display mode and returns to the display mode shown in fig. 10.

If the first steering data corresponds to the third blind area, the third image data collected in the third blind area is acquired, and the third speed of the third moving object in the third image data is acquired. If the third speed is greater than the current speed of the vehicle, the current image data collected in the third blind area is sent to the display device; on receiving it, the display device switches out of the display mode of fig. 10, displays only the current image data of the third blind area received this time, and gives a voice prompt. During this display, the image processing device continues to obtain the real-time speed of the third moving object in the current image data of the third blind area; when that speed becomes smaller than the current speed of the vehicle, or the third moving object disappears from the third blind area (for example, it has overtaken the vehicle, or its speed is much lower and it has left the collecting range of the third blind area), the display device exits the current display mode and returns to the display mode shown in fig. 10.

If a first moving object is detected in the first image data of the first blind area and the first speed of the first moving object is greater than the current speed of the vehicle, the second steering data of the vehicle is acquired; if no second steering data of the vehicle is acquired, the current image data collected in the first blind area is sent to the display device.
When the display device receives the current image data of the first blind area, it switches out of the display mode of fig. 10, displays only the current image data of the first blind area received this time, and gives a prompt tone or voice prompt to alert the people in the vehicle that a faster object is approaching from behind. In some embodiments, the screen of the display device is recorded while the prompt tone or voice prompt is given, and the recorded image data is uploaded to a server or sent to a mobile terminal.
In some embodiments, if the first moving object of the first blind area moves to the second blind area or the third blind area, a fourth speed of the first moving object in the second blind area or the third blind area is obtained; and if the fourth speed of the first moving object in the second blind area or the third blind area is higher than the current speed of the vehicle, sending the current image data collected by the second blind area or the third blind area to the display equipment, so that the display equipment displays the received current image according to the mode and records the current image.
It can be understood that, from the movement of the first moving object in the first blind area, it may be determined whether the first moving object has turned away; if it has, and no moving object remains in the first blind area, an instruction may be sent to the display device to switch it from the display mode of fig. 10 to the display mode of fig. 9.
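The display switching used in this scenario can be sketched roughly as follows; the mode names are placeholders and the fig. 9 / fig. 10 distinction of the specification is not encoded here.

```python
def choose_display_mode(alerted_blind_area, object_still_faster):
    """While a blind area's moving object remains faster than the vehicle, that area is
    shown alone; otherwise the display returns to the three-way split view."""
    if alerted_blind_area is not None and object_still_faster:
        return ("single_view", alerted_blind_area)
    return ("three_way_split",
            ("first_blind_area", "second_blind_area", "third_blind_area"))
```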
Referring to fig. 11, fig. 11 is a schematic flowchart of a sixth embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 111: and acquiring fourth image data acquired by a front camera of the vehicle.
It can be understood that this embodiment applies to the case where the image processing device cannot acquire steering data from the CAN bus while the vehicle is steering, or is used, when steering data has been acquired, to further confirm it.
Step 112: a first angle between the lane line and the vehicle in the fourth image data is acquired.
In some embodiments, step 112 may be: identifying a first lane line at the current time and a second lane line at a previous time in the fourth image data; calculating the angle of the included angle formed between the first lane line and the second lane line; and taking that angle as the first angle.
As shown in fig. 12, the first lane lines at the current time are B1 and B2, the second lane lines at the previous time are A1 and A2, and the included angle formed between B1 and A1 is α.
As shown in fig. 13, the first lane lines at the current time are B1 and B2, the second lane lines at the previous time are A1 and A2, and the included angle formed between B1 and A1 is β.
Step 113: and if the first angle is larger than a first preset angle, confirming that the first steering data corresponds to the second blind area.
As can be understood with reference to fig. 12 and 13, taking the second lane line at the previous time as the reference, when the first lane line at the current time is located on the right side of the second lane line, the angle of the included angle between them is positive; if the first angle is greater than the first preset angle, it is determined that the first steering data corresponds to the second blind area.
If the first angle is 10 degrees and the first preset angle is 5 degrees, the first angle is larger than the first preset angle, and it is determined that the first steering data corresponds to the second blind area.
Step 114: and if the first angle is smaller than a second preset angle, determining that the first steering data corresponds to a third blind area.
As can be understood with reference to fig. 12 and 13, taking the second lane line at the previous time as the reference, when the first lane line at the current time is located on the left side of the second lane line, the angle of the included angle between them is negative; if the first angle is smaller than the second preset angle, it is determined that the first steering data corresponds to the third blind area.
And if the first angle is-10 degrees and the second preset angle is-5 degrees, the first angle is smaller than the second preset angle, and the first steering data is confirmed to correspond to the third blind area.
It is understood that the second blind area is located on the left side to the left rear of the vehicle, and the third blind area is located on the right side to the right rear of the vehicle. After the steering data of the vehicle is confirmed, operations are performed, for the vehicle blind area corresponding to that steering data, according to the methods of the other embodiments.
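For illustration only, the following sketch covers steps 112-114, assuming each lane line is given as two image points ordered from near (bottom of the image) to far (top); the point parameterisation, the sign convention, and the 5-degree presets are assumptions, not values from this specification.

```python
import math

def line_heading_deg(line):
    """Heading of a lane line relative to the image's vertical axis, in degrees."""
    (x1, y1), (x2, y2) = line
    # Image y grows downward, so negate dy to measure against an upward vertical axis.
    return math.degrees(math.atan2(x2 - x1, -(y2 - y1)))

def first_angle_deg(second_lane_line_prev, first_lane_line_curr):
    """Signed first angle between the lane line at the previous moment and at the
    current moment; the sign depends on the assumed point ordering."""
    return line_heading_deg(first_lane_line_curr) - line_heading_deg(second_lane_line_prev)

def blind_area_from_first_angle(angle_deg, first_preset=5.0, second_preset=-5.0):
    """Steps 113-114: above the first preset angle, the first steering data corresponds
    to the second blind area; below the second preset angle, to the third blind area."""
    if angle_deg > first_preset:
        return "second_blind_area"
    if angle_deg < second_preset:
        return "third_blind_area"
    return None
```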
The embodiments described above are explained with reference to fig. 14. The vehicle C in fig. 14 uses the method of the above embodiments. Vehicle C travels on a road with three lanes: lane 1, lane 2 and lane 3. Vehicle C is currently in lane 2; there is a vehicle D in lane 1, a vehicle F behind vehicle C in lane 2, and a vehicle E in lane 3. Fig. 14 marks the visible areas of the left and right rear-view mirrors of vehicle C, the collecting area γ1 of the second blind area, the collecting area γ2 of the third blind area, and the collecting area of the first blind area. The second blind area and the third blind area can each collect the whole 180-degree range on the corresponding side of the vehicle, up to a maximum distance of 22 meters; the first blind area can collect a 110-degree wide-angle area within 22 meters behind the vehicle.

When vehicle C is in lane 2, the second blind area, the third blind area and the first blind area simultaneously acquire the image data of their corresponding areas and send them to the display device. If vehicle D appears in the visible area of the second blind area of vehicle C, vehicle D is identified to judge whether its speed is greater than the current speed of vehicle C; if so, a reminder is given. If steering data of vehicle C toward lane 1 is acquired at this moment and the speed of vehicle D is greater than the current speed of vehicle C, the display device switches to displaying the image data of the second blind area alone, a reminder is given, and the image data is stored and uploaded to a server. If vehicle E appears in the visible area of the third blind area of vehicle C, vehicle E is identified to judge whether its speed is greater than the current speed of vehicle C; if so, a reminder is given. If steering data of vehicle C toward lane 3 is acquired at this moment and the speed of vehicle E is greater than the current speed of vehicle C, the display device switches to displaying the image data of the third blind area alone, a reminder is given, and the image data is stored and uploaded to the server. If vehicle F appears in the visible area of the first blind area of vehicle C, vehicle F is identified to judge whether its speed is greater than the current speed of vehicle C; if so, the display device switches to displaying the image data of the first blind area alone, a reminder is given, and the image data is stored and uploaded to the server.
Further, if vehicle C is traveling in lane 1, the image acquisition of the second blind area may be stopped at this time; if vehicle C turns into lane 3 and travels there, the image acquisition of the third blind area may be stopped. The display device then displays only the image data of the other two blind areas.
Further, the image acquisition device of the first blind area includes three cameras, and the data collected by the three cameras are synthesized and sent to the display device for display.
Referring to fig. 15, fig. 15 is a schematic flow chart of a seventh embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 151: the vehicle-mounted device receives current image data collected in the vehicle blind area and sent by the image processing device.
The vehicle blind area includes a first blind area, a second blind area and a third blind area. The current image data collected in the vehicle blind area is sent by the image processing device when the image data collected in the vehicle blind area is acquired and the first speed of the moving object in the image data is confirmed to be larger than the current speed of the vehicle.
In some embodiments, the image processing device responds to the steering data in any one of the above embodiments, and when the speed of the moving object in the blind area is greater than the current speed of the vehicle, acquires the current image data of the corresponding blind area, and transmits the current image data to the vehicle-mounted device.
The vehicle-mounted device is connected with the image processing device via Bluetooth, a wireless connection, or the vehicle CAN bus.
Step 152: and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device.
In some embodiments, before step 152, the vehicle-mounted device displays image data corresponding to a plurality of blind areas according to a preset configuration, and switches the display screen when receiving the current image data collected in the second blind area, and plays the current image data collected in the second blind area separately, and performs a prompt.
Step 153: if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, the current image data collected in the first blind area and the second blind area are simultaneously displayed on a display screen of the vehicle-mounted device.
It is understood that if step 153 is executed, it is determined that there are no dangerous moving objects in the blind areas.
In some embodiments, if the current image data collected in the vehicle blind area is the current image data collected in the second blind area, the current image data is displayed on a display screen of the vehicle-mounted device. And when the current image data collected in the second blind area is received, switching the display picture, independently displaying the current image data collected in the second blind area, and reminding, such as voice reminding.
In some embodiments, if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, the current image data is displayed on a display screen of the vehicle-mounted device. And when the current image data collected in the third blind area is received, switching the display picture, independently playing the current image data collected in the third blind area, and reminding. In some embodiments, if the current image data collected in the vehicle blind areas is the current image data collected in the first blind area, the second blind area, and the third blind area, the current image data collected in the first blind area, the second blind area, and the third blind area are simultaneously displayed on the display screen of the on-vehicle device.
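The layout selection described in steps 152-153 and the variants above can be sketched as follows; the layout names are placeholders and the prompt handling is omitted.

```python
def arrange_on_vehicle_display(received_areas):
    """Choose the vehicle-mounted device's display layout from the set of blind areas
    whose current image data was received."""
    areas = tuple(received_areas)
    if len(areas) == 1:
        return ("single_view", areas)        # e.g. only the first blind area
    if len(areas) == 2:
        return ("two_way_split", areas)      # e.g. first and second blind areas together
    return ("three_way_split", areas)        # all three blind areas
```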
In some embodiments, the vehicle-mounted device is further connected with the vehicle and can acquire corresponding signals from the CAN bus of the vehicle; if a door opening signal is acquired, it can be confirmed that a vehicle door has been detected as open. The opening signal is sent to the image processing device, which confirms whether a sixth moving object exists in the second blind area or the third blind area corresponding to that door; if so, a sixth prompt tone and the current image data collected in the corresponding second or third blind area are sent to the vehicle-mounted device. On receiving the sixth prompt tone and the current image data, the vehicle-mounted device displays the current image data on its display screen and controls the speaker of the vehicle or of the vehicle-mounted device to play the sixth prompt tone, giving an early warning to the people in the vehicle. For example, if the door on the left side of the vehicle is opened, a first instruction is sent to the image processing device, which judges from the first instruction whether a moving object exists in the second blind area at the current moment; if so, it generates a prompt tone and sends the current image to the vehicle-mounted device, so that the vehicle-mounted device switches the display on its screen, plays the current image data alone, and controls the speaker to play the sixth prompt tone to warn the people in the vehicle. If the door on the right side of the vehicle is opened, a first instruction is sent to the image processing device, which judges from the first instruction whether a moving object exists in the third blind area at the current moment; if so, it generates a prompt tone and sends the current image to the vehicle-mounted device, so that the vehicle-mounted device switches the display mode on its screen, plays the current image data alone, and controls the speaker to play the sixth prompt tone to warn the people in the vehicle. In this way, door-opening situations are warned about during vehicle operation, improving the safety of the vehicle, the people in it and the moving objects, and reducing traffic accidents.
Further, in some embodiments, a seventh warning sound sent by the image processing device and current image data collected by the second blind area and/or the third blind area are received, wherein the seventh warning sound is generated by the image processing device when the current gear of the vehicle is a preset gear and whether a seventh moving object exists in the second blind area and/or the third blind area is confirmed; and displaying the current image data acquired by the second blind area and/or the third blind area on a display screen of the vehicle-mounted device and controlling the vehicle or a loudspeaker of the vehicle-mounted device to play a seventh prompt tone so as to give an early warning prompt to personnel in the vehicle. In some embodiments, the preset gear may be a P gear, or when the vehicle is currently parked at the roadside, the vehicle-inside person gets off the vehicle, and if a moving object runs at this time, a potential safety hazard may occur. By the mode, the moving objects of the corresponding blind areas are confirmed in real time, the safety of vehicles, people in the vehicles and the moving objects is improved, and traffic accidents are reduced.
In some embodiments, the vehicle-mounted device is connected with the mobile terminal, for example by Bluetooth or a wireless connection. The vehicle-mounted device receives instructions sent by the mobile terminal and configures itself accordingly, for example setting the reminder prompt tone or the display mode, such as a 2-way, 3-way or 4-way split screen. The 2-way split screen displays the image data of two blind areas; the 3-way split screen displays the image data of the three blind areas; the 4-way split screen displays four streams of image data, adding the image data collected in front by the vehicle's front camera to the image data of the three blind areas.
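A hypothetical configuration object for this split-screen setting might look as follows; the field names, defaults, and source ordering are assumptions, not part of this specification.

```python
from dataclasses import dataclass

@dataclass
class OnboardDisplayConfig:
    """Configuration pushed from the mobile terminal to the vehicle-mounted device:
    reminder prompt tone plus a 2-, 3- or 4-way split screen."""
    prompt_tone: str = "default_tone"
    split_mode: int = 3  # 2, 3 or 4

    def video_sources(self):
        # Side blind areas first, then the rear blind area, then the front camera.
        base = ["second_blind_area", "third_blind_area",
                "first_blind_area", "front_camera"]
        return base[:max(2, min(self.split_mode, 4))]
```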
In some embodiments, the vehicle-mounted device receives current image data and preset parameters collected in a vehicle blind area, which are sent by the image processing device; and if the current image data acquired in the vehicle blind area is the current image data acquired in the first blind area, displaying the current image data acquired in the first blind area on a display screen based on preset parameters, recording the current image data, and storing the recorded current image data to a server.
It can be understood that, if the current image data acquired in the vehicle blind area is the current image data acquired in the second blind area or the third blind area, the current image data acquired in the second blind area or the third blind area is displayed on the display screen based on the preset parameters, and the current image data is recorded and stored in the server.
Further, the vehicle-mounted device includes a sound pickup for collecting the environmental sound of the vehicle. The current image data collected in the first blind area is displayed on the display screen and recorded, the current environmental sound of the vehicle is collected through the sound pickup, and the recorded current image data and the current environmental sound are stored to the server. This achieves omnidirectional sound collection, so that the image data carries the environmental sound and the recorded scene can be restored to the maximum extent when the image data is played back. For example, the vehicle-mounted device may include a plurality of sound pickups arranged at various positions of the vehicle to collect the environmental sound; alternatively, the sound pickup may be an omnidirectional sound pickup, in which case a single omnidirectional sound pickup can perform the omnidirectional sound collection.
It can be understood that the current image data collected in the second or third blind area is displayed on the display screen and recorded, and the current environmental sound of the vehicle is collected through the sound pickup and stored to the server.
Different from the situation in the prior art, in the safety warning method for vehicle driving provided in this embodiment, the vehicle-mounted device receives the current image data collected in the vehicle blind areas sent by the image processing device, where the vehicle blind areas include a first blind area, a second blind area, and a third blind area. The method comprises the steps that current image data collected in a vehicle blind area are sent by an image processing device when the image data collected in the vehicle blind area are obtained and the first speed of a moving object in the image data is confirmed to be larger than the current speed of a vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, the current image data collected in the first blind area and the second blind area are simultaneously displayed on a display screen of the vehicle-mounted device. By means of the method, the problems that accurate reminding and multi-side cooperation cannot be achieved in an existing blind area early warning scheme are solved, active configuration of each rearview mirror blind area image system of the vehicle is achieved on the basis of human-computer interaction, vehicle driving safety can be improved, and user experience is improved.
Referring to fig. 16, fig. 16 is a schematic flowchart of an eighth embodiment of a safety warning method for vehicle driving according to the present application. The method comprises the following steps:
step 161: and the mobile terminal receives the current image data which is sent by the image processing device and collected in the blind area of the vehicle.
The vehicle blind area includes a first blind area, a second blind area and a third blind area. The current image data collected in the vehicle blind area is sent by the image processing device when the image data collected in the vehicle blind area is acquired and the first speed of the moving object in the image data is confirmed to be larger than the current speed of the vehicle.
In some embodiments, the image processing device responds to the steering data in any one of the above embodiments, and when the speed of the moving object in the blind area is greater than the current speed of the vehicle, acquires the current image data of the corresponding blind area, and sends the current image data to the mobile terminal.
In some embodiments, the vehicle further comprises an on-board device connected to the image processing device, and the image processing device acquires current image data collected in the vehicle blind area and sends the current image data to the on-board device and the mobile terminal, so that a display screen on the on-board device and a display screen on the mobile terminal are simultaneously displayed in real time.
Wherein, the mobile terminal is connected with the image processing device through Bluetooth or wireless connection.
Step 162: and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal.
Before step 162, the mobile terminal displays image data corresponding to a plurality of blind areas according to a preset configuration, and switches a display picture when receiving current image data collected in a second blind area, and plays the current image data collected in the second blind area separately, and performs a prompt.
Step 163: and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the mobile terminal.
It is understood that if step 163 is executed, it is confirmed that there are no dangerous moving objects in the blind areas.
In some embodiments, if the current image data collected in the vehicle blind area is the current image data collected in the second blind area, the current image data is displayed on a display screen of the mobile terminal. And when the current image data collected in the second blind area is received, switching the display picture, independently playing the current image data collected in the second blind area, and reminding.
In some embodiments, if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, the current image data is displayed on a display screen of the mobile terminal. And when the current image data collected in the third blind area is received, switching the display picture, independently playing the current image data collected in the third blind area, and reminding.
In some embodiments, if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, the second blind area, and the third blind area, the current image data collected in the first blind area, the second blind area, and the third blind area are simultaneously displayed on the display screen of the mobile terminal.
In some embodiments, the mobile terminal receives an eighth prompt tone sent by the image processing device and the current image data collected in the second blind area and/or the third blind area, displays that current image data on its display screen, and controls the speaker to play the eighth prompt tone, giving an early warning to the people in the vehicle. The eighth prompt tone is generated either when the vehicle-mounted device acquires a door opening signal of the vehicle and sends the opening signal to the image processing device, or when the current gear of the vehicle is the preset gear, and the image processing device confirms that an eighth moving object exists in the current image data collected in the second blind area or the third blind area. In this way, door-opening situations are warned about during vehicle operation, the moving objects in the corresponding blind areas are confirmed in real time, and the safety of the vehicle, the people in it and the moving objects is improved, reducing traffic accidents.
In some embodiments, the mobile terminal receives the current image data and preset parameters collected in the vehicle blind areas, sent by the image processing device. If the current image data collected in the vehicle blind area is the current image data collected in the first blind area, the current image data collected in the first blind area is displayed on the display screen of the mobile terminal based on the preset parameters, the screen is recorded, and the recorded current image data is stored to the server. Further, the vehicle-mounted device can be controlled to pick up the environmental sound and upload it to the server synchronously: the current image data collected in the first blind area is displayed on the display screen of the mobile terminal, the display screen is recorded, the current environmental sound of the vehicle is collected through the sound pickup of the vehicle-mounted device, and the recorded current image data and the current environmental sound are stored to the server.
In some embodiments, in response to the first touch instruction, the first setting parameter is sent to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device performs setting based on the first setting parameter. The following description is made with reference to fig. 17: as shown in fig. 17, the mobile terminal may be set to perform selection, such as setting of multiple functions, e.g., whether recording is on, adjusting the volume, selecting warning sounds, warning prompt level, whether the front camera is on, and lane deviation reminding. The mobile terminal responds to the first touch instruction and sends the first setting parameters to the vehicle-mounted device and/or the image processing device so that the vehicle-mounted device and/or the image processing device can carry out setting based on the first setting parameters.
The mobile terminal and the vehicle-mounted device can be connected through a data line. The user sets parameters on the mobile terminal, and the vehicle-mounted device can synchronously respond to the parameters to complete corresponding setting.
In some embodiments, in response to the second touch instruction, historical image data is obtained from local storage or from the server and displayed. This is described with reference to fig. 18 and 19: fig. 18 shows the image data recorded in different states, divided into local videos and cloud videos. Clicking on a local video brings up a plurality of video files, as shown in fig. 19. The user can delete these video files, move them, or delete the source file after moving them. In this way, the historical image data can be played back and organised on the mobile terminal, providing material for subsequent system upgrades, and the scene at the time of recording can be restored to the maximum extent during playback.
In some embodiments, the mobile terminal receives a ninth prompt tone sent by the image processing device and the current image data collected in the second blind area or the third blind area, plays that current image data on its display screen, and controls the speaker to play the ninth prompt tone, giving an early warning to the people in the vehicle. The ninth prompt tone is generated when the vehicle-mounted device detects that a door of the vehicle is opened and the image processing device confirms that the fifth moving object exists in the second blind area or the third blind area corresponding to that door at the current time. The current image data is recorded and uploaded to the server; such image data can serve as evidence of the situation at the time of a traffic accident and help determine responsibility.
Different from the prior art, in the safety early warning method for vehicle driving provided in this embodiment, the mobile terminal receives the current image data collected in the vehicle blind areas and sent by the image processing device, where the vehicle blind areas include a first blind area, a second blind area and a third blind area. The current image data collected in the vehicle blind area are sent by the image processing device when it obtains the image data collected in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle. If the current image data collected in the vehicle blind area are the current image data collected in the first blind area, they are displayed on the display screen of the mobile terminal; if they are the current image data collected in the first blind area and the second blind area, the current image data collected in the first blind area and the second blind area are displayed simultaneously on the display screen of the mobile terminal. In this way, the problems that existing blind-area early warning schemes cannot give accurate reminders or coordinate multiple terminals are addressed, active configuration of the image system for each rearview-mirror blind area of the vehicle is achieved through human-computer interaction, vehicle driving safety is improved, and user experience is improved.
Referring to fig. 20, fig. 20 is a schematic structural diagram of an embodiment of an image processing apparatus provided in the present application. The image processing apparatus 200 includes a processor 201 and a memory 202 connected to the processor 201; wherein, the memory 202 is used for storing program data, and the processor 201 is used for executing the program data, so as to realize the following method:
the image processing device acquires image data collected in a vehicle blind area; acquiring a first speed of a moving object in image data; and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected in the vehicle blind area to the vehicle or terminal equipment associated with the vehicle to perform safety early warning.
It will be appreciated that the processor 201 is configured to execute the program data and is also configured to implement the method performed by the image processing apparatus in any of the embodiments described above.
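For illustration only, the following sketch approximates the processing loop described for the image processing device 200: estimate the distance to a moving object frame by frame, derive its first speed, and forward the current frame when that speed exceeds the vehicle speed. The frame rate, the distance estimator and the sender are assumptions, not elements defined by this application.

```python
from typing import Callable, Iterable, Optional


def process_blind_zone(frames: Iterable[bytes], vehicle_speed: float,
                       estimate_distance: Callable[[bytes], Optional[float]],
                       send_warning: Callable[[bytes], None],
                       frame_interval_s: float = 1 / 30) -> None:
    """Forward a blind-zone frame whenever the tracked object outpaces the vehicle."""
    prev_distance = None
    for frame in frames:
        distance = estimate_distance(frame)               # metres to the moving object
        if distance is not None and prev_distance is not None:
            closing_speed = (prev_distance - distance) / frame_interval_s
            first_speed = vehicle_speed + closing_speed   # object speed over ground
            if first_speed > vehicle_speed:               # i.e. the object is closing in
                send_warning(frame)
        prev_distance = distance
```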
Referring to fig. 21, fig. 21 is a schematic structural diagram of an embodiment of a vehicle-mounted device provided in the present application. The in-vehicle device 210 includes a processor 211 and a memory 212 connected to the processor 211; wherein, the memory 212 is used for storing program data, and the processor 211 is used for executing the program data, so as to realize the following method:
the vehicle-mounted device receives current image data which are sent by the image processing device and collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The method comprises the steps that current image data collected in a vehicle blind area are sent by an image processing device when the image data collected in the vehicle blind area are obtained and the first speed of a moving object in the image data is confirmed to be larger than the current speed of a vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, the current image data collected in the first blind area and the second blind area are simultaneously displayed on a display screen of the vehicle-mounted device.
It will be appreciated that the processor 211 is configured to execute the program data and is also configured to implement the method performed by the in-vehicle device in any of the embodiments described above.
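The display dispatch on the in-vehicle device 210 can be sketched as follows; the screen object with show_full, show_split and show_triple methods is an assumed interface used only for illustration.

```python
from typing import Dict, Sequence


def dispatch_display(current: Dict[str, bytes], screen) -> None:
    """current maps blind-area names ('first', 'second', 'third') to the latest frame."""
    zones: Sequence[str] = [z for z in ("first", "second", "third") if z in current]
    if not zones:
        return
    if len(zones) == 1:                                   # single zone: full screen
        screen.show_full(current[zones[0]])
    elif len(zones) == 2:                                 # two zones: split screen
        screen.show_split(current[zones[0]], current[zones[1]])
    else:                                                 # three zones: triple split
        screen.show_triple(*(current[z] for z in zones))
```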
Referring to fig. 22, fig. 22 is a schematic structural diagram of an embodiment of a mobile terminal provided in the present application. The mobile terminal 220 includes a processor 221 and a memory 222 coupled to the processor 221; wherein the memory 222 is used for storing program data, and the processor 221 is used for executing the program data, so as to realize the following method:
the mobile terminal receives current image data which are sent by the image processing device and collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The method comprises the steps that current image data collected in a vehicle blind area are sent by an image processing device when the image data collected in the vehicle blind area are obtained and the first speed of a moving object in the image data is confirmed to be larger than the current speed of a vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal; and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the mobile terminal.
It will be appreciated that the processor 221 is configured to execute the program data and is also configured to implement the method performed by the mobile terminal in any of the embodiments described above.
Referring to fig. 23, fig. 23 is a schematic structural diagram of an embodiment of a readable storage medium provided in the present application. The readable storage medium 230 is used for storing program data 231; the program data 231, when executed by a processor, are used to implement the following method:
the image processing device acquires image data collected in a vehicle blind area; acquiring a first speed of a moving object in the image data; if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected in the vehicle blind area to the vehicle or terminal equipment associated with the vehicle for safety early warning; or,
the vehicle-mounted device receives current image data which are sent by the image processing device and collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The method comprises the steps that current image data collected in a vehicle blind area are sent by an image processing device when the image data collected in the vehicle blind area are obtained and the first speed of a moving object in the image data is confirmed to be larger than the current speed of a vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, the current image data collected in the first blind area and the second blind area are simultaneously displayed on a display screen of the vehicle-mounted device; or,
the mobile terminal receives current image data which are sent by the image processing device and collected in vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The method comprises the steps that current image data collected in a vehicle blind area are sent by an image processing device when the image data collected in the vehicle blind area are obtained and the first speed of a moving object in the image data is confirmed to be larger than the current speed of a vehicle; if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal; and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the mobile terminal.
It will be appreciated that the program data 231, when executed by a processor, is also operative to implement any of the embodiment methods described above.
Referring to fig. 24, fig. 24 is a schematic structural diagram of an embodiment of a safety warning system for vehicle driving according to the present disclosure. The safety precaution system 240 includes an image processing device 241, an in-vehicle device 242, and a mobile terminal 243;
the image processing device 241, the in-vehicle device 242, and the mobile terminal 243 are the same as those in any of the above embodiments.
It is understood that the image processing device 241, the vehicle-mounted device 242 and the mobile terminal 243 can be used for implementing the corresponding method of any of the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units in the above embodiments may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely examples and are not intended to limit the scope of the present application; all equivalent structure or flow transformations made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of patent protection of the present application.
The embodiment of the application also discloses:
A1. a safety precaution method of vehicle driving, the method comprising:
the image processing device acquires image data collected in a vehicle blind area;
acquiring a first speed of a moving object in the image data;
and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected in the vehicle blind area to the vehicle or the terminal equipment associated with the vehicle to perform safety early warning.
A2. According to the method as set forth in A1,
the vehicle blind zone comprises a first blind zone; wherein the first blind area is located on a rear side of the vehicle;
the acquiring of the image data collected in the vehicle blind area comprises the following steps:
acquiring first image data collected in the first blind area;
the acquiring a first velocity of a moving object in the image data comprises:
acquiring a first distance between a first moving object in the first image data and the vehicle at a previous moment, and acquiring a second distance between the first moving object and the vehicle at a current moment;
and calculating a first speed of the first moving object according to the first distance and the second distance.
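As a worked illustration of item A2, the first speed can be approximated from the two distances and the sampling interval; the interval dt_s below is an assumed parameter, not one specified by the disclosure.

```python
def first_speed(first_distance_m: float, second_distance_m: float,
                vehicle_speed_mps: float, dt_s: float = 0.1) -> float:
    """Approximate ground speed of the first moving object behind the vehicle.

    If the gap shrinks between the previous and the current moment, the object is
    moving faster than the vehicle by the closing rate.
    """
    closing_rate = (first_distance_m - second_distance_m) / dt_s
    return vehicle_speed_mps + closing_rate


# Example: the gap shrinks from 12 m to 11 m over 0.1 s while the vehicle does 10 m/s,
# so the object travels at roughly 20 m/s, exceeds the vehicle speed, and would
# trigger the warning of item A1.
```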
A3. According to the method as set forth in A2,
the acquiring a first distance between a first moving object in the first image data and the vehicle at a previous time and acquiring a second distance between the first moving object and the vehicle at a current time comprises:
detecting whether a first moving object exists in the first image data;
if yes, acquiring a first distance between a first moving object in the first image data and the vehicle at a previous moment, and acquiring a second distance between the first moving object and the vehicle at a current moment.
A4. According to the method as set forth in A3,
the detecting whether a first moving object exists in the first image data includes:
acquiring a plurality of consecutive image frames in the first image data;
determining whether a target moving object exists in a reference region in a plurality of the consecutive image frames;
and if so, confirming that a first moving object exists in the first image data.
A5. According to the method as set forth in A4,
the determining whether a target moving object exists in a reference region in a plurality of the consecutive image frames includes:
determining a current image frame of the plurality of consecutive image frames and determining a plurality of first target regions in the current image frame;
acquiring an overlapping area of the first target area and the reference area as a first effective area;
acquiring, as a first pixel number, the number of pixels in the first effective area whose gray value is greater than a preset gray value, and acquiring, as a second pixel number, the number of pixels in the first effective area;
determining that a target moving object exists in the reference region if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold.
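A minimal NumPy sketch of the ratio test in item A5, assuming regions are given as (row_start, row_end, col_start, col_end) rectangles; the preset gray value and the first reference threshold are illustrative values only.

```python
import numpy as np


def target_in_reference(gray_frame: np.ndarray,
                        target_region: tuple, reference_region: tuple,
                        preset_gray: int = 128, first_threshold: float = 0.3) -> bool:
    """Return True when the overlap of target and reference regions is 'bright' enough."""
    top = max(target_region[0], reference_region[0])
    bottom = min(target_region[1], reference_region[1])
    left = max(target_region[2], reference_region[2])
    right = min(target_region[3], reference_region[3])
    if top >= bottom or left >= right:
        return False                                      # no overlap, no effective area
    effective = gray_frame[top:bottom, left:right]        # first effective area
    first_pixel_number = int(np.count_nonzero(effective > preset_gray))
    second_pixel_number = effective.size
    return first_pixel_number / second_pixel_number > first_threshold
```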
A6. According to the method as set forth in A5,
determining that a target moving object exists in the reference region if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, including:
if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, determining a next image frame of the plurality of consecutive image frames, determining a plurality of first target regions in the next image frame, and performing the step of acquiring an overlapping region of the first target regions and the reference region again as a first effective region.
A7. According to the method as set forth in A5,
determining that a target moving object exists in the reference region if a first ratio between the first number of pixels and the second number of pixels is greater than a first reference threshold, including:
if a first ratio between the first pixel quantity and the second pixel quantity is larger than a first reference threshold value, acquiring first position information of the first target area in the current image frame;
acquiring second position information of the first target area in the next image frame;
determining a first direction of the first target area relative to the vehicle based on the first location information and the second location information;
and if the first direction is the same as the second direction of the vehicle, determining that a target moving object exists in the reference area.
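Item A7's direction check can be illustrated as below, using horizontal motion in the image plane as a stand-in for the first direction; the coordinate convention and the left/right encoding are assumptions for illustration.

```python
def same_direction(first_position: tuple, second_position: tuple,
                   vehicle_moving_left: bool) -> bool:
    """Compare the target's image-plane motion between two frames with the vehicle's."""
    dx = second_position[0] - first_position[0]   # positive: target drifts rightwards
    target_moving_left = dx < 0
    return target_moving_left == vehicle_moving_left
```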
A8. According to the method as set forth in A7,
determining that a target moving object exists in the reference region if the first direction is the same as the second direction of the vehicle, including:
determining a plurality of second target areas in the current image frame if the first direction is the same as a second direction of the vehicle; the second target area is an area formed by pixel points representing unnatural edges in the current image frame;
acquiring overlapping areas of the plurality of second target areas and the reference area as second effective areas;
acquiring a third pixel number occupied by the second effective area in the plurality of second target areas, and acquiring a fourth pixel number of the plurality of second target areas;
determining that a target moving object exists in the reference region if a second ratio between the third number of pixels and the fourth number of pixels is greater than a second reference threshold.
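A sketch of the second ratio test in item A8; a simple gradient-magnitude threshold stands in for the detection of pixels representing unnatural edges, and the second reference threshold is illustrative. reference_mask is assumed to be a boolean array of the same shape as the frame, True inside the reference region.

```python
import numpy as np


def edge_ratio_confirms_target(gray_frame: np.ndarray, reference_mask: np.ndarray,
                               second_threshold: float = 0.4) -> bool:
    """Check what share of 'unnatural edge' pixels falls inside the reference region."""
    grad_rows, grad_cols = np.gradient(gray_frame.astype(float))
    edge_mask = np.hypot(grad_rows, grad_cols) > 40       # stand-in second target regions
    fourth_pixel_number = int(np.count_nonzero(edge_mask))
    if fourth_pixel_number == 0:
        return False
    third_pixel_number = int(np.count_nonzero(edge_mask & reference_mask))
    return third_pixel_number / fourth_pixel_number > second_threshold
```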
A9. According to the method as set forth in A4,
the vehicle blind areas further comprise a second blind area and a third blind area; the second blind area and the third blind area are respectively positioned at the left side and the right side of the vehicle;
the reference regions include a first reference region, a second reference region, and a third reference region; wherein the first reference area is located right behind the vehicle, and the second reference area and the third reference area are located on two sides of the first reference area respectively;
the determining whether a target moving object exists in a reference region in a plurality of the consecutive image frames includes:
judging whether a first target moving object exists in a first reference area in a plurality of continuous image frames, and if so, confirming that a first moving object exists in the first image data; and/or,
determining whether a second target moving object exists in a second reference region in the plurality of the consecutive image frames; if so, confirming that a first moving object exists in the first image data, and acquiring second image data of the second blind area; and/or,
determining whether a third target moving object exists in a third reference region in the plurality of the consecutive image frames; and if so, confirming that a first moving object exists in the first image data, and acquiring third image data of the third blind area.
A10. According to the method as set forth in A9,
the first reference area comprises a first reference sub-area, a second reference sub-area and a third reference sub-area, and the first reference sub-area, the second reference sub-area and the third reference sub-area sequentially correspond to the vehicle rear side area;
the determining whether a first target moving object exists in a first reference region in a plurality of the continuous image frames, and if so, determining that a first moving object exists in the first image data, includes:
judging whether a first target moving object exists in the first reference sub-area, the second reference sub-area and the third reference sub-area in the plurality of continuous image frames, and if the first target moving object exists in the third reference sub-area, sending a first prompt tone and current image data collected in the first blind area to the vehicle or terminal equipment associated with the vehicle for safety early warning; if a first target moving object exists in the second reference sub-area, sending a second prompt tone and current image data collected in the first blind area to the vehicle or terminal equipment associated with the vehicle for safety early warning; if a first target moving object exists in the first reference sub-area, sending a third prompt tone and current image data collected in the first blind area to the vehicle or terminal equipment associated with the vehicle for safety early warning;
the early warning grade of the third prompt tone is higher than that of the second prompt tone, and the early warning grade of the second prompt tone is higher than that of the first prompt tone.
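The graded warning of item A10 amounts to a lookup from the triggered sub-area to a prompt tone, as in the sketch below; the sub-area indices and tone identifiers are illustrative assumptions.

```python
def choose_prompt_tone(triggered_sub_area: int) -> str:
    """Map the first reference sub-area that contains the target to a prompt tone.

    Following item A10: the third sub-area yields the first (lowest-level) tone and
    the first sub-area the third (highest-level) tone.
    """
    tones = {3: "first_prompt_tone",
             2: "second_prompt_tone",
             1: "third_prompt_tone"}
    return tones.get(triggered_sub_area, "first_prompt_tone")
```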
A11. According to the method as set forth in A9,
after confirming that the first moving object exists in the first image data and acquiring the second image data of the second blind area, the method further comprises the following steps:
when first steering data of the vehicle is acquired and the first steering data corresponds to the second blind area, acquiring a second speed of a second moving object in second image data;
if the second speed of a second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in a second blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording;
if the second speed of a second moving object in the second image data is not greater than the current speed of the vehicle, sending the current image data acquired in the second blind area and the third blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle divides and displays the current image data acquired in the second blind area and the third blind area on a display screen, and records the current image data of the divided and displayed two times respectively; or sending the current image data collected in the first blind area, the second blind area and the third blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle can display the current image data collected in the first blind area, the second blind area and the third blind area in a three-division mode on a display screen, and respectively record the current image data displayed in the three-division mode.
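The branching of item A11 after a turn towards the second blind area can be sketched as follows; the send callable, which would also trigger the corresponding screen recordings, is a placeholder rather than an API defined by this application.

```python
from typing import Callable, Dict, Sequence


def on_turn_towards_second_zone(second_speed: float, vehicle_speed: float,
                                frames: Dict[str, bytes],
                                send: Callable[[Sequence[str], Dict[str, bytes]], None]) -> None:
    """Decide which blind-area views to push once the first steering data are acquired."""
    if second_speed > vehicle_speed:
        send(["second"], frames)                   # full view of the second blind area
    else:
        send(["second", "third"], frames)          # two-way split, each pane recorded
        # alternatively: send(["first", "second", "third"], frames) for a three-way split
```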
A12. According to the method as set forth in A11,
if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind area to the vehicle or the terminal device associated with the vehicle, so that the vehicle or the terminal device associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording, including:
if the second speed of a second moving object in the second image data is greater than the current speed of the vehicle, determining whether the distance between the second moving object and the vehicle is smaller than a preset distance, if so, sending the current image data collected in a second blind area to the vehicle or terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment associated with the vehicle displays the current image data collected in the second blind area, and performing screen recording.
A13. According to the method as set forth in A9,
after confirming that the first moving object exists in the first image data and acquiring the third image data of the third blind area, the method includes:
when second steering data of the vehicle is acquired and the second steering data corresponds to the third blind area, acquiring a third speed of a third moving object in third image data;
and if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind area to the vehicle or the terminal equipment associated with the vehicle.
A14. According to the method as set forth in A13,
if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind area to the vehicle or the terminal device associated with the vehicle, including:
and if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, determining whether the distance between the third moving object and the vehicle is less than a preset distance, and if so, sending the current image data collected in the third blind area to the vehicle or the terminal equipment associated with the vehicle.
A15. According to the method as set forth in A1,
the vehicle blind areas comprise a first blind area, a second blind area and a third blind area; the first blind area is located on the rear side of the vehicle, and the second blind area and the third blind area are located on the left side and the right side of the vehicle respectively;
the method further comprises the following steps:
when a door opening signal of the vehicle is acquired, whether a fourth moving object exists in the second blind area or the third blind area corresponding to the door is confirmed, if yes, a fourth prompt tone and corresponding current image data collected by the second blind area or the third blind area are sent to the vehicle or terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment displays the current image data on a corresponding display screen and controls a loudspeaker of the vehicle or the terminal equipment to play the fourth prompt tone, and early warning prompt is carried out on personnel in the vehicle.
A16. The method of A15, the method further comprising:
when the current gear of the vehicle is a preset gear, confirming whether a fifth moving object exists in the second blind area and/or the third blind area, and if so, sending a fifth prompt tone and the corresponding current image data collected in the second blind area and/or the third blind area to the vehicle or the terminal equipment associated with the vehicle, so that the vehicle or the terminal equipment displays the current image data on a corresponding display screen and controls a loudspeaker of the vehicle or the terminal equipment to play the fifth prompt tone, so as to give an early warning prompt to the personnel in the vehicle.
B17. A safety precaution method of vehicle driving, the method comprising:
the vehicle-mounted device receives current image data which are sent by the image processing device and collected in the vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The image processing device sends the current image data acquired in the vehicle blind area when the image processing device acquires the image data acquired in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the vehicle-mounted device;
B18. The method of B17, the method further comprising:
and if the current image data collected in the vehicle blind area is the current image data collected in the second blind area, displaying the current image data on a display screen of the vehicle-mounted device.
B19. The method of B17, the method further comprising:
if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, displaying the current image data on a display screen of the vehicle-mounted device;
and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, the second blind area and the third blind area, simultaneously displaying the current image data collected in the first blind area, the second blind area and the third blind area on a display screen of the vehicle-mounted device.
B20. The method of B17, the method further comprising:
when a door opening signal of the vehicle is acquired, the opening signal is sent to the image processing device, so that the image processing device confirms whether a sixth moving object exists in the second blind area or the third blind area corresponding to the door, and if so, a sixth prompt tone and current image data collected by the second blind area or the third blind area corresponding to the door are sent to the vehicle-mounted device;
and after receiving the sixth prompt tone and the corresponding current image data collected in the second blind area or the third blind area, displaying the current image data on a display screen of the vehicle-mounted device and controlling a loudspeaker of the vehicle or the vehicle-mounted device to play the sixth prompt tone, so as to give an early warning prompt to the personnel in the vehicle.
B21. The method of B20, the method further comprising:
receiving a seventh prompt tone sent by the image processing device and current image data collected in the second blind area and/or the third blind area, wherein the seventh prompt tone is generated by the image processing device when the current gear of the vehicle is a preset gear and it is confirmed that a seventh moving object exists in the second blind area and/or the third blind area;
and displaying the current image data collected by the second blind area and/or the third blind area on a display screen of the vehicle-mounted device, and controlling the vehicle or a loudspeaker of the vehicle-mounted device to play the seventh prompt tone so as to give an early warning prompt to personnel in the vehicle.
B22. According to the method as set forth in B17,
the vehicle-mounted device receives the current image data collected in the vehicle blind area and sent by the image processing device, and the method further comprises the following steps:
the vehicle-mounted device receives current image data and preset parameters which are sent by the image processing device and collected in the vehicle blind area;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying on a display screen of the vehicle-mounted device, wherein the displaying comprises:
and if the current image data acquired in the vehicle blind area is the current image data acquired in the first blind area, displaying the current image data acquired in the first blind area on a display screen of the vehicle-mounted device based on preset parameters, recording the current image data, and storing the recorded current image data to a server.
B23. According to the method as set forth in B22,
the vehicle-mounted device comprises a sound pickup for collecting environmental sounds of the vehicle;
the displaying of the current image data collected in the first blind area on the display screen of the vehicle-mounted device, recording of the current image data, and storage of the recorded current image data to a server, includes:
displaying the current image data collected in the first blind area on the display screen of the vehicle-mounted device, performing screen recording on the display screen, collecting the current environment sound of the vehicle through the sound pickup, and storing the recorded image data and the current environment sound to the server.
C24. A safety precaution method of vehicle driving, the method comprising:
the mobile terminal receives current image data which are sent by the image processing device and collected in the vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The image processing device sends the current image data acquired in the vehicle blind area when the image processing device acquires the image data acquired in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal;
and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the mobile terminal.
C25. The method of C24, the method further comprising:
and if the current image data collected in the vehicle blind area is the current image data collected in the second blind area, displaying the current image data on a display screen of the mobile terminal.
C26. The method of C24, the method further comprising:
receiving an eighth prompt tone sent by an image processing device and current image data collected in the second blind area and/or the third blind area;
displaying the current image data collected in the second blind area or the third blind area on a display screen of the mobile terminal and controlling a loudspeaker to play the eighth prompt tone, so as to give an early warning prompt to the personnel in the vehicle; wherein the eighth prompt tone is generated when the vehicle-mounted device acquires a door opening signal of the vehicle and sends it to the image processing device, or when the current gear of the vehicle is a preset gear, and the image processing device confirms that an eighth moving object exists in the current image data collected in the second blind area or the third blind area.
C27. The method of C24, the method further comprising:
if the current image data collected in the vehicle blind area is the current image data collected in the third blind area, displaying the current image data on the display screen; and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, the second blind area and the third blind area, simultaneously displaying the current image data collected in the first blind area, the second blind area and the third blind area on a display screen of the mobile terminal.
C28. According to the method as set forth in C24,
the mobile terminal receives the current image data collected in the vehicle blind area and sent by the image processing device, and the method further comprises the following steps:
the mobile terminal receives current image data and preset parameters which are sent by the image processing device and collected in the vehicle blind area;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying on a display screen of the mobile terminal, wherein the displaying comprises:
and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data collected in the first blind area on a display screen of the mobile terminal based on preset parameters, recording the current image data, and storing the recorded current image data to a server.
C29. According to the method as set forth in C28,
the displaying the current image data collected by the first blind area on the display screen of the mobile terminal, recording the current image data, and storing the recorded current image data to a server includes:
displaying the current image data collected in the first blind area on the display screen of the mobile terminal, performing screen recording on the display screen, collecting the current environment sound of the vehicle through a sound pickup of the vehicle-mounted equipment, and storing the recorded current image data and the current environment sound to the server.
C30. The method of C24, the method further comprising:
and responding to a first touch instruction, and sending a first setting parameter to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device performs setting based on the first setting parameter.
C31. The method of C24, the method further comprising:
responding to a second touch instruction, and acquiring historical image data from a local storage or a server;
and displaying the historical image data.
D32. An image processing apparatus comprising a processor and a memory connected to the processor;
wherein the memory is adapted to store program data and the processor is adapted to execute the program data to perform the method of any of A1-A16.
E33. An in-vehicle apparatus comprising a processor and a memory connected to the processor;
wherein the memory is configured to store program data and the processor is configured to execute the program data to perform the method of any of B17-B23.
F34. A mobile terminal comprising a processor and a memory connected to the processor;
wherein the memory is adapted to store program data and the processor is adapted to execute the program data to perform the method of any of C24-C31.
G35. A readable storage medium for storing program data which, when executed by a processor, is for implementing a method as described in any one of A1-A16, or B17-B23, or C24-C31.
H36. A safety early warning system for vehicle driving comprises an image processing device, a vehicle-mounted device and a mobile terminal;
wherein the image processing device is the image processing device described in D32, the in-vehicle device is the in-vehicle device described in E33, and the mobile terminal is the mobile terminal described in F34.

Claims (10)

1. A safety warning method for vehicle driving, the method comprising:
the image processing device acquires image data collected in a vehicle blind area;
acquiring a first speed of a moving object in the image data;
and if the first speed of the moving object in the image data is greater than the current speed of the vehicle, sending the current image data collected in the vehicle blind area to the vehicle or the terminal equipment associated with the vehicle to perform safety early warning.
2. The method of claim 1,
the vehicle blind zone comprises a first blind zone; wherein the first blind area is located on a rear side of the vehicle;
the acquiring of the image data collected in the vehicle blind area comprises the following steps:
acquiring first image data collected in the first blind area;
the acquiring a first velocity of a moving object in the image data comprises:
acquiring a first distance between a first moving object in the first image data and the vehicle at a previous moment, and acquiring a second distance between the first moving object and the vehicle at a current moment;
and calculating a first speed of the first moving object according to the first distance and the second distance.
3. The method of claim 2,
the acquiring a first distance between a first moving object in the first image data and the vehicle at a previous time and acquiring a second distance between the first moving object and the vehicle at a current time comprises:
detecting whether a first moving object exists in the first image data;
if yes, acquiring a first distance between a first moving object in the first image data and the vehicle at a previous moment, and acquiring a second distance between the first moving object and the vehicle at a current moment.
4. A safety warning method for vehicle driving, the method comprising:
the vehicle-mounted device receives current image data which are sent by the image processing device and collected in the vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The image processing device sends the current image data acquired in the vehicle blind area when the image processing device acquires the image data acquired in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the vehicle-mounted device;
and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the vehicle-mounted device.
5. A safety warning method for vehicle driving, the method comprising:
the mobile terminal receives current image data which are sent by the image processing device and collected in the vehicle blind areas, wherein the vehicle blind areas comprise a first blind area, a second blind area and a third blind area. The image processing device sends the current image data acquired in the vehicle blind area when the image processing device acquires the image data acquired in the vehicle blind area and confirms that the first speed of a moving object in the image data is greater than the current speed of the vehicle;
if the current image data collected in the vehicle blind area is the current image data collected in the first blind area, displaying the current image data on a display screen of the mobile terminal;
and if the current image data collected in the vehicle blind area is the current image data collected in the first blind area and the second blind area, simultaneously displaying the current image data collected in the first blind area and the second blind area on a display screen of the mobile terminal.
6. An image processing apparatus, characterized in that the image processing apparatus comprises a processor and a memory connected with the processor;
wherein the memory is for storing program data and the processor is for executing the program data to implement the method of any one of claims 1-3.
7. An in-vehicle apparatus, characterized in that the in-vehicle apparatus includes a processor and a memory connected to the processor;
wherein the memory is for storing program data and the processor is for executing the program data to implement the method of claim 4.
8. A mobile terminal, characterized in that the mobile terminal comprises a processor and a memory connected with the processor;
wherein the memory is for storing program data and the processor is for executing the program data to implement the method of claim 5.
9. A readable storage medium, characterized in that the readable storage medium is for storing program data, which when executed by a processor is for implementing the method as claimed in any one of claims 1-3, or claim 4 or claim 5.
10. The safety early warning system for vehicle driving is characterized by comprising an image processing device, a vehicle-mounted device and a mobile terminal;
wherein the image processing apparatus is according to claim 6, the in-vehicle apparatus is according to claim 7, and the mobile terminal is according to claim 8.
CN202010716968.XA 2020-07-23 2020-07-23 Safety early warning method and system for vehicle driving and related device Active CN111845557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010716968.XA CN111845557B (en) 2020-07-23 2020-07-23 Safety early warning method and system for vehicle driving and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010716968.XA CN111845557B (en) 2020-07-23 2020-07-23 Safety early warning method and system for vehicle driving and related device
PCT/CN2020/125077 WO2022016730A1 (en) 2020-07-23 2020-10-30 Safety alert method and system for vehicle driving, and related apparatus

Publications (2)

Publication Number Publication Date
CN111845557A true CN111845557A (en) 2020-10-30
CN111845557B CN111845557B (en) 2022-04-29

Family

ID=72949448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010716968.XA Active CN111845557B (en) 2020-07-23 2020-07-23 Safety early warning method and system for vehicle driving and related device

Country Status (2)

Country Link
CN (1) CN111845557B (en)
WO (1) WO2022016730A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022016731A1 (en) * 2020-07-23 2022-01-27 深圳市健创电子有限公司 Image processing method for vehicle blind spots, system, and related apparatus

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007015662A (en) * 2005-07-11 2007-01-25 Toyota Motor Corp Blind corner monitor system
CN102136147A (en) * 2011-03-22 2011-07-27 深圳英飞拓科技股份有限公司 Target detecting and tracking method, system and video monitoring device
US20160196748A1 (en) * 2015-01-02 2016-07-07 Atieva, Inc. Automatically Activated Blind Spot Camera System
CN106740470A (en) * 2016-11-21 2017-05-31 奇瑞汽车股份有限公司 A kind of blind area monitoring method and system based on full-view image system
CN106740505A (en) * 2017-02-07 2017-05-31 深圳市小飞达电子有限公司 A kind of automobile rear view mirror blind zone detecting system and automobile rearview mirror
CN107031661A (en) * 2017-03-16 2017-08-11 浙江零跑科技有限公司 A kind of lane change method for early warning and system based on blind area camera input
CN107161081A (en) * 2017-05-11 2017-09-15 重庆长安汽车股份有限公司 A kind of right side fade chart picture automatically opens up system and method
CN207106337U (en) * 2017-04-10 2018-03-16 江苏车视杰电子有限公司 A kind of vehicle blind zone early warning system
CN108010383A (en) * 2017-09-29 2018-05-08 北京车和家信息技术有限公司 Blind zone detection method, device, terminal and vehicle based on driving vehicle
CN108674313A (en) * 2018-06-05 2018-10-19 浙江零跑科技有限公司 A kind of blind area early warning system and method based on vehicle-mounted back vision wide angle camera
CN109204141A (en) * 2018-09-19 2019-01-15 深圳市众鸿科技股份有限公司 Method for early warning and device in vehicle travel process
CN109591698A (en) * 2017-09-30 2019-04-09 上海欧菲智能车联科技有限公司 Blind area detection system, blind zone detection method and vehicle
KR101941903B1 (en) * 2017-08-25 2019-04-12 문영실 Device and method that rearview mirror serves as camera monitor and prevents collision with navigation
CN109952231A (en) * 2016-12-30 2019-06-28 金泰克斯公司 With the on-demand full display mirror for scouting view
CN110228416A (en) * 2019-06-24 2019-09-13 合肥工业大学 A kind of early warning system and its method based on driver's turning vision dead zone detection

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10173590B2 (en) * 2017-02-27 2019-01-08 GM Global Technology Operations LLC Overlaying on an in-vehicle display road objects associated with potential hazards
CN110341640A (en) * 2018-04-02 2019-10-18 郑州宇通客车股份有限公司 A kind of visual early warning system of car and its mobile detection alarming method for power
CN109815832A (en) * 2018-12-28 2019-05-28 深圳云天励飞技术有限公司 Driving method for early warning and Related product
CN110364024A (en) * 2019-06-10 2019-10-22 深圳市锐明技术股份有限公司 Environment control method, device and the car-mounted terminal of driving vehicle
CN110901536A (en) * 2019-12-09 2020-03-24 江苏理工学院 Blind area detection alarm system and working method thereof


Also Published As

Publication number Publication date
CN111845557B (en) 2022-04-29
WO2022016730A1 (en) 2022-01-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant