WO2022016731A1 - Image processing method, system and related device for vehicle blind spots - Google Patents

Image processing method, system and related device for vehicle blind spots

Info

Publication number
WO2022016731A1
Authority
WO
WIPO (PCT)
Prior art keywords
blind spot
vehicle
image data
data collected
current
Prior art date
Application number
PCT/CN2020/125090
Other languages
English (en)
French (fr)
Inventor
林基业
Original Assignee
深圳市健创电子有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市健创电子有限公司
Publication of WO2022016731A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/105Speed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • B60R2300/8026Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views in addition to a rear-view mirror system
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2520/00Input parameters relating to overall vehicle dynamics
    • B60W2520/10Longitudinal speed

Definitions

  • the present application relates to the technical field of vehicle applications, and in particular, to an image processing method, system and related device for a blind spot of a vehicle.
  • a rear-view mirror blind spot warning system which uses radar or microwave to detect whether there is an approaching vehicle behind the vehicle and alerts the driver through sound or light.
  • This approach can only provide a vague reminder.
  • The driver cannot proactively make adaptive adjustments and must turn his or her eyes to one side of the rearview mirror to notice the alert.
  • Meanwhile, the road conditions on the other side are left unattended, which brings new safety risks to drivers and other road users.
  • the main technical problem to be solved by the present application is to provide an image processing method, system and related device for a blind spot of a vehicle, which can improve the driving safety of the vehicle.
  • a technical solution adopted in the present application is to provide an image processing method for a blind spot of a vehicle.
  • the method includes: an image processing device obtains first steering data of a vehicle, and obtains the blind spot of the vehicle corresponding to the first steering data; wherein the blind spots of the vehicle include a first blind spot and a second blind spot, and the first blind spot and the second blind spot are respectively located from the left side to the left-rear side and from the right side to the right-rear side of the vehicle; if the vehicle blind spot corresponds to the first blind spot, the first image data collected in the first blind spot is obtained;
  • the first speed of the first moving object in the first image data is obtained; if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device; if the first speed of the first moving object in the first image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
  • the method further includes: if the blind spot of the vehicle corresponds to the second blind spot, obtaining second image data collected in the second blind spot and obtaining the second speed of the second moving object in the second image data; if the second speed of the second moving object is greater than the current speed of the vehicle, the current image data collected in the second blind spot is sent to the display device; if the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
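The mirrored rule for the first and second blind spots can be condensed into a single routing decision. The following is an illustrative sketch only; the function and zone names are not from the patent:

```python
def select_blind_spot_feeds(object_speed, vehicle_speed, active_zone):
    """Route blind-spot camera feeds to the display device.

    If the moving object in the active blind spot is faster than the
    vehicle, only that blind spot's feed is sent; otherwise the feeds
    of both blind spots are sent.
    """
    if object_speed > vehicle_speed:
        return [active_zone]
    return ["first", "second"]

# A faster, overtaking object narrows the display to the active zone:
print(select_blind_spot_feeds(70.0, 60.0, "first"))   # ['first']
print(select_blind_spot_feeds(50.0, 60.0, "second"))  # ['first', 'second']
```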
  • acquiring the first speed of the first moving object in the first image data includes: acquiring the first distance between the first moving object and the vehicle at the previous moment, and acquiring the second distance between the first moving object and the vehicle at the current moment; the first speed of the first moving object is calculated according to the first distance and the second distance.
  • sending the current image data collected in the first blind spot to the display device includes: if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, and the second distance is less than a first preset distance, sending the current image data collected in the first blind spot to the display device.
  • acquiring the second speed of the second moving object in the second image data includes: acquiring the third distance between the second moving object and the vehicle at the previous moment, and acquiring the fourth distance between the second moving object and the vehicle at the current moment; the second speed of the second moving object is calculated according to the third distance and the fourth distance.
  • sending the current image data collected in the second blind spot to the display device includes: if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, and the fourth distance is less than a second preset distance, sending the current image data collected in the second blind spot to the display device.
  • the blind spot of the vehicle further includes a third blind spot
  • the third blind spot is arranged on the rear side of the vehicle
  • the method further includes: obtaining third image data collected in the third blind spot; obtaining the third speed of the third moving object in the third image data; if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, the current image data collected in the third blind spot is sent to the display device.
  • sending the current image data collected in the third blind spot to the display device includes: if the third speed of the third moving object is greater than the current speed of the vehicle, acquiring second steering data of the vehicle; if the second steering data of the vehicle is not acquired, sending the current image data collected in the third blind spot to the display device.
  • the method further includes: if the third moving object in the third blind spot moves into the first blind spot or the second blind spot, acquiring the fourth speed of the third moving object in the first blind spot or the second blind spot; if the fourth speed of the third moving object in the first blind spot or the second blind spot is greater than the current speed of the vehicle, sending the current image data collected in the first blind spot or the second blind spot to the display device.
  • acquiring the first steering data of the vehicle and acquiring the blind spot of the vehicle corresponding to the first steering data includes: acquiring fourth image data collected by the front camera of the vehicle; acquiring the first angle between the lane line and the vehicle in the fourth image data; if the first angle is greater than a first preset angle, confirming that the first steering data corresponds to the first blind spot; if the first angle is smaller than a second preset angle, confirming that the first steering data corresponds to the second blind spot.
  • acquiring the first angle between the lane line and the vehicle in the fourth image data includes: identifying the first lane line at the current moment and the second lane line at the previous moment in the fourth image data; calculating the included angle formed between the first lane line and the second lane line, and taking that angle as the first angle.
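The included-angle test above can be sketched as follows. This is an assumption-laden illustration: lane lines are represented as (dx, dy) direction vectors in image coordinates, a signed angle stands in for the "first angle", and the threshold values are placeholders for the first/second preset angles, which the patent does not specify:

```python
import math

def steering_from_lane_lines(prev_line, curr_line,
                             first_preset_deg=5.0, second_preset_deg=-5.0):
    """Classify steering from the included angle between the lane line
    at the previous moment (prev_line) and at the current moment
    (curr_line), each given as a (dx, dy) direction vector."""
    angle = math.degrees(
        math.atan2(curr_line[1], curr_line[0])
        - math.atan2(prev_line[1], prev_line[0])
    )
    if angle > first_preset_deg:
        return "first"   # steering toward the first blind spot
    if angle < second_preset_deg:
        return "second"  # steering toward the second blind spot
    return None          # no significant steering detected

print(steering_from_lane_lines((1.0, 0.0), (1.0, 0.2)))   # first
print(steering_from_lane_lines((1.0, 0.0), (1.0, -0.2)))  # second
```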
  • the method further includes a first preset parameter, a second preset parameter and a third preset parameter. If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, sending the current image data collected in the first blind spot to the display device includes: obtaining the continuous duration of the first steering data; if the continuous duration of the first steering data is longer than a preset time, sending the current image data collected in the first blind spot and the first preset parameter to the display device, so that the display device is configured according to the first preset parameter; if the continuous duration of the first steering data is not longer than the preset time, calculating the first difference between the first speed and the current speed; if the first difference is greater than a preset difference, sending the current image data collected in the first blind spot and the second preset parameter to the display device, so that the display device is configured according to the second preset parameter; if the first difference is not greater than the preset difference, sending the current image data collected in the first blind spot and the third preset parameter to the display device, so that the display device is configured according to the third preset parameter.
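The three-way parameter choice driven by turn-signal duration and speed difference amounts to a simple branch. In this sketch the returned labels are placeholders for the patent's first/second/third preset parameters, and all threshold values are assumptions:

```python
def pick_preset_parameter(turn_duration_s, preset_time_s,
                          first_speed, vehicle_speed, preset_diff):
    """Select which preset display parameter accompanies the
    first-blind-spot image data.

    A long-held turn signal selects the first parameter set; otherwise
    the speed difference between the moving object and the vehicle
    decides between the second and third parameter sets.
    """
    if turn_duration_s > preset_time_s:
        return "first_preset"
    if first_speed - vehicle_speed > preset_diff:
        return "second_preset"
    return "third_preset"

print(pick_preset_parameter(3.0, 2.0, 70.0, 60.0, 5.0))  # first_preset
print(pick_preset_parameter(1.0, 2.0, 70.0, 60.0, 5.0))  # second_preset
print(pick_preset_parameter(1.0, 2.0, 63.0, 60.0, 5.0))  # third_preset
```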
  • the method further includes: when the current gear of the vehicle is a preset gear, if it is detected that the door of the vehicle is opened, an early warning reminder is performed.
  • the method includes: an on-board device receives the current image data collected in the blind spot of the vehicle and sent by an image processing device, wherein the on-board device includes a display screen, and the blind spots of the vehicle include the first blind spot and the second blind spot; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed on the display screen simultaneously.
  • the method further includes: if the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, displaying it on the display screen.
  • the vehicle blind spot further includes a third blind spot
  • the method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the third blind spot, displaying it on the display screen; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, the second blind spot and the third blind spot, displaying the current image data collected in the first blind spot, the second blind spot and the third blind spot on the display screen simultaneously.
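The on-board display logic described above scales from one feed (fullscreen) to two or three feeds (shown simultaneously). A minimal sketch, with illustrative layout names that are not from the patent:

```python
def choose_layout(feeds):
    """Pick how the on-board display presents the received blind-spot
    feeds: a single feed fills the screen, while two or three feeds
    are shown side by side simultaneously."""
    if len(feeds) == 1:
        return {"layout": "fullscreen", "panes": list(feeds)}
    return {"layout": "split-%d" % len(feeds), "panes": list(feeds)}

print(choose_layout(["first"]))
# {'layout': 'fullscreen', 'panes': ['first']}
print(choose_layout(["first", "second", "third"]))
# {'layout': 'split-3', 'panes': ['first', 'second', 'third']}
```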
  • the in-vehicle device receiving the current image data collected in the blind spot of the vehicle sent by the image processing device further includes: the in-vehicle device receives the current image data and preset parameters collected in the blind spot of the vehicle sent by the image processing device; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying it on the display screen includes: based on the preset parameters, displaying the current image data collected in the first blind spot on the display screen, recording the current image data, and storing the recorded current image data to the server.
  • the method includes: a mobile terminal receives the current image data collected in the blind spot of the vehicle and sent by an image processing device, wherein the blind spots of the vehicle include a first blind spot and a second blind spot; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen of the mobile terminal; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed on the display screen of the mobile terminal simultaneously.
  • the method further includes: if the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, displaying it on the display screen.
  • the vehicle blind spot further includes a third blind spot
  • the method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the third blind spot, displaying it on the display screen; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, the second blind spot and the third blind spot, displaying the current image data collected in the first blind spot, the second blind spot and the third blind spot on the display screen of the mobile terminal simultaneously.
  • the mobile terminal receiving the current image data collected in the blind spot of the vehicle sent by the image processing device further includes: the mobile terminal receives the current image data and preset parameters collected in the blind spot of the vehicle sent by the image processing device; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying it on the display screen of the mobile terminal includes: based on the preset parameters, displaying the current image data collected in the first blind spot on the display screen of the mobile terminal, recording the current image data, and storing the recorded current image data to the server.
  • the method further includes: in response to the first touch command, sending a first setting parameter to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device perform setting based on the first setting parameter.
  • the method further includes: in response to the second touch instruction, acquiring historical image data from a local storage or a server; and playing the historical image data.
  • Another technical solution adopted in the present application is to provide an image processing device. The image processing device includes a processor and a memory connected to the processor; wherein the memory is used to store program data, and the processor is used to execute the program data to implement the method provided by the above technical solution.
  • the vehicle-mounted device includes a processor and a memory connected to the processor; wherein the memory is used for storing program data, and the processor is used for executing the program data, so as to implement the method provided by the above technical solution.
  • the mobile terminal includes a processor and a memory connected to the processor; wherein the memory is used for storing program data, and the processor is used for executing the program data, so as to implement the method provided by the above technical solution.
  • Another technical solution adopted in the present application is to provide a readable storage medium, where the readable storage medium is used to store program data, and when the program data is executed by a processor, it is used to implement any of the methods provided by the above technical solutions.
  • Another technical solution adopted in the present application is to provide an image processing system for a blind spot of a vehicle. The image processing system includes an image processing device, a vehicle-mounted device and a mobile terminal; wherein the image processing device is the image processing device provided by the above technical solution, the vehicle-mounted device is the vehicle-mounted device provided by the above technical solution, and the mobile terminal is the mobile terminal provided by the above technical solution.
  • the image processing method for the blind spot of the vehicle obtains the first steering data of the vehicle through the image processing device, and obtains the blind spot of the vehicle corresponding to the first steering data; wherein the blind spots of the vehicle include a first blind spot and a second blind spot located on the left and right sides of the vehicle respectively; if the blind spot of the vehicle corresponds to the first blind spot, the first image data collected in the first blind spot is obtained, and the first speed of the first moving object in the first image data is obtained; if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device; if the first speed of the first moving object in the first image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
  • In this way, the problems that the existing blind spot early-warning schemes cannot provide accurate reminders or coordinate multiple sides are overcome, active configuration of the blind spot image system of each rearview mirror of the vehicle is realized, and the driving safety of the vehicle can be improved.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image processing method for a blind spot of a vehicle provided by the present application
  • FIG. 2 is a detailed flowchart of step 13 in FIG. 1 provided by the present application;
  • FIG. 3 is a schematic display diagram of the display device provided by the present application.
  • FIG. 4 is a schematic flowchart of a second embodiment of an image processing method for a blind spot of a vehicle provided by the present application;
  • FIG. 5 is another display schematic diagram of the display device provided by the present application.
  • FIG. 6 is a schematic flowchart of a third embodiment of an image processing method for a blind spot of a vehicle provided by the present application.
  • FIG. 7 is a schematic diagram of a first comparison of lane lines provided by the present application.
  • FIG. 9 is a schematic diagram of an application scenario of an image processing method for a blind spot of a vehicle provided by the present application.
  • FIG. 10 is a schematic flowchart of the fourth embodiment of the image processing method for the blind spot of a vehicle provided by the present application.
  • FIG. 11 is a schematic flowchart of the fifth embodiment of the image processing method for the blind spot of a vehicle provided by the present application.
  • FIG. 12 is a schematic diagram of a first display interface of a mobile terminal in the image processing method for a blind spot of a vehicle provided by the present application;
  • FIG. 13 is a schematic diagram of a second display interface of a mobile terminal in the image processing method for a blind spot of a vehicle provided by the present application;
  • FIG. 14 is a schematic diagram of a third display interface of a mobile terminal in the image processing method for a blind spot of a vehicle provided by the present application;
  • FIG. 15 is a schematic structural diagram of an embodiment of an image processing apparatus provided by the present application.
  • FIG. 16 is a schematic structural diagram of an embodiment of a vehicle-mounted device provided by the present application.
  • FIG. 17 is a schematic structural diagram of an embodiment of a mobile terminal provided by the present application.
  • FIG. 18 is a schematic structural diagram of an embodiment of a readable storage medium provided by the present application.
  • FIG. 19 is a schematic structural diagram of an embodiment of an image processing system for a blind spot of a vehicle provided by the present application.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image processing method for a blind spot of a vehicle provided by the present application. The method includes:
  • Step 11 The image processing device acquires the first steering data of the vehicle, and acquires the blind spot of the vehicle corresponding to the first steering data.
  • the blind spot of the vehicle includes a first blind spot and a second blind spot
  • the first blind spot and the second blind spot are respectively located from the left side to the left rear side or the right side to the right rear side of the vehicle.
  • The first blind spot and the second blind spot of the vehicle are correspondingly provided with image acquisition devices, and the image acquisition devices belong to the image processing device.
  • the image content corresponding to the first blind area and the second blind area is acquired by the image acquisition device.
  • the image acquisition apparatus starts to collect data of the first blind spot and the second blind spot, and when the image processing apparatus acquires the first steering data, acquires the vehicle blind area corresponding to the first steering data.
  • the first steering data may be a steering signal, such as a steering signal manually operated by a vehicle driver, or may be a steering angle of a steering wheel.
  • the first blind spot corresponds to the left side of the vehicle
  • the second blind spot corresponds to the right side of the vehicle. If the driver of the vehicle operates the turn signal to turn left, the relevant data of the first blind spot is obtained; if the driver operates the turn signal to turn right, the relevant data of the second blind spot is obtained.
  • Step 12 If the blind spot of the vehicle corresponds to the first blind spot, acquire the first image data collected in the first blind spot.
  • the first image data is image data of a preset time length
  • the end time of the image data is the current time point
  • Step 13 Acquire the first speed of the first moving object in the first image data.
  • step 13 may specifically be the following steps:
  • Step 131 Obtain the first distance between the first moving object and the vehicle at the previous moment in the first image data, and obtain the second distance between the first moving object and the vehicle at the current moment.
  • the first image data includes multiple image frames. Whether the first moving object exists in each image frame is detected, and if so, that image frame is confirmed to be a valid image frame. Multiple valid image frames are thereby obtained; the second distance between the first moving object and the vehicle is obtained from the valid image frame at the current moment, and the first distance between the first moving object and the vehicle is obtained from the valid image frame at the moment immediately preceding the current valid image frame.
  • Step 132 Calculate the first speed of the first moving object according to the first distance and the second distance.
  • after obtaining the current speed of the vehicle and the time difference between the above two valid image frames, the first speed of the first moving object can be calculated according to the first distance and the second distance.
  • For example, if the current speed of the vehicle is V1, the first distance is L1, the second distance is L2, and the time difference between the two valid image frames is t, the first speed of the first moving object can be calculated as V1 + (L1 - L2)/t: the gap closing at the rate (L1 - L2)/t means the object moves that much faster than the vehicle.
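Using the variable names above, and assuming the object's speed is the vehicle speed plus the rate at which the gap between them closes (the formula itself is not spelled out in this extract), the calculation can be sketched as:

```python
def first_moving_object_speed(l1, l2, t, v1):
    """Estimate the first moving object's speed from the first distance
    l1 (previous valid frame), the second distance l2 (current valid
    frame), the time difference t between the two frames, and the
    vehicle's current speed v1. A shrinking gap (l1 > l2) means the
    object is closing in and therefore moving faster than the vehicle."""
    if t <= 0:
        raise ValueError("time difference between frames must be positive")
    return v1 + (l1 - l2) / t

# Object 20 m behind, then 14 m behind one second later, vehicle at 15 m/s:
print(first_moving_object_speed(20.0, 14.0, 1.0, 15.0))  # 21.0
```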
  • Step 14 If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, send the current image data collected in the first blind spot to the display device.
  • the display device may be a vehicle-mounted device or a mobile terminal.
  • if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the first moving object may overtake the vehicle, and performing a steering operation at this time is less safe. Then, the current image data collected in the first blind spot is sent to the display device.
  • the display device belongs to a vehicle-mounted device, and is connected to the image processing device through the vehicle's built-in wireless connection, Bluetooth, or the CAN bus of the vehicle.
  • the display device also includes a voice reminder function, which can perform a voice reminder when the current image data collected in the first blind area is received.
  • the reminder content is "Please note that there are dangerous moving objects in the first blind spot".
  • step 14 may also be: if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, and the second distance is less than the first preset distance, the current image data collected in the first blind spot is sent to the display device.
  • the display device displays the current image data collected in the first blind spot and controls the speaker to play a prompt sound to remind the driver that there is a moving object in the first blind spot.
  • the driver can watch the image data displayed by the display device to make corresponding driving adjustments.
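The decision in step 14 and its distance-gated variant might be expressed as a small predicate. This is a sketch under assumptions: the names and the optional-distance convention are illustrative, not from the patent.

```python
def should_alert_first_blind_spot(object_speed, vehicle_speed,
                                  current_distance, preset_distance=None):
    """Decide whether the first-blind-spot image should be pushed.

    Variant of step 14: alert when the object is faster than the
    vehicle; optionally also require it to be closer than a preset
    distance (the "second distance < first preset distance" check).
    """
    if object_speed <= vehicle_speed:
        return False
    if preset_distance is not None and current_distance >= preset_distance:
        return False
    return True
```

When the predicate is false, the method falls through to step 15 and both blind-spot streams are shown instead.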
  • Step 15 If the first speed of the first moving object in the first image data is not greater than the current speed of the vehicle, send the current image data collected in the first blind spot and the second blind spot to the display device.
  • the display device displays the current image data collected in the first blind spot and the second blind spot in a two-way split screen, and separately records each of the two displayed image streams.
  • the display device simultaneously displays the current image data collected in the first blind area and the second blind area on the display screen according to the preset settings. As shown in FIG. 3 , the left side of the display screen of the display device displays the first blind spot image, and the right side displays the second blind spot image.
  • the image processing apparatus acquires the image data of the first blind spot and the image data of the second blind spot in real time and sends them to the display device, and the display device displays the image data of the first blind spot and the image data of the second blind spot in the manner shown in FIG. 3.
  • the image processing device acquires the first steering data of the vehicle; if the first steering data corresponds to the first blind spot, it acquires the first image data collected in the first blind spot and obtains the first speed of the first moving object in the first image data. If the first speed is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device. When the display device receives the current image data of the first blind spot, it switches away from the display mode shown in FIG. 3, displays only the current image data of the first blind spot received this time, and issues a voice reminder.
  • during this display, the image processing device still acquires the real-time speed data of the first moving object in the current image data of the first blind spot.
  • when the speed data of the first moving object is less than the current speed of the vehicle, the display device exits the current display mode and switches back to the display mode shown in FIG. 3.
  • if the first steering data corresponds to the second blind spot, the second image data collected in the second blind spot is acquired and the second speed of the second moving object in the second image data is obtained; if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the current image data collected in the second blind spot is sent to the display device.
  • the display device displays the current image data collected in the second blind spot, controls the speaker to play the first prompt sound, and records and stores the current image data collected in the second blind spot. If the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
  • the display device displays the current image data collected in the first blind spot and the second blind spot in a two-way split screen and separately records each stream. It can be understood that when the display device receives the current image data of the second blind spot alone, it switches away from the display mode of FIG. 3, displays only the current image data of the second blind spot received this time, and issues a voice reminder. During this display, the image processing device still acquires the real-time speed data of the second moving object in the current image data of the second blind spot; when that speed is less than the current speed of the vehicle, the display device exits the current display mode and switches back to the display mode shown in FIG. 3.
  • obtaining the second speed of the second moving object in the second image data may comprise obtaining the third distance between the second moving object and the vehicle at the previous moment and the fourth distance between the second moving object and the vehicle at the current moment, and calculating the second speed of the second moving object from the third distance and the fourth distance. If the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, and the fourth distance is less than the second preset distance, the current image data collected in the second blind spot is sent to the display device.
  • the display device displays the current image data collected in the second blind area, and controls the speaker to play the first prompt sound.
  • in another implementation, the continuous time of the first steering data is obtained; if the continuous time of the first steering data is greater than the preset time, the current image data collected in the first blind spot and the first preset parameters are sent to the display device, so that the display device is configured according to the first preset parameters.
  • if the continuous time of the first steering data is greater than the preset time, it can be confirmed that the vehicle will perform a steering maneuver, and the current image data collected in the first blind spot and the first preset parameters are sent to the display device, so that the display device is configured according to the first preset parameters.
  • the first preset parameters include voice broadcasting, recording current image data, zooming and playing the current image data on a display device, and uploading the recorded image data.
  • in another implementation, the continuous time of the first steering data is acquired; if it is not greater than the preset time, the first difference between the first speed and the current speed is calculated. If the first difference is greater than the preset difference, the current image data collected in the first blind spot and the second preset parameters are sent to the display device, so that the display device is configured according to the second preset parameters; here, the first preset parameters are the same as the second preset parameters.
  • if the first difference is not greater than the preset difference, the current image data collected in the first blind spot and the third preset parameters are sent to the display device, so that the display device is configured according to the third preset parameters.
  • the third preset parameters include recording the current image data, enlarging and playing the current image data on the display device, and uploading the recorded image data. Taking these as an example: when the display device receives the third preset parameters and the current image data, it responds by enlarging the current image data by the corresponding proportion, displaying and recording it, and uploading the recorded data to the server.
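The choice among the first, second, and third preset parameter sets described above can be sketched as follows. The set contents mirror the text; the function signature, units, and parameter-set labels are illustrative assumptions.

```python
def select_preset_parameters(steer_duration, preset_time,
                             object_speed, vehicle_speed, preset_diff):
    """Choose which preset parameter set accompanies the image data.

    A long-held turn signal, or a large speed difference, selects the
    full parameter set (with voice broadcast); otherwise the third set
    omits the voice broadcast.
    """
    FULL = {"voice_broadcast", "record", "zoom_play", "upload"}   # 1st/2nd preset
    QUIET = {"record", "zoom_play", "upload"}                     # 3rd preset
    if steer_duration > preset_time:
        return FULL            # first preset parameters
    if object_speed - vehicle_speed > preset_diff:
        return FULL            # second preset parameters (same set as the first)
    return QUIET               # third preset parameters
```

The display device would then configure itself from the returned set, e.g. skipping the voice broadcast when only the quiet set arrives.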
  • through the preset parameters sent by the image processing device, the display device can configure itself accordingly, realizing adaptive configuration of the display device without manual adjustment.
  • in another implementation, if the current gear of the vehicle is a preset gear and a door of the vehicle is detected to be open, an early-warning reminder is issued. For example, if the preset gear is P, the current gear of the vehicle is also P, and a door of the vehicle is detected to be open, moving-object detection is performed on the blind spot corresponding to that door, and a warning is issued if a moving object exists, which can improve vehicle safety.
  • moving-object detection is performed on the blind spot corresponding to the door, and if a moving object exists, an early-warning reminder is issued, which can improve the safety of the vehicle.
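The gear-and-door check above amounts to a simple conjunction. This is a minimal sketch, assuming the gear is reported as a letter such as "P" and the door and blind-spot states have already been obtained; the names are illustrative.

```python
def door_open_warning(gear, door_open, blind_spot_has_moving_object,
                      preset_gear="P"):
    """Warn when the car is in the preset (parked) gear, a door opens,
    and the blind spot covering that door contains a moving object."""
    return (gear == preset_gear and door_open
            and blind_spot_has_moving_object)
```

Only when all three conditions hold would the early-warning reminder be triggered.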
  • in summary, in the image processing method for the vehicle blind spot, an image processing device obtains the first steering data of the vehicle and obtains the vehicle blind spot corresponding to the first steering data, wherein the vehicle blind spot includes a first blind spot and a second blind spot, located respectively from the left side to the left-rear side and from the right side to the right-rear side of the vehicle. If the vehicle blind spot corresponds to the first blind spot, the first image data collected in the first blind spot is obtained and the first speed of the first moving object in the first image data is acquired. If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device; if it is not greater, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
  • compared with existing blind-spot warning schemes, which cannot provide accurate reminders or multi-side coordination, this method realizes the active configuration of the blind-spot image system of each rearview mirror of the vehicle and can improve the driving safety of the vehicle.
  • FIG. 4 is a schematic flowchart of a second embodiment of an image processing method for a blind spot of a vehicle provided by the present application.
  • the method includes:
  • Step 41 Acquire third image data collected in the third blind area.
  • the blind spot of the vehicle further includes a third blind spot, and the third blind spot is arranged on the rear side of the vehicle.
  • Step 42 Acquire the third velocity of the third moving object in the third image data.
  • Step 43 If the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, send the current image data collected in the third blind spot to the display device.
  • in one implementation, before sending the current image data collected in the third blind spot to the display device, the second steering data of the vehicle is acquired; if the second steering data of the vehicle is not acquired, the current image data collected in the third blind spot is sent to the display device. It can be understood that if the second steering data of the vehicle is not obtained, it can be confirmed that the vehicle is driving in a straight line with no imminent steering, so the current image data collected in the third blind spot is sent to the display device to remind the driver to confirm whether steering is necessary to avoid the third moving object, thereby improving driving safety.
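The straight-line check for the third (rear) blind spot can be sketched as a predicate. Representing absent steering data as `None` is an assumption made for illustration; the patent only says the data "is not acquired".

```python
def should_push_third_blind_spot(third_object_speed, vehicle_speed,
                                 second_steering_data):
    """Push the rear (third) blind-spot image only when a faster
    object approaches from behind and no new steering data is seen,
    i.e. the vehicle is driving straight."""
    return (third_object_speed > vehicle_speed
            and second_steering_data is None)
```

If steering data is present, the side-blind-spot logic of the first embodiment takes over instead.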
  • the image data of the first blind spot, the second blind spot, and the third blind spot are acquired in real time by the image processing device and sent to the display device.
  • the display screen of the display device is divided into three display areas, and the image data of the first blind area, the second blind area and the third blind area are displayed.
  • the image processing device acquires the first steering data of the vehicle; if the first steering data corresponds to the first blind spot, it acquires the first image data collected in the first blind spot and obtains the first speed of the first moving object in the first image data. If the first speed is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device. When the display device receives the current image data of the first blind spot, it switches away from the display mode of FIG. 5, displays only the current image data of the first blind spot received this time, and issues a voice reminder.
  • the image processing device will still acquire the real-time speed data of the first moving object in the current image data of the first blind spot.
  • when the speed data of the first moving object is less than the current speed of the vehicle, the display device exits the current display mode and switches back to the display mode shown in FIG. 5. If the first steering data corresponds to the second blind spot, the second image data collected in the second blind spot is acquired and the second speed of the second moving object in the second image data is obtained; if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, the current image data collected in the second blind spot is sent to the display device.
  • when the display device receives the current image data of the second blind spot, it switches away from the display mode shown in FIG. 5, displays only the current image data of the second blind spot received this time, and issues a voice reminder. During this display, the image processing device still acquires the real-time speed data of the second moving object in the current image data of the second blind spot. When that speed is less than the current speed of the vehicle, or the second moving object has disappeared from the second blind spot (for example, it has overtaken the vehicle, or its speed is too low and it has left the collection range of the second blind spot), the display device exits the current display mode and switches back to the display mode shown in FIG. 5.
  • in one implementation, before sending the current image data collected in the third blind spot to the display device, the second steering data of the vehicle is acquired; if the second steering data of the vehicle is not acquired, the current image data collected in the third blind spot is sent to the display device.
  • when the display device receives the current image data of the third blind spot, it switches away from the display mode shown in FIG. 5, displays only the current image data of the third blind spot received this time, and issues a prompt tone or voice reminder to notify the driver that there is a faster object behind the vehicle.
  • a screen recording of the display device is performed at the same time as the prompt tone or voice reminder, and the recorded image data is uploaded to the server or sent to the mobile terminal.
  • in one implementation, when the third moving object enters the first blind spot or the second blind spot, the fourth speed of the third moving object in the first blind spot or the second blind spot is acquired; if the fourth speed is greater than the current speed of the vehicle, the current image data collected in the first blind spot or the second blind spot is sent to the display device, so that the display device displays the received current image in the manner described above and records the current image.
  • an instruction is also sent to the display device to switch the display device from the display mode of FIG. 5 to the display mode of FIG. 3.
  • FIG. 6 is a schematic flowchart of a third embodiment of an image processing method for a blind spot of a vehicle provided by the present application.
  • the method includes:
  • Step 61 Obtain fourth image data collected by the front camera of the vehicle.
  • this embodiment is applicable when the vehicle is turning but the image processing device cannot obtain the steering data from the CAN bus, or is used to further verify the steering data when the image processing device does obtain it.
  • Step 62 Obtain the first angle between the lane line and the vehicle in the fourth image data.
  • step 62 may comprise identifying the first lane line at the current moment and the second lane line at the previous moment in the fourth image data, calculating the included angle formed between the first lane line and the second lane line, and taking that angle as the first angle.
  • the first lane lines at the current moment are B1 and B2; the second lane lines at the previous moment are A1 and A2, and the angle of the included angle formed between B1 and A1 is ⁇ .
  • Step 63 If the first angle is greater than the first preset angle, confirm that the first steering data corresponds to the first blind area.
  • for example, if the first angle is 10 degrees and the first preset angle is 5 degrees, the first angle is greater than the first preset angle, and it is confirmed that the first steering data corresponds to the first blind spot.
  • Step 64 If the first angle is smaller than the second preset angle, confirm that the first steering data corresponds to the second blind area.
  • for example, if the first angle is -10 degrees and the second preset angle is -5 degrees, the first angle is smaller than the second preset angle, and it is confirmed that the first steering data corresponds to the second blind spot.
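Steps 62 to 64 can be sketched as follows. The point-pair representation of a lane line and the sign convention (positive angles mapping to the first blind spot) are assumptions, since the patent does not fix a coordinate system; the helper names are illustrative.

```python
import math

def lane_line_angle(line_prev, line_curr):
    """Signed angle in degrees between the previous-moment lane line
    (e.g. A1) and the current-moment lane line (e.g. B1), each given
    as a pair of (x, y) points. Hypothetical helper; the coordinate
    convention is an assumption."""
    def heading(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    return math.degrees(heading(*line_curr) - heading(*line_prev))

def steering_blind_spot(angle, first_preset=5.0, second_preset=-5.0):
    """Map the first angle onto a blind spot, per steps 63 and 64."""
    if angle > first_preset:
        return "first"    # left side to left-rear blind spot
    if angle < second_preset:
        return "second"   # right side to right-rear blind spot
    return None           # no steering inferred
```

With the example values in the text, an angle of 10 degrees selects the first blind spot and -10 degrees selects the second.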
  • the first blind spot is located from the left side to the left rear side of the vehicle
  • the second blind area is located from the right side to the right rear side of the vehicle.
  • the vehicle C in FIG. 9 uses the method in the above-mentioned embodiments.
  • the vehicle C is driving on a road with three lanes, wherein the three lanes are lane 1, lane 2, and lane 3 respectively.
  • vehicle C is in lane 2.
  • vehicle D in lane 1
  • vehicle F in lane 2 behind vehicle C
  • vehicle E in lane 3.
  • the visible area of the left rearview mirror of vehicle C is the ⁇ 1 area
  • the visible area of the right rearview mirror is the ⁇ 2 area
  • the visible area of the first blind spot is the ⁇ 1 area
  • the visible area of the second blind spot is the ⁇ 2 area.
  • the visible area of the third blind spot is the ⁇ 1 area.
  • the areas of the first blind spot and the second blind spot can collect all areas on the left and right sides of the vehicle, that is, a 180-degree range and a maximum distance of 22 meters.
  • the area of the third blind spot can capture a 110-degree wide-angle area within 22 meters behind the vehicle.
  • the first blind spot, the second blind spot, and the third blind spot simultaneously collect image data of their corresponding areas and send them to the display device. If vehicle D appears in the visible area of the first blind spot of vehicle C at this time, vehicle D is identified to determine whether its speed is greater than the current speed of vehicle C, and if so, a reminder is issued.
  • the display device is switched to display the image data of the first blind spot alone, a reminder is issued, and the image data is saved and uploaded to the server. If vehicle E appears in the visible area of the second blind spot of vehicle C at this time, vehicle E is identified to determine whether its speed is greater than the current speed of vehicle C, and if so, a reminder is issued.
  • the display device is switched to display the image data of the second blind spot alone, a reminder is issued, and the image data is saved and uploaded to the server. If vehicle F appears in the visible area of the third blind spot of vehicle C at this time, vehicle F is identified to determine whether its speed is greater than the current speed of vehicle C; if so, the display device is switched to display the image data of the third blind spot, a reminder is issued, and the image data is saved and uploaded to the server.
  • if vehicle C turns to drive in lane 1, the image acquisition in the first blind spot can be stopped at this time; if vehicle C turns to drive in lane 3, the image acquisition in the second blind spot can be stopped at this time.
  • in this case, the display device displays only the image data of the remaining two blind spots.
  • the image acquisition device in the third blind area includes three cameras, and the data collected by the three cameras is synthesized and sent to a display device for display.
  • FIG. 10 is a schematic flowchart of the fourth embodiment of the image processing method for the blind spot of a vehicle provided by the present application.
  • the method includes:
  • Step 101 The vehicle-mounted device receives the current image data collected in the blind area of the vehicle and sent by the image processing device.
  • the image processing device responds to the steering data as in any of the above embodiments and, when the speed of the moving object in a blind spot is greater than the current speed of the vehicle, obtains the current image data corresponding to that blind spot and sends it to the vehicle-mounted device.
  • the in-vehicle device and the image processing device are connected through bluetooth or wireless or vehicle can bus.
  • Step 102 If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, display it on the display screen.
  • the vehicle-mounted device displays the image data corresponding to multiple blind spots according to the preset configuration; when it receives the current image data collected in the first blind spot, it switches the display screen, plays the current image data collected in the first blind spot alone, and issues a voice reminder.
  • Step 103 If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot are simultaneously displayed on the display screen.
  • in step 103, it is confirmed that there are no dangerous moving objects in any of the current blind spots.
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, it is displayed on the display screen.
  • the display screen is switched, the current image data collected in the second blind area is played independently, and a reminder is given.
  • the blind spot of the vehicle further includes a third blind spot, and if the current image data collected in the blind spot of the vehicle is the current image data collected in the third blind spot, it is displayed on the display screen.
  • the display screen is switched, the current image data collected in the third blind area is played alone, and a reminder is given.
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, the second blind spot, and the third blind spot, the current image data collected by the three blind spots is displayed simultaneously on the display screen.
  • in one implementation, the in-vehicle device is further connected to the vehicle and can obtain corresponding signals from the CAN bus of the vehicle. If a door-opening signal is obtained, it can be confirmed that a door of the vehicle has been opened. A first instruction is then sent to the image processing device, so that when the image processing device confirms that a fourth moving object exists at the current time in the first blind spot or the second blind spot corresponding to that door, it sends the third prompt sound together with the current image data collected in the first blind spot or the second blind spot to the vehicle-mounted device.
  • for the first blind spot, a first instruction is sent to the image processing device, which determines according to the first instruction whether a moving object exists in the first blind spot at the current moment; if so, it generates a prompt sound and sends the current image to the vehicle-mounted device, so that the vehicle-mounted device switches the display screen, plays the current image data alone, and controls the speaker to play the third prompt sound to warn the occupants of the vehicle.
  • likewise, for the second blind spot, a first instruction is sent to the image processing device, which determines whether a moving object exists in the second blind spot at the current moment; if so, it generates a prompt sound and sends the current image to the vehicle-mounted device, so that the vehicle-mounted device switches the display screen, plays the current image data alone, and controls the speaker to play the third prompt sound to warn the occupants of the vehicle.
  • in this way, the occupants of the vehicle can be reminded, ensuring their personal safety and reducing the occurrence of traffic accidents.
  • the vehicle-mounted device is connected to the mobile terminal, such as a Bluetooth connection or a wireless connection.
  • the in-vehicle device receives instructions sent by the mobile terminal and performs the corresponding configuration according to these instructions, for example setting the reminder sound of the vehicle-mounted device or setting the display mode, such as 2-split, 3-split, or 4-split screen.
  • the 2-split screen is used to display the image data of the two blind spots.
  • the 3-split screen is used to display the image data of the three blind spots.
  • the 4-split screen is used to display four image streams: in addition to the image data of the three blind spots, it also includes the image data collected by the front camera of the vehicle.
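The split-screen configuration described above can be sketched as a small mapping from the selected feeds to the 2-, 3-, or 4-split mode; the function and feed names are illustrative assumptions.

```python
def split_screen_layout(show_front_camera,
                        blind_spots=("first", "second", "third")):
    """Choose the split-screen mode the mobile-terminal command sets:
    2-split for two blind spots, 3-split for three, and 4-split when
    the front-camera feed is added to the three blind spots."""
    feeds = list(blind_spots)
    if show_front_camera:
        feeds.append("front")
    if len(feeds) not in (2, 3, 4):
        raise ValueError("supported modes are 2-, 3- and 4-split")
    return len(feeds), feeds
```

For example, enabling the front camera with all three blind spots selects the 4-split mode.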
  • in one implementation, the vehicle-mounted device receives the current image data collected in the blind spot of the vehicle and the preset parameters sent by the image processing device; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, then according to the preset parameters the current image data collected in the first blind spot is displayed on the display screen, the current image data is recorded, and the recorded current image data is stored to the server.
  • in one implementation, the in-vehicle device includes a plurality of sound pickups for collecting the ambient sounds of the vehicle. The current image data collected in the first blind spot is displayed on the display screen and recorded, the current ambient sound of the vehicle is collected through the multiple pickups, and the recorded current image data and current ambient sound are stored to the server. This realizes all-round sound collection, gives the image data an ambient soundtrack, and restores the scene at the time of recording to the greatest extent when the image data is played back.
  • in summary, in the image processing method for the vehicle blind spot, the on-board device receives the current image data collected in the blind spot of the vehicle sent by the image processing device, wherein the on-board device includes a display screen and the vehicle blind spot includes a first blind spot and a second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen; if it is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed on the display screen simultaneously.
  • compared with existing blind-spot warning schemes, which cannot provide accurate reminders or multi-side coordination, this method realizes the active configuration of the blind-spot image system of each rearview mirror of the vehicle, which can improve vehicle driving safety and the user experience.
  • FIG. 11 is a schematic flowchart of a fifth embodiment of an image processing method for a blind spot of a vehicle provided by the present application.
  • the method includes:
  • Step 111 The mobile terminal receives the current image data collected in the blind area of the vehicle sent by the image processing device.
  • the image processing apparatus responds to the steering data as in any of the above embodiments and, when the speed of the moving object in a blind spot is greater than the current speed of the vehicle, obtains the current image data corresponding to that blind spot and sends it to the mobile terminal.
  • the vehicle further includes an on-board device, which is connected to the image processing device.
  • the image processing device obtains the current image data collected in the blind spot of the vehicle and sends it to both the on-board device and the mobile terminal, so that the display screen of the on-board device and the display screen of the mobile terminal display it simultaneously in real time.
  • the mobile terminal and the image processing apparatus are connected through Bluetooth or wireless.
  • Step 112 If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, display it on the display screen of the mobile terminal.
  • the mobile terminal displays the image data corresponding to multiple blind spots according to the preset configuration; when it receives the current image data collected in the first blind spot, it switches the display screen, plays the current image data collected in the first blind spot alone, and issues a reminder.
  • Step 113 If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot are simultaneously displayed on the display screen of the mobile terminal.
  • in step 113, it is confirmed that there are no dangerous moving objects in any of the current blind spots.
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, it is displayed on the display screen of the mobile terminal.
  • the display screen is switched, the current image data collected in the second blind area is played independently, and a reminder is given.
  • the blind spot of the vehicle further includes a third blind spot, and if the current image data collected in the blind spot of the vehicle is the current image data collected in the third blind spot, it is displayed on the display screen of the mobile terminal.
  • the display screen is switched, the current image data collected in the third blind area is played alone, and a reminder is given.
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, the second blind spot, and the third blind spot, the current image data collected in the three blind spots is displayed simultaneously on the display screen of the mobile terminal.
  • in one implementation, the mobile terminal receives the current image data collected in the blind spot of the vehicle and the preset parameters sent by the image processing device; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, then according to the preset parameters the current image data collected in the first blind spot is displayed on the display screen of the mobile terminal, the current image data is recorded, and the recorded current image data is stored to the server.
  • the in-vehicle device can be controlled to pick up the ambient sound and upload it to the server synchronously.
  • In response to the first touch command, the first setting parameter is sent to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device perform settings based on the first setting parameter.
  • the mobile terminal sends the first setting parameters to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device perform settings based on the first setting parameters.
  • the mobile terminal and the vehicle-mounted device can be connected through a data line.
  • the user sets parameters on the mobile terminal, and the in-vehicle device can respond to the parameters synchronously, thereby completing the corresponding settings.
  • In response to the second touch instruction, the historical image data is acquired from local storage or the server, and the historical image data is played. See Figures 13 and 14.
  • Figure 13 shows the image data recorded in different states, divided into local video and cloud video. Clicking on the local video brings up multiple video files as shown in Figure 14. Users can delete these video files, move them, or delete the source files after moving them.
  • historical image data can be played back and organized in this way, which can provide materials for subsequent system upgrades. The scene at the time of recording can be reproduced to the greatest extent when the image data is played back.
  • The mobile terminal receives the fourth prompt tone sent by the image processing apparatus and the current image data collected in the first blind spot or the second blind spot, plays the current image data collected in the first blind spot or the second blind spot on the display screen of the mobile terminal, and controls the speaker to play the fourth prompt tone to warn the personnel in the vehicle. The fourth prompt tone is generated when the vehicle-mounted device detects that a door of the vehicle is opened and the image processing device confirms that there is currently a fifth moving object in the first blind spot or the second blind spot. The current image data is also recorded and uploaded to the server.
  • In the image processing method for the blind spot of the vehicle, the mobile terminal receives the current image data collected in the blind spot of the vehicle and sent by the image processing device, where the blind spot of the vehicle includes the first blind spot and the second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen of the mobile terminal; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen of the mobile terminal.
  • This solves the problem that existing blind spot warning schemes cannot provide accurate reminders or multi-side coordination, and realizes active configuration of the blind spot image system for each rearview mirror of the vehicle, which can improve vehicle driving safety and the user experience.
  • FIG. 15 is a schematic structural diagram of an embodiment of an image processing apparatus provided by the present application.
  • the image processing apparatus 150 includes a processor 151 and a memory 152 connected to the processor 151; wherein, the memory 152 is used for storing program data, and the processor 151 is used for executing the program data, so as to realize the following methods:
  • The image processing device acquires the first steering data of the vehicle and acquires the blind spot of the vehicle corresponding to the first steering data, where the blind spot of the vehicle includes a first blind spot and a second blind spot respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle. If the blind spot of the vehicle corresponds to the first blind spot, first image data collected in the first blind spot is acquired, and the first speed of a first moving object in the first image data is obtained. If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device; if it is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
  • The processor 151 is used to execute the program data, and is also used to implement the method executed by the image processing apparatus in any of the foregoing embodiments.
  • FIG. 16 is a schematic structural diagram of an embodiment of the vehicle-mounted device provided by the present application.
  • the in-vehicle device 160 includes a processor 161 and a memory 162 connected to the processor 161; wherein, the memory 162 is used to store program data, and the processor 161 is used to execute the program data to implement the following methods:
  • The in-vehicle device receives the current image data collected in the blind spot of the vehicle and sent by the image processing device, where the in-vehicle device includes a display screen and the blind spot of the vehicle includes a first blind spot and a second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen.
  • The processor 161 is used to execute the program data, and is also used to implement the method executed by the vehicle-mounted device in any of the foregoing embodiments.
  • FIG. 17 is a schematic structural diagram of an embodiment of a mobile terminal provided by the present application.
  • the mobile terminal 170 includes a processor 171 and a memory 172 connected to the processor 171; wherein, the memory 172 is used to store program data, and the processor 171 is used to execute the program data to implement the following methods:
  • The mobile terminal receives the current image data collected in the blind spot of the vehicle and sent by the image processing device, where the blind spot of the vehicle includes a first blind spot and a second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen of the mobile terminal; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen of the mobile terminal.
  • The processor 171 is configured to execute the program data, and is also configured to implement the method executed by the mobile terminal in any of the foregoing embodiments.
  • FIG. 18 is a schematic structural diagram of an embodiment of a readable storage medium provided by the present application.
  • the readable storage medium 180 is used to store program data 181, and when executed by the processor, the program data 181 is used to implement the following methods:
  • The image processing device acquires the first steering data of the vehicle and acquires the blind spot of the vehicle corresponding to the first steering data, where the blind spot of the vehicle includes a first blind spot and a second blind spot respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle. If the blind spot of the vehicle corresponds to the first blind spot, first image data collected in the first blind spot is acquired, and the first speed of a first moving object in the first image data is obtained. If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to the display device; if it is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device; or,
  • the in-vehicle device receives the current image data collected in the blind spot of the vehicle and sent by the image processing device, where the in-vehicle device includes a display screen and the blind spot of the vehicle includes a first blind spot and a second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen; or,
  • the mobile terminal receives the current image data collected in the blind spot of the vehicle and sent by the image processing device, where the blind spot of the vehicle includes a first blind spot and a second blind spot. If the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, it is displayed on the display screen of the mobile terminal; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen of the mobile terminal.
  • When the program data 181 is executed by the processor, it is also used to implement the method of any of the above embodiments.
  • FIG. 19 is a schematic structural diagram of an embodiment of an image processing system for a blind spot of a vehicle provided by the present application.
  • the image processing system 190 includes an image processing device 191, a vehicle-mounted device 192 and a mobile terminal 193;
  • the image processing device 191 is the image processing device in any of the above embodiments
  • the vehicle-mounted device 192 is the vehicle-mounted device in any of the above embodiments
  • the mobile terminal 193 is the mobile terminal in any of the above embodiments.
  • The image processing device 191, the vehicle-mounted device 192 and the mobile terminal 193 can be used to implement the method corresponding to any of the foregoing embodiments.
  • the disclosed method and device may be implemented in other manners.
  • the device implementations described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other divisions.
  • Multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this implementation manner.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated units in the other embodiments described above are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The technical solutions of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
  • An image processing method for a blind spot of a vehicle comprising:
  • the image processing device acquires first steering data of the vehicle, and acquires a blind spot of the vehicle corresponding to the first steering data; wherein the blind spot of the vehicle includes a first blind spot and a second blind spot, and the first blind spot and the second blind spot are respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle;
  • the blind spot of the vehicle corresponds to the first blind spot, acquiring first image data collected in the first blind spot;
  • the method according to A1, the method further comprises:
  • the blind spot of the vehicle corresponds to the second blind spot, acquiring second image data collected in the second blind spot;
  • the acquiring the first speed of the first moving object in the first image data includes:
  • a first speed of the first moving object is calculated according to the first distance and the second distance.
  • sending the current image data collected in the first blind spot to the display device includes:
  • if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle and the second distance is smaller than a first preset distance, the current image data collected in the first blind spot is sent to the display device.
  • the acquiring the second speed of the second moving object in the second image data includes:
  • a second speed of the second moving object is calculated according to the third distance and the fourth distance.
  • sending the current image data collected in the second blind spot to the display device includes:
  • if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle and the fourth distance is smaller than a second preset distance, the current image data collected in the second blind spot is sent to the display device.
  • the vehicle blind spot further comprises a third blind spot, and the third blind spot is arranged on the rear side of the vehicle;
  • the method also includes:
  • the current image data collected in the third blind spot is sent to the display device.
  • sending the current image data collected in the third blind spot to the display device including:
  • the current image data collected in the third blind spot is sent to a display device.
  • the method further comprises:
  • the current image data collected in the first blind spot or the second blind spot sent to the display device.
  • the acquiring the first steering data of the vehicle, and acquiring the blind spot of the vehicle corresponding to the first steering data includes:
  • if the first angle is greater than a first preset angle, confirming that the first steering data corresponds to the first blind spot;
  • if the first angle is smaller than a second preset angle, confirming that the first steering data corresponds to the second blind spot.
  • the acquiring the first angle between the lane line and the vehicle in the fourth image data includes:
  • the angle of the included angle formed between the first lane line and the second lane line is calculated, and the angle of the included angle is used as the first angle.
  • A12 The method according to A1, further comprising a first preset parameter, a second preset parameter and a third preset parameter;
  • sending the current image data collected in the first blind spot to the display device including:
  • the continuous duration of the first steering data is acquired, and if it is greater than a preset time, the current image data collected in the first blind spot and the first preset parameter are sent to the display device, so that the display device is configured according to the first preset parameter; or,
  • if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the continuous duration of the first steering data is acquired, and if it is not greater than the preset time, a first difference between the first speed and the current speed is calculated. If the first difference is greater than a preset difference, the current image data collected in the first blind spot and the second preset parameter are sent to the display device, so that the display device is configured according to the second preset parameter; if the first difference is not greater than the preset difference, the current image data collected in the first blind spot and the third preset parameter are sent to the display device, so that the display device is configured according to the third preset parameter.
  • A13 The method according to any one of A1-A12, further comprising:
  • An image processing method for a blind spot of a vehicle comprising:
  • the in-vehicle device receives the current image data collected in the blind spot of the vehicle sent by the image processing device, wherein the in-vehicle device includes a display screen, and the blind spot of the vehicle includes a first blind spot and a second blind spot;
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, display it on the display screen;
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed on the display screen simultaneously.
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, it is displayed on the display screen.
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the third blind spot, display it on the display screen;
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, the second blind spot and the third blind spot, the current image data collected in the first blind spot, the second blind spot and the third blind spot is displayed on the display screen simultaneously.
  • the vehicle-mounted device receives the current image data collected by the blind spot of the vehicle sent by the image processing device, and further includes:
  • the vehicle-mounted device receives the current image data and preset parameters collected in the blind spot of the vehicle and sent by the image processing device;
  • displaying on the display screen includes:
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, the current image data collected in the first blind spot is displayed on the display screen based on the preset parameters, the current image data is recorded, and the recorded current image data is stored in the server.
  • An image processing method for a blind spot of a vehicle comprising:
  • the mobile terminal receives the current image data collected in the blind spot of the vehicle sent by the image processing device, wherein the blind spot of the vehicle includes a first blind spot and a second blind spot;
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, display it on the display screen of the mobile terminal;
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen of the mobile terminal.
  • the current image data collected in the blind spot of the vehicle is the current image data collected in the second blind spot, it is displayed on the display screen.
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the third blind spot, it is displayed on the display screen; if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, the second blind spot and the third blind spot, the current image data collected in the first blind spot, the second blind spot and the third blind spot is displayed simultaneously on the display screen of the mobile terminal.
  • the mobile terminal receives the current image data collected in the blind spot of the vehicle sent by the image processing device, and further includes:
  • the mobile terminal receives the current image data and preset parameters collected in the blind spot of the vehicle and sent by the image processing device;
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, displaying it on the display screen of the mobile terminal includes:
  • if the current image data collected in the blind spot of the vehicle is the current image data collected in the first blind spot, displaying the current image data collected in the first blind spot on the display screen of the mobile terminal based on the preset parameters, recording the current image data, and storing the recorded current image data to the server.
  • in response to a first touch command, a first setting parameter is sent to the vehicle-mounted device and/or the image processing device, so that the vehicle-mounted device and/or the image processing device perform settings based on the first setting parameter.
  • An image processing device comprising a processor and a memory connected to the processor;
  • the memory is used for storing program data
  • the processor is used for executing the program data, so as to implement the method according to any one of A1-A13.
  • An in-vehicle device comprising a processor and a memory connected to the processor;
  • the memory is used for storing program data
  • the processor is used for executing the program data, so as to implement the method according to any one of B14-B17.
  • a mobile terminal comprising a processor and a memory connected to the processor;
  • the memory is used for storing program data
  • the processor is used for executing the program data, so as to implement the method according to any one of C18-C23.
  • A readable storage medium, which is used to store program data which, when executed by a processor, is used to implement the method according to any one of A1-A13, B14-B17 or C18-C23.
  • An image processing system for a blind spot of a vehicle comprising an image processing device, a vehicle-mounted device and a mobile terminal;
  • the image processing device is the image processing device described in D24
  • the vehicle-mounted device is the vehicle-mounted device described in E25
  • the mobile terminal is the mobile terminal described in F26.


Abstract

An image processing method, system and related apparatus for vehicle blind spots. The image processing method includes: an image processing device acquires first steering data of a vehicle and acquires the vehicle blind spot corresponding to the first steering data, where the vehicle blind spot includes a first blind spot and a second blind spot respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle; if the vehicle blind spot corresponds to the first blind spot, first image data collected in the first blind spot is acquired; a first speed of a first moving object in the first image data is acquired; if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to a display device; if the first speed of the first moving object in the first image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device. In this way, vehicle driving safety can be improved.

Description

Image Processing Method, System and Related Apparatus for Vehicle Blind Spots

Technical Field
This application relates to the technical field of vehicle applications, and in particular to an image processing method, system and related apparatus for vehicle blind spots.
Background
To keep a vehicle driving safely on the road, apart from the driver's own operating experience, another major hazard is the blind spots of the vehicle's rearview mirrors and the inability to judge the distance to vehicles behind accurately through the mirrors. To make driving safer, modern cars are fitted with three rearview mirrors: an interior mirror, a left mirror and a right mirror. For the driver, however, blind spots always exist due to the human field of view, the mirror angles, and changes in the vehicle's speed and direction while driving, posing a safety hazard to the driver and to other vehicles on the road.
Some vehicles already carry a rearview-mirror blind spot warning system that uses radar or microwaves to detect whether a vehicle is approaching from the rear side and alerts the driver with sound or light. This approach only gives a vague reminder: the driver cannot actively make adaptive adjustments and has to shift his or her gaze toward one side mirror to notice the alert, so the road conditions on the other side cannot be attended to, which in turn creates a new safety hazard.
Summary
The main technical problem solved by this application is to provide an image processing method, system and related apparatus for vehicle blind spots that can improve vehicle driving safety.
One technical solution adopted by this application is to provide an image processing method for vehicle blind spots. The method includes: an image processing device acquires first steering data of a vehicle and acquires the vehicle blind spot corresponding to the first steering data, where the vehicle blind spot includes a first blind spot and a second blind spot respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle; if the vehicle blind spot corresponds to the first blind spot, first image data collected in the first blind spot is acquired; a first speed of a first moving object in the first image data is acquired; if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to a display device; if the first speed of the first moving object in the first image data is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device.
The method further includes: if the vehicle blind spot corresponds to the second blind spot, acquiring second image data collected in the second blind spot; acquiring a second speed of a second moving object in the second image data; if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind spot to the display device; and, if the second speed of the second moving object in the second image data is not greater than the current speed of the vehicle, sending the current image data collected in the first blind spot and the second blind spot to the display device.
Acquiring the first speed of the first moving object in the first image data includes: acquiring a first distance between the first moving object in the first image data and the vehicle at the previous moment, and acquiring a second distance between the first moving object and the vehicle at the current moment; and calculating the first speed of the first moving object from the first distance and the second distance.
If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, sending the current image data collected in the first blind spot to the display device includes: if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle and the second distance is smaller than a first preset distance, sending the current image data collected in the first blind spot to the display device.
Acquiring the second speed of the second moving object in the second image data includes: acquiring a third distance between the second moving object in the second image data and the vehicle at the previous moment, and acquiring a fourth distance between the second moving object and the vehicle at the current moment; and calculating the second speed of the second moving object from the third distance and the fourth distance.
If the second speed of the second moving object in the second image data is greater than the current speed of the vehicle, sending the current image data collected in the second blind spot to the display device includes: if the second speed of the second moving object in the second image data is greater than the current speed of the vehicle and the fourth distance is smaller than a second preset distance, sending the current image data collected in the second blind spot to the display device.
The vehicle blind spot further includes a third blind spot arranged at the rear side of the vehicle. The method further includes: acquiring third image data collected in the third blind spot; acquiring a third speed of a third moving object in the third image data; and, if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind spot to the display device.
If the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, sending the current image data collected in the third blind spot to the display device includes: if the third speed of the third moving object in the third image data is greater than the current speed of the vehicle, acquiring second steering data of the vehicle; and, if no second steering data of the vehicle is acquired, sending the current image data collected in the third blind spot to the display device.
The method further includes: if the third moving object in the third blind spot moves into the first blind spot or the second blind spot, acquiring a fourth speed of the third moving object in the first blind spot or the second blind spot; and, if the fourth speed of the third moving object in the first blind spot or the second blind spot is greater than the current speed of the vehicle, sending the current image data collected in the first blind spot or the second blind spot to the display device.
Acquiring the first steering data of the vehicle and acquiring the vehicle blind spot corresponding to the first steering data includes: acquiring fourth image data collected by the vehicle's front camera; acquiring a first angle between a lane line in the fourth image data and the vehicle; if the first angle is greater than a first preset angle, confirming that the first steering data corresponds to the first blind spot; and, if the first angle is smaller than a second preset angle, confirming that the first steering data corresponds to the second blind spot.
Acquiring the first angle between the lane line and the vehicle in the fourth image data includes: identifying the first lane line at the current moment and the second lane line at the previous moment in the fourth image data; calculating the angle of the included angle formed between the first lane line and the second lane line, and taking that angle as the first angle.
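A minimal sketch of this included-angle computation, assuming each lane line has already been reduced to a 2-D direction vector by some upstream detector (that representation, and the function name, are assumptions for illustration; the application does not specify one):

```python
import math

def lane_angle(line_prev, line_curr):
    """Angle in degrees between the lane line detected in the previous
    frame and the one detected in the current frame, each given as a
    2-D direction vector (dx, dy). This included angle serves as the
    'first angle' compared against the preset angles."""
    (x1, y1), (x2, y2) = line_prev, line_curr
    a1 = math.atan2(y1, x1)   # heading of the previous lane line
    a2 = math.atan2(y2, x2)   # heading of the current lane line
    return abs(math.degrees(a2 - a1))
```

A larger included angle would then be interpreted as steering toward the first blind spot, a smaller one toward the second.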
The method further includes a first preset parameter, a second preset parameter and a third preset parameter. If the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, sending the current image data collected in the first blind spot to the display device includes: if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, acquiring the continuous duration of the first steering data; if the continuous duration of the first steering data is greater than a preset time, sending the current image data collected in the first blind spot and the first preset parameter to the display device, so that the display device is configured according to the first preset parameter; or, if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, acquiring the continuous duration of the first steering data; if the continuous duration of the first steering data is not greater than the preset time, calculating a first difference between the first speed and the current speed; if the first difference is greater than a preset difference, sending the current image data collected in the first blind spot and the second preset parameter to the display device, so that the display device is configured according to the second preset parameter; and, if the first difference is not greater than the preset difference, sending the current image data collected in the first blind spot and the third preset parameter to the display device, so that the display device is configured according to the third preset parameter.
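The preset-parameter selection just described is a small decision tree. A sketch, with illustrative preset names standing in for the actual parameter sets (which the application does not enumerate):

```python
def choose_preset(first_speed, current_speed, steer_duration,
                  preset_time, preset_diff):
    """Pick which preset parameter accompanies the first-blind-spot
    image data. Returns None when the moving object is not faster than
    the vehicle, in which case the split view of both blind spots is
    sent instead of a single-zone alert."""
    if first_speed <= current_speed:
        return None
    if steer_duration > preset_time:
        return "first_preset"          # sustained steering intent
    # Short steering burst: decide by how much faster the object is.
    if first_speed - current_speed > preset_diff:
        return "second_preset"
    return "third_preset"
```

The display device would then configure itself according to whichever preset it receives alongside the image data.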
The method further includes: when the vehicle's current gear is a preset gear, giving an early warning if the vehicle's door is detected to be open.
Another technical solution adopted by this application is to provide an image processing method for vehicle blind spots. The method includes: an in-vehicle device receives the current image data collected in the vehicle blind spot and sent by the image processing device, where the in-vehicle device includes a display screen and the vehicle blind spot includes a first blind spot and a second blind spot; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, it is displayed on the display screen; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen.
The method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the second blind spot, displaying it on the display screen.
The vehicle blind spot further includes a third blind spot. The method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the third blind spot, displaying it on the display screen; and, if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, the second blind spot and the third blind spot, displaying the current image data collected in the first blind spot, the second blind spot and the third blind spot simultaneously on the display screen.
Receiving, by the in-vehicle device, the current image data collected in the vehicle blind spot and sent by the image processing device further includes: the in-vehicle device receives the current image data collected in the vehicle blind spot and the preset parameters sent by the image processing device. If the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying it on the display screen includes: if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying the current image data collected in the first blind spot on the display screen based on the preset parameters, recording the current image data, and storing the recorded current image data to the server.
Another technical solution adopted by this application is to provide an image processing method for vehicle blind spots. The method includes: a mobile terminal receives the current image data collected in the vehicle blind spot and sent by the image processing device, where the vehicle blind spot includes a first blind spot and a second blind spot; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, it is displayed on the display screen of the mobile terminal; if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot and the second blind spot, the current image data collected in the first blind spot and the second blind spot is displayed simultaneously on the display screen of the mobile terminal.
The method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the second blind spot, displaying it on the display screen.
The vehicle blind spot further includes a third blind spot. The method further includes: if the current image data collected in the vehicle blind spot is the current image data collected in the third blind spot, displaying it on the display screen; and, if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, the second blind spot and the third blind spot, displaying the current image data collected in the first blind spot, the second blind spot and the third blind spot simultaneously on the display screen of the mobile terminal.
Receiving, by the mobile terminal, the current image data collected in the vehicle blind spot and sent by the image processing device further includes: the mobile terminal receives the current image data collected in the vehicle blind spot and the preset parameters sent by the image processing device. If the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying it on the display screen of the mobile terminal includes: if the current image data collected in the vehicle blind spot is the current image data collected in the first blind spot, displaying the current image data collected in the first blind spot on the display screen of the mobile terminal based on the preset parameters, recording the current image data, and storing the recorded current image data to the server.
The method further includes: in response to a first touch instruction, sending a first setting parameter to the in-vehicle device and/or the image processing device, so that the in-vehicle device and/or the image processing device perform settings based on the first setting parameter.
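A hedged sketch of how such a setting parameter might be serialized on the mobile terminal and applied on the receiving device; the JSON message shape and field names are assumptions for illustration only, not part of the application:

```python
import json

def build_settings_message(params):
    """Serialize the first setting parameter chosen on the mobile
    terminal into a message for the in-vehicle device and/or the
    image processing device."""
    return json.dumps({"type": "settings", "payload": params})

def apply_settings_message(message, device_state):
    """On the receiving side, merge the setting parameter into the
    device's configuration so both ends stay in sync."""
    msg = json.loads(message)
    if msg.get("type") == "settings":
        device_state.update(msg["payload"])
    return device_state
```

In practice the transport could be the data-line, wireless or Bluetooth link mentioned elsewhere in the description.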
The method further includes: in response to a second touch instruction, acquiring historical image data from local storage or the server, and playing the historical image data.
Another technical solution adopted by this application is to provide an image processing device. The image processing device includes a processor and a memory connected to the processor; the memory is used to store program data, and the processor is used to execute the program data to implement the method provided by the above technical solution.
Another technical solution adopted by this application is to provide an in-vehicle device. The in-vehicle device includes a processor and a memory connected to the processor; the memory is used to store program data, and the processor is used to execute the program data to implement the method provided by the above technical solution.
Another technical solution adopted by this application is to provide a mobile terminal. The mobile terminal includes a processor and a memory connected to the processor; the memory is used to store program data, and the processor is used to execute the program data to implement the method provided by the above technical solution.
Another technical solution adopted by this application is to provide a readable storage medium used to store program data which, when executed by a processor, is used to implement any of the methods provided by the above technical solutions.
Another technical solution adopted by this application is to provide an image processing system for vehicle blind spots. The image processing system includes an image processing device, an in-vehicle device and a mobile terminal, where the image processing device is the image processing device provided by the above technical solution, the in-vehicle device is the in-vehicle device provided by the above technical solution, and the mobile terminal is the mobile terminal provided by the above technical solution.
The beneficial effects of this application are as follows. Unlike the prior art, the image processing method for vehicle blind spots provided by this application acquires, through an image processing device, first steering data of the vehicle and the vehicle blind spot corresponding to the first steering data, where the vehicle blind spot includes a first blind spot and a second blind spot respectively located on the left or right side of the vehicle; if the vehicle blind spot corresponds to the first blind spot, first image data collected in the first blind spot is acquired; a first speed of a first moving object in the first image data is acquired; if the first speed of the first moving object in the first image data is greater than the current speed of the vehicle, the current image data collected in the first blind spot is sent to a display device; if it is not greater than the current speed of the vehicle, the current image data collected in the first blind spot and the second blind spot is sent to the display device. In this way, on the one hand, the problem that existing blind spot warning schemes cannot provide accurate reminders or multi-side coordination is solved; on the other hand, active configuration of the blind spot image system for each rearview mirror of the vehicle is realized on the basis of human-computer interaction, which can improve vehicle driving safety.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic flowchart of a first embodiment of the image processing method for vehicle blind spots provided by this application;

FIG. 2 is a schematic flowchart of step 13 in FIG. 1 provided by this application;

FIG. 3 is a schematic display diagram of the display device provided by this application;

FIG. 4 is a schematic flowchart of a second embodiment of the image processing method for vehicle blind spots provided by this application;

FIG. 5 is another schematic display diagram of the display device provided by this application;

FIG. 6 is a schematic flowchart of a third embodiment of the image processing method for vehicle blind spots provided by this application;

FIG. 7 is a first comparison diagram of lane lines provided by this application;

FIG. 8 is a second comparison diagram of lane lines provided by this application;

FIG. 9 is a schematic diagram of an application scenario of the image processing method for vehicle blind spots provided by this application;

FIG. 10 is a schematic flowchart of a fourth embodiment of the image processing method for vehicle blind spots provided by this application;

FIG. 11 is a schematic flowchart of a fifth embodiment of the image processing method for vehicle blind spots provided by this application;

FIG. 12 is a schematic diagram of a first display interface of the mobile terminal in the image processing method for vehicle blind spots provided by this application;

FIG. 13 is a schematic diagram of a second display interface of the mobile terminal in the image processing method for vehicle blind spots provided by this application;

FIG. 14 is a schematic diagram of a third display interface of the mobile terminal in the image processing method for vehicle blind spots provided by this application;

FIG. 15 is a schematic structural diagram of an embodiment of the image processing device provided by this application;

FIG. 16 is a schematic structural diagram of an embodiment of the in-vehicle device provided by this application;

FIG. 17 is a schematic structural diagram of an embodiment of the mobile terminal provided by this application;

FIG. 18 is a schematic structural diagram of an embodiment of the readable storage medium provided by this application;

FIG. 19 is a schematic structural diagram of an embodiment of the image processing system for vehicle blind spots provided by this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. It should be understood that the specific embodiments described here are only used to explain this application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to this application rather than the complete structures. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
Reference to an "embodiment" herein means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the image processing method for vehicle blind spots provided by this application. The method includes:
Step 11: an image processing device acquires first steering data of the vehicle and acquires the vehicle blind spot corresponding to the first steering data.
The vehicle blind spot includes a first blind spot and a second blind spot, respectively located from the left side to the left-rear side or from the right side to the right-rear side of the vehicle.
It can be understood that image acquisition devices are correspondingly arranged in the first blind spot and the second blind spot of the vehicle, and these image acquisition devices belong to the image processing device. The image content of the first blind spot and the second blind spot is obtained through the image acquisition devices.
In some embodiments, when the vehicle is started, the image acquisition devices start to collect data of the first blind spot and the second blind spot, and when the image processing device obtains the first steering data, it acquires the vehicle blind spot corresponding to the first steering data.
Optionally, the first steering data may be a turn signal, such as one operated by the driver, or the steering angle of the steering wheel. In one application scenario, the first blind spot corresponds to the left side of the vehicle and the second blind spot to the right side. If the driver operates a left-turn signal, the data of the first blind spot is acquired; if the driver operates a right-turn signal, the data of the second blind spot is acquired.
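As a rough sketch of the turn-signal-to-blind-spot mapping in this application scenario (the function name and string labels are illustrative, not from the application):

```python
def blind_spot_for_signal(turn_signal):
    """Map the driver's turn signal to the blind spot whose data is
    fetched. In the scenario above, the first blind spot is the left
    side and the second blind spot is the right side; other inputs
    (no signal) select neither zone."""
    return {"left": "first", "right": "second"}.get(turn_signal)
```

A steering-wheel-angle variant would map an angle threshold to the same two zones instead of a discrete signal.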
Step 12: if the vehicle blind spot corresponds to the first blind spot, first image data collected in the first blind spot is acquired.
In some embodiments, the first image data is image data of a preset length of time whose end point is the current moment.
Step 13: a first speed of a first moving object in the first image data is acquired.
In some embodiments, referring to FIG. 2, step 13 may specifically include the following steps:
Step 131: a first distance between the first moving object in the first image data and the vehicle at the previous moment is acquired, as well as a second distance between the first moving object and the vehicle at the current moment.
Optionally, the first image data includes multiple image frames. Each frame is checked for the presence of the first moving object; if it is present, the frame is confirmed as a valid frame. Multiple valid frames are thus obtained; the second distance between the first moving object and the vehicle is taken from the valid frame at the current moment, and the first distance from the valid frame immediately preceding it.
步骤132:根据第一距离和第二距离计算出第一运动对象的第一速度。
可以理解,根据第一距离和第二距离,结合车辆的当前速度以及上述两个有效图像帧之间的时间差值,可以计算出第一运动对象的第一速度。如,车辆的当前速度为V1,第一距离为L1,第二距离为L2,两个有效图像帧之间的时间差值为t,则第一速度V=V1+(L1-L2)/t。
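上述由前后两有效图像帧的距离差推算第一运动对象速度的过程,可用如下示意性Python函数表达。函数名与参数名均为本文说明所作的假设,速度与距离的单位需保持一致:

```python
def first_object_speed(v_vehicle, l1, l2, dt):
    """由前后两有效图像帧中的距离估算运动对象的绝对速度(示意)。

    v_vehicle: 本车当前速度 V1
    l1: 前一时刻运动对象与本车的第一距离 L1
    l2: 当前时刻运动对象与本车的第二距离 L2
    dt: 两有效图像帧之间的时间差值 t

    两帧间距离缩短量 (L1 - L2) 表示对象相对本车接近的距离,
    故对象绝对速度 V = V1 + (L1 - L2) / t。
    """
    return v_vehicle + (l1 - l2) / dt
```

例如,本车速度为20 m/s,对象与本车的距离在1秒内由15米缩短为10米,则对象速度估计为25 m/s,大于本车速度,此时应触发提醒。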
步骤14:若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备。
显示设备可以是车载装置,也可以是移动终端。
可以理解,若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则第一运动对象存在超越该车辆的情况,则此时进行转向操作,安全性较低。则将在第一盲区采集的当前图像数据发送给显示设备。
进一步,在一些实施例中,该显示设备属于车载装置,与图像处理装置通过车辆内置无线或者蓝牙或者该车辆的CAN总线连接。该显示设备还包括语音提醒功能,可在接收到第一盲区采集的当前图像数据时,进行语音提醒。如,提醒内容为“请注意,第一盲区内有危险的运动对象”。
进一步,将在第一盲区采集的当前图像数据发送给显示设备后,显示设备对在第一盲区采集的当前图像数据进行显示,并控制扬声器播放提示音,以提醒驾驶人员在第一盲区存在运动对象,同时驾驶人员可观看显示设备显示的图像数据进行相应驾驶调整。在一些实施例中,由于根据上述方式能够求出第一运动对象在当前时刻与车辆的第二距离,步骤14也可以为:若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,且第二距离小于第一预设距离,则将在第一盲区采集的当前图像数据发送给显示设备,显示设备的显示与提醒处理与上述相同。
通过这种方式,能够进一步确认此时该车辆转向的危险系数,进而对驾驶人员进行提醒。
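上述“速度大于本车当前速度、且与本车距离小于第一预设距离”的双重判断,以及据此决定向显示设备推送哪些盲区图像的逻辑,可用如下示意性Python片段表达(函数名与盲区标识均为本文说明所作的假设):

```python
def select_streams(v_obj, v_vehicle, current_distance, preset_distance):
    """决定向显示设备发送哪些盲区的当前图像数据(示意)。

    v_obj: 盲区内运动对象的速度
    v_vehicle: 本车当前速度
    current_distance: 运动对象在当前时刻与本车的距离(第二距离)
    preset_distance: 第一预设距离

    对象速度大于本车速度且距离小于预设距离时,判定转向危险系数较高,
    单独推送该盲区(此处以第一盲区为例)的图像以进行提醒;
    否则同时推送第一盲区和第二盲区的图像,二分割显示。
    """
    if v_obj > v_vehicle and current_distance < preset_distance:
        return ["first"]
    return ["first", "second"]
```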
步骤15:若第一图像数据中的第一运动对象的第一速度不大于车辆的当前速度,则将第一盲区和第二盲区采集的当前图像数据发送给显示设备。
可以理解,若第一图像数据中的第一运动对象的第一速度不大于车辆的当前速度,则不存在安全性问题,则将第一盲区和第二盲区采集的当前图像数据发送给显示设备。显示设备在显示屏上二分割显示在第一盲区和第二盲区采集的当前图像数据,并分别录制二分割显示的在所述第一盲区和所述第二盲区采集的当前图像数据。
此时显示设备接到这些数据后,显示设备根据预设的设置,将第一盲区和第二盲区采集的当前图像数据在显示屏上同时显示。如图3所示,显示设备的显示屏左侧显示第一盲区图像,右侧显示第二盲区图像。
在一些实施例中,车辆在行驶过程中,图像处理装置实时获取第一盲区的图像数据和第二盲区的图像数据,并发送给显示设备,显示设备根据图3所示的方式,对接收到的第一盲区的图像数据和第二盲区的图像数据进行显示。在图像处理装置获取到车辆的第一转向数据时,若第一转向数据对应第一盲区,则获取在第一盲区采集的第一图像数据;获取第一图像数据中的第一运动对象的第一速度;若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备,显示设备在接收到第一盲区的当前图像数据时,将图3所示的显示方式进行切换,只显示此次接收到的第一盲区的当前图像数据,并进行语音提醒。在显示过程中,图像处理装置仍会实时获取第一盲区的当前图像数据中第一运动对象的实时速度数据,在第一运动对象的速度小于车辆的当前速度时,显示设备退出当前显示模式,转换为图3所示的显示模式。
在一些实施例中,若第一转向数据对应第二盲区,则获取在第二盲区采集的第二图像数据;获取第二图像数据中的第二运动对象的第二速度;若第二图像数据中的第二运动对象的第二速度大于车辆的当前速度,则将第二盲区采集的当前图像数据发送给显示设备。显示设备对在第二盲区采集的当前图像数据进行显示,并控制扬声器播放第一提示音,并对在第二盲区采集的当前图像数据进行录制存储。若第二图像数据中的第二运动对象的第二速度不大于车辆的当前速度,则将第一盲区和第二盲区采集的当前图像数据发送给显示设备。显示设备在显示屏上二分割显示在第一盲区和第二盲区采集的当前图像数据,并分别录制二分割显示的在所述第一盲区和所述第二盲区采集的当前图像数据。可以理解,当显示设备在接收到第二盲区的当前图像数据时,将图3的显示方式进行切换,只显示此次接收到的第二盲区的当前图像数据,并进行语音提醒。在显示过程中,图像处理装置仍会实时获取第二盲区的当前图像数据中第二运动对象的实时速度数据,在第二运动对象的速度数据小于车辆的当前速度时,显示设备退出当前显示模式,转换为图3所示的显示模式。
进一步,获取第二图像数据中的第二运动对象的第二速度可以为获取第二图像数据中的第二运动对象在前一时刻与车辆的第三距离,以及获取第二运动对象在当前时刻与车辆的第四距离;根据第三距离和第四距离计算出第二运动对象的第二速度。若第二图像数据中的第二运动对象的第二速度大于车辆的当前速度,且第四距离小于第二预设距离,则将第二盲区采集的当前图像数据发送给显示设备。显示设备对在第二盲区采集的当前图像数据进行显示,并控制扬声器播放第一提示音。
通过这种方式,能够进一步确认此时该车辆转向的危险系数,进而对驾驶人员进行提醒。
在一些实施例中,若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则获取第一转向数据的连续时间,若第一转向数据的连续时间大于预设时间,则将第一盲区采集的当前图像数据和第一预设参数发送给显示设备,以使显示设备按照第一预设参数进行配置。
可以理解,若第一转向数据的连续时间大于预设时间,则可确认该车辆会进行转向驾驶,则将在第一盲区采集的当前图像数据和第一预设参数发送给显示设备,以使显示设备按照第一预设参数进行配置。如第一预设参数包括语音播报、录制当前图像数据、对当前图像数据在显示设备上进行放大播放、上传录制的图像数据。
在一些实施例中,若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则获取第一转向数据的连续时间,若第一转向数据的连续时间不大于预设时间,则计算出第一速度与当前速度的第一差值,若第一差值大于预设差值,则将第一盲区采集的当前图像数据和第二预设参数发送给显示设备,以使显示设备按照第二预设参数进行配置;其中,第一预设参数与第二预设参数相同。若第一差值不大于预设差值,则将第一盲区采集的当前图像数据和第三预设参数发送给显示设备,以使显示设备按照第三预设参数进行配置。如第三预设参数包括录制当前图像数据、对当前图像数据在显示设备上进行放大播放、上传录制的图像数据。以第三预设参数为录制当前图像数据、对当前图像数据在显示设备上进行放大播放、上传录制的图像数据为例进行说明:显示设备在接收到第三预设参数和当前图像数据时,响应第三预设参数,对当前图像数据进行相应比例的放大显示,并对其进行录制,且上传录制的数据至服务器。通过上述方式,显示设备可根据图像处理装置发送的预设参数进行相应设置,实现显示设备自适应配置,无需人为调节。
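上述按转向数据连续时间与速度差值选择预设参数的分支逻辑,可用如下示意性Python函数表达。其中各预设参数的具体内容取自上文举例,函数名、阈值与集合表示均为本文说明所作的假设:

```python
def choose_preset(v_obj, v_vehicle, signal_duration,
                  preset_time, preset_delta):
    """根据第一转向数据的连续时间与速度差值选择预设参数(示意)。

    v_obj: 第一运动对象的第一速度(调用前已确认其大于本车速度)
    v_vehicle: 本车当前速度
    signal_duration: 第一转向数据的连续时间
    preset_time: 预设时间
    preset_delta: 预设差值

    连续时间大于预设时间 -> 第一预设参数(含语音播报);
    否则若第一差值大于预设差值 -> 第二预设参数(与第一预设参数相同);
    否则 -> 第三预设参数(不含语音播报)。
    """
    if signal_duration > preset_time:
        return {"voice", "record", "zoom", "upload"}   # 第一预设参数
    if v_obj - v_vehicle > preset_delta:
        return {"voice", "record", "zoom", "upload"}   # 第二预设参数
    return {"record", "zoom", "upload"}                # 第三预设参数
```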
在一些实施例中,在车辆的当前档位为预设档位时,若检测到车辆的车门打开,则进行预警提醒。如该预设档位为P档,车辆的当前档位也为P档,此时若检测到车辆的车门打开,则对车门对应的盲区进行运动对象检测,若存在运动对象,则进行预警提醒,能够提高车辆安全性。
在一些实施例中,在车辆的当前速度为0时,若检测到车辆的车门打开,则对车门对应的盲区进行运动对象检测,若存在运动对象,则进行预警提醒,能够提高车辆安全性。
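上述两个开门预警实施例(预设档位或车速为0)的判断条件可合并为如下示意性Python函数。函数名、档位表示("P")与布尔参数均为本文说明所作的假设:

```python
def door_open_warning(gear, speed, door_open, zone_has_moving_object):
    """车门打开时的预警判断(示意)。

    gear: 车辆当前档位,预设档位此处假设为 "P"
    speed: 车辆当前速度
    door_open: 是否检测到车门打开
    zone_has_moving_object: 车门对应盲区内是否检测到运动对象

    当前档位为预设档位或车速为0,且车门打开、对应盲区内存在运动对象时,
    返回 True,触发预警提醒。
    """
    parked = gear == "P" or speed == 0
    return parked and door_open and zone_has_moving_object
```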
区别于现有技术的情况,本实施例中提供的车辆盲区的图像处理方法通过图像处理装置获取车辆的第一转向数据,并获取第一转向数据对应的车辆盲区;其中,车辆盲区包括第一盲区和第二盲区,第一盲区和第二盲区分别位于车辆左侧至左后侧或右侧至右后侧;若车辆盲区对应第一盲区,则获取在第一盲区采集的第一图像数据;获取第一图像数据中的第一运动对象的第一速度;若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备;若第一图像数据中的第一运动对象的第一速度不大于车辆的当前速度,则将在第一盲区和第二盲区采集的当前图像数据发送给显示设备。通过上述方式,一方面解决了现有盲区预警方案中无法进行精准提醒、多侧协同的问题,另一方面在人机交互的基础上实现了对车辆各后视镜盲区图像系统的主动配置,能够提高车辆驾驶安全性。
参阅图4,图4是本申请提供的车辆盲区的图像处理方法第二实施例的流程示意图。该方法包括:
步骤41:获取在第三盲区采集的第三图像数据。
在本实施例中,车辆盲区还包括第三盲区,第三盲区设置于该车辆的后侧。
步骤42:获取第三图像数据中的第三运动对象的第三速度。
步骤43:若第三图像数据中的第三运动对象的第三速度大于车辆的当前速度,则将在第三盲区采集的当前图像数据发送给显示设备。
在一些实施例中,若第三图像数据中的第三运动对象的第三速度大于车辆的当前速度,则获取车辆的第二转向数据;若未获取到车辆的第二转向数据,则将在第三盲区采集的当前图像数据发送给显示设备。可以理解,若未获取到车辆的第二转向数据,则可确认该车辆在直线行驶,则暂无转向可能,则将第三盲区采集的当前图像数据发送给显示设备,以提醒车辆驾驶人员确认是否需要转向对第三运动对象进行避让,以提高驾驶安全性。
在一应用场景中,车辆在行驶过程中,图像处理装置实时获取第一盲区、第二盲区和第三盲区的图像数据,并发送给显示设备,显示设备根据图5所示的方式,将显示设备的显示屏分为三个显示区域,对第一盲区、第二盲区和第三盲区的图像数据进行显示。在图像处理装置获取到车辆的第一转向数据时,若第一转向数据对应第一盲区,则获取在第一盲区采集的第一图像数据;获取第一图像数据中的第一运动对象的第一速度;若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备,显示设备在接收到第一盲区的当前图像数据时,将图5的显示方式进行切换,只显示此次接收到的第一盲区的当前图像数据,并进行语音提醒。在显示过程中,图像处理装置仍会实时获取第一盲区的当前图像数据中第一运动对象的实时速度数据,在该速度数据小于车辆的当前速度或第一运动对象已在第一盲区中消失(如已超过本车辆,或因第一运动对象速度太小而远离了第一盲区的采集范围)时,显示设备退出当前显示模式,转换为图5所示的显示模式。若第一转向数据对应第二盲区,则获取在第二盲区采集的第二图像数据;获取第二图像数据中的第二运动对象的第二速度;若第二图像数据中的第二运动对象的第二速度大于车辆的当前速度,则将在第二盲区采集的当前图像数据发送给显示设备,显示设备在接收到第二盲区的当前图像数据时,将图5的显示方式进行切换,只显示此次接收到的第二盲区的当前图像数据,并进行语音提醒。在显示过程中,图像处理装置仍会实时获取第二盲区的当前图像数据中第二运动对象的实时速度数据,在该速度数据小于车辆的当前速度或第二运动对象已在第二盲区中消失(如已超过本车辆,或因第二运动对象速度太小而远离了第二盲区的采集范围)时,显示设备退出当前显示模式,转换为图5所示的显示模式。若在第三盲区的第三图像数据中检测到第三运动对象,且第三运动对象的第三速度大于车辆的当前速度,则获取车辆的第二转向数据;若未获取到车辆的第二转向数据,则将第三盲区采集的当前图像数据发送给显示设备。显示设备在接收到第三盲区的当前图像数据时,将图5的显示方式进行切换,只显示此次接收到的第三盲区的当前图像数据,并进行提示音或语音提醒,以提醒车内人员后方存在速度较快的对象。在一些实施例中,进行提示音或语音提醒的同时进行显示设备屏幕录制,并将录制的图像数据上传至服务器或发送至移动终端。
在一些实施例中,若第三盲区的第三运动对象移动至第一盲区或第二盲区,则获取第一盲区或第二盲区中第三运动对象的第四速度;若第一盲区或第二盲区中第三运动对象的第四速度大于车辆的当前速度,则将第一盲区或第二盲区采集的当前图像数据发送给显示设备,以使显示设备按照上述方式显示接收到的当前图像,并对该当前图像进行录制。
可以理解,根据第三盲区中第三运动对象的移动,可以判断第三运动对象是否转向,若第三运动对象转向,且其转向后第三盲区内无运动对象,则可以向显示设备发送指令,使显示设备由图5的显示方式切换为图3的显示方式。
参阅图6,图6是本申请提供的车辆盲区的图像处理方法第三实施例的流程示意图。该方法包括:
步骤61:获取车辆的前置摄像头采集的第四图像数据。
可以理解,本实施例适用于车辆转向时图像处理装置无法从CAN总线上获取到转向数据的情况,或者用于在图像处理装置获取到转向数据时,进一步确认该转向数据是否准确。
步骤62:获取第四图像数据中车道线与车辆的第一角度。
在一些实施例中,步骤62可以是识别第四图像中的当前时刻的第一车道线以及前一时刻的第二车道线;计算第一车道线与第二车道线之间所形成的夹角的角度,并将夹角的角度作为第一角度。
如图7所示,当前时刻的第一车道线为B1和B2;前一时刻的第二车道线为A1和A2,B1和A1之间所形成的夹角的角度为α。
如图8所示,当前时刻的第一车道线为B1和B2;前一时刻的第二车道线为A1和A2,B1和A1之间所形成的夹角的角度为β。
步骤63:若第一角度大于第一预设角度,则确认第一转向数据对应第一盲区。
结合图7和图8进行理解,以前一时刻的第二车道线为基准,当前时刻的第一车道线位于第二车道线的右侧,则它们之间的夹角的角度为正;若第一角度大于第一预设角度,则确认第一转向数据对应第一盲区。
如,第一角度为10度,第一预设角度为5度,则第一角度大于第一预设角度,则确认第一转向数据对应第一盲区。
步骤64:若第一角度小于第二预设角度,则确认第一转向数据对应第二盲区。
结合图7和图8进行理解,以前一时刻的第二车道线为基准,当前时刻的第一车道线位于第二车道线的左侧,则它们之间的夹角的角度为负;若第一角度小于第二预设角度,则确认第一转向数据对应第二盲区。
如,第一角度为-10度,第二预设角度为-5度,则第一角度小于第二预设角度,则确认第一转向数据对应第二盲区。
可以理解,第一盲区位于该车辆左侧至左后侧,第二盲区位于该车辆右侧至右后侧。在确认该车辆的转向数据后,根据转向数据对应的车辆盲区,按照上述其他实施例的方法进行工作。
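上述由前后两帧车道线夹角的符号推断转向方向、进而确定对应盲区的判定,可用如下示意性Python函数表达。角度符号约定取自上文(当前车道线偏向基准右侧为正),函数名、盲区标识与默认阈值均为本文说明所作的假设:

```python
def blind_zone_from_lane_angle(angle, pos_threshold=5.0, neg_threshold=-5.0):
    """由车道线夹角(第一角度)推断第一转向数据对应的盲区(示意)。

    angle: 当前时刻第一车道线与前一时刻第二车道线的夹角,
           以前一时刻车道线为基准,偏向右侧为正、偏向左侧为负
    pos_threshold: 第一预设角度(如 5 度)
    neg_threshold: 第二预设角度(如 -5 度)

    夹角大于第一预设角度 -> 判定左转,对应第一盲区;
    夹角小于第二预设角度 -> 判定右转,对应第二盲区;
    介于两者之间 -> 视为直行,返回 None。
    """
    if angle > pos_threshold:
        return "first"
    if angle < neg_threshold:
        return "second"
    return None
```

例如,第一角度为10度时判定对应第一盲区,为-10度时判定对应第二盲区,与上文举例一致。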
参阅图9,对上述几个实施例进行说明:图9中的车辆C使用上述实施例中的方法。其中,车辆C行驶在3个车道的道路上,3个车道分别为车道1、车道2、车道3,此时车辆C位于车道2。车道1上有车辆D,车道2上车辆C后方还有车辆F,车道3上有车辆E。其中,车辆C左侧后视镜的可视区域为δ1区域,右侧后视镜的可视区域为δ2区域,第一盲区的可视区域为γ1区域,第二盲区的可视区域为γ2区域,第三盲区的可视区域为ε1区域。其中,第一盲区和第二盲区可采集车辆对应左右两侧的所有区域,即180度范围,最远距离22米;第三盲区可采集车辆后侧22米范围内的110度广角区域。当车辆C位于车道2时,第一盲区、第二盲区和第三盲区同时采集相应区域的图像数据,并发送至显示设备。若此时在车辆C的第一盲区的可视区域出现了车辆D,则对车辆D进行识别,以判断车辆D的速度是否大于车辆C当前速度,若是则进行提醒。若在此时获取到车辆C转向车道1的转向数据,判断车辆D的速度是否大于车辆C当前速度,若是则在显示设备上切换为单独显示第一盲区的图像数据,并进行提醒和图像数据保存,并上传至服务器。若此时在车辆C的第二盲区的可视区域出现了车辆E,则对车辆E进行识别,以判断车辆E的速度是否大于车辆C当前速度,若是则进行提醒。若在此时获取到车辆C转向车道3的转向数据,判断车辆E的速度是否大于车辆C当前速度,若是则在显示设备上切换为单独显示第二盲区的图像数据,并进行提醒和图像数据保存,并上传至服务器。若此时在车辆C的第三盲区的可视区域出现了车辆F,则对车辆F进行识别,以判断车辆F的速度是否大于车辆C当前速度,若是则在显示设备上切换为单独显示第三盲区的图像数据,并进行提醒和图像数据保存,并上传至服务器。
进一步,若车辆C在车道1行驶,则此时可停止第一盲区的图像采集;若车辆C转向车道3行驶,则此时可停止第二盲区的图像采集。显示设备则可只显示其余两个盲区的图像数据。
进一步,第三盲区的图像采集装置包括三个摄像头,将三个摄像头采集的数据合成后发送至显示设备进行显示。
参阅图10,图10是本申请提供的车辆盲区的图像处理方法第四实施例的流程示意图。该方法包括:
步骤101:车载装置接收图像处理装置发送的在车辆盲区采集的当前图像数据。
在一些实施例中,图像处理装置响应上述任一实施例中的转向数据,并在盲区中的运动对象的速度大于该车辆的当前速度时,获取对应盲区的当前图像数据,发送至车载装置。
其中,车载装置与图像处理装置通过蓝牙、无线或车辆CAN总线连接。
步骤102:若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在显示屏上进行显示。
在步骤102之前,车载装置按照预设配置显示对应多个盲区的图像数据,当接收到在第一盲区采集的当前图像数据时,则切换显示画面,单独播放在第一盲区采集的当前图像数据,并进行语音提醒。
步骤103:若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。
可以理解,若执行步骤103,则确认当前多个盲区中并无危险的运动对象。
在一些实施例中,若在车辆盲区采集的当前图像数据为在第二盲区采集的当前图像数据,则在显示屏上进行显示。当接收到在第二盲区采集的当前图像数据时,则切换显示画面,单独播放在第二盲区采集的当前图像数据,并进行提醒。
在一些实施例中,车辆盲区还包括第三盲区,若在车辆盲区采集的当前图像数据为在第三盲区采集的当前图像数据,则在显示屏上进行显示。当接收到在第三盲区采集的当前图像数据时,则切换显示画面,单独播放在第三盲区采集的当前图像数据,并进行提醒。在一些实施例中,若在车辆盲区采集的当前图像数据为第一盲区、第二盲区和第三盲区采集的当前图像数据,则在显示屏上同时显示在第一盲区、第二盲区和第三盲区采集的当前图像数据。
在一些实施例中,车载装置还与车辆连接,可从车辆的CAN总线上获取相应信号,若获取到车门打开信号,则可确认检测到车辆的车门打开,则向图像处理装置发送第一指令,以使图像处理装置在确认当前时间车门对应的第一盲区或第二盲区内存在第四运动对象时,将第三提示音和在存在第四运动对象的第一盲区或第二盲区采集的当前图像数据发送至车载装置。车载装置接收第三提示音和在存在第四运动对象的第一盲区或第二盲区采集的当前图像数据,在显示屏上播放第一盲区或第二盲区采集的当前图像数据,并控制车载装置的扬声器播放第三提示音,以对车辆内的人员进行预警提示。举例说明:若车辆左侧的车门打开,向图像处理装置发送第一指令,图像处理装置根据第一指令判断当前时刻第一盲区中是否有运动对象,若有,则生成第三提示音并将当前图像数据发送至车载装置,以使车载装置切换显示画面,单独播放当前图像数据并控制扬声器播放第三提示音,以对车辆内的人员进行预警提示。若车辆右侧的车门打开,向图像处理装置发送第一指令,图像处理装置根据第一指令判断当前时刻第二盲区中是否有运动对象,若有,则生成第三提示音并将当前图像数据发送至车载装置,以使车载装置切换显示画面,单独播放当前图像数据并控制扬声器播放第三提示音,以对车辆内的人员进行预警提示。通过上述方式,能够对车辆内的人员进行提醒,以保证车辆内的人员人身安全,且能够减少交通事故的发生。
在一些实施例中,车载装置与移动终端连接,如蓝牙连接、无线连接。车载装置接收移动终端发送的指令,根据这些指令进行相应配置,如为车载装置设置提醒提示音、设置显示模式(如2分屏、3分屏、4分屏)。其中,2分屏用于显示两个盲区的图像数据;3分屏用于显示三个盲区的图像数据;4分屏用于显示4个图像数据,除三个盲区的图像数据外,还包括车辆前置摄像头采集的前方图像数据。
在一些实施例中,车载装置接收图像处理装置发送的在车辆盲区采集的当前图像数据和预设参数;若在车辆盲区采集的当前图像数据为第一盲区采集的当前图像数据,则基于预设参数,在显示屏上显示在第一盲区采集的当前图像数据,并将当前图像数据进行录制,并将录制的当前图像数据存储至服务器。
进一步,车载装置包括多个拾音器,用于收集车辆的环境音。在显示屏上显示在第一盲区采集的当前图像数据,并将当前图像数据进行录制,以及通过多个拾音器收集车辆的当前环境音,并将录制的当前图像数据和当前环境音存储至服务器。实现全方位的声音采集,使图像数据具有环境音,能够在回放图像数据时最大程度的还原录制时的场景。
区别于现有技术的情况,本实施例中提供的车辆盲区的图像处理方法通过车载装置接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车载装置包括显示屏,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。通过上述方式,一方面解决了现有盲区预警方案中无法进行精准提醒、多侧协同的问题,另一方面在人机交互的基础上实现了对车辆各后视镜盲区图像系统的主动配置,能够提高车辆驾驶安全性,并提高用户体验。
参阅图11,图11是本申请提供的车辆盲区的图像处理方法第五实施例的流程示意图。该方法包括:
步骤111:移动终端接收图像处理装置发送的车辆盲区采集的当前图像数据。
在一些实施例中,图像处理装置响应上述任一实施例中的转向数据,并在盲区中的运动对象的速度大于该车辆的当前速度时,获取对应盲区的当前图像数据,发送至移动终端。
在一些实施例中,该车辆还包括一车载装置,与图像处理装置连接,图像处理装置可将在车辆盲区采集的当前图像数据发送给车载装置和移动终端,以使车载装置上的显示屏和移动终端上的显示屏同时实时显示。
其中,移动终端与图像处理装置通过蓝牙或无线连接。
步骤112:若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在移动终端的显示屏上进行显示。
在步骤112之前,移动终端按照预设配置显示对应多个盲区的图像数据,当接收到在第一盲区采集的当前图像数据时,则切换显示画面,单独播放在第一盲区采集的当前图像数据,并进行提醒。
步骤113:若车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在移动终端的显示屏上同时显示第一盲区和第二盲区采集的当前图像数据。
可以理解,若执行步骤113,则确认当前多个盲区中并无危险的运动对象。
在一些实施例中,若在车辆盲区采集的当前图像数据为在第二盲区采集的当前图像数据,则在移动终端的显示屏上进行显示。当接收到在第二盲区采集的当前图像数据时,则切换显示画面,单独播放在第二盲区采集的当前图像数据,并进行提醒。
在一些实施例中,车辆盲区还包括第三盲区,若在车辆盲区采集的当前图像数据为在第三盲区采集的当前图像数据,则在移动终端的显示屏上进行显示。当接收到在第三盲区采集的当前图像数据时,则切换显示画面,单独播放在第三盲区采集的当前图像数据,并进行提醒。
在一些实施例中,若在车辆盲区采集的当前图像数据为在第一盲区、第二盲区和第三盲区采集的当前图像数据,则在移动终端的显示屏上同时显示在第一盲区、第二盲区和第三盲区采集的当前图像数据。
在一些实施例中,移动终端接收图像处理装置发送的在车辆盲区采集的当前图像数据和预设参数;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则基于预设参数,在移动终端的显示屏上显示第一盲区采集的当前图像数据,并将当前图像数据进行录制,并将录制的当前图像数据存储至服务器。进一步,可控制车载装置进行环境音拾取,并同步上传至服务器。
在一些实施例中,响应于第一触控指令,发送第一设置参数至车载装置和/或图像处理装置,以使车载装置和/或图像处理装置基于第一设置参数进行设置。结合图12进行说明:如图12所示,可在移动终端的设置界面进行选择,如录音是否开启、音量的调节、警告声音的选择、警告提示级别、前摄像头是否开启、车道偏移提醒等多种功能的设置。移动终端响应于第一触控指令,发送这些第一设置参数至车载装置和/或图像处理装置,以使车载装置和/或图像处理装置基于第一设置参数进行设置。
其中,移动终端与车载装置可通过数据线连接。用户在移动终端上进行参数设置,车载装置可同步响应该参数,进而完成相应设置。
在一些实施例中,响应于第二触控指令,从本地存储或服务器中获取历史图像数据;播放历史图像数据。结合图13和图14进行说明:图13展示了不同状态下录制的图像数据,并将其分为本地视频和云端视频。点击本地视频,则会出现如图14所示的多个视频文件。用户可对这些视频文件进行删除、移动、移动后删除源文件等操作。通过这种方式,可在移动终端中回放历史图像数据并对其进行整理,为后续的系统升级提供素材,且能够在回放图像数据时最大程度地还原录制时的场景。
在一些实施例中,移动终端接收图像处理装置发送的第四提示音和在第一盲区或第二盲区采集的当前图像数据,并在移动终端的显示屏上播放在第一盲区或第二盲区采集的当前图像数据和控制扬声器播放第四提示音,以对车辆内的人员进行预警提示;其中,第四提示音由车载装置检测车辆的车门打开,且图像处理装置确认当前时间车门对应的第一盲区或第二盲区内存在第五运动对象时产生。并对当前图像数据进行录制上传服务器。这些图像数据可在发生交通事故时作为资料示出,有助于交通事故的责任划分。
区别于现有技术的情况,本实施例中提供的车辆盲区的图像处理方法通过移动终端接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在移动终端的显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在移动终端的显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。通过上述方式,一方面解决了现有盲区预警方案中无法进行精准提醒、多侧协同的问题,另一方面在人机交互的基础上实现了对车辆各后视镜盲区图像系统的主动配置,能够提高车辆驾驶安全性,并提高用户体验。
参阅图15,图15是本申请提供的图像处理装置一实施例的结构示意图。该图像处理装置150包括处理器151以及与处理器151连接的存储器152;其中,存储器152用于存储程序数据,处理器151用于执行程序数据,以实现以下方法:
图像处理装置获取车辆的第一转向数据,并获取第一转向数据对应的车辆盲区;其中,车辆盲区包括第一盲区和第二盲区,第一盲区和第二盲区分别位于车辆左侧至左后侧或右侧至右后侧;若车辆盲区对应第一盲区,则获取在第一盲区采集的第一图像数据;获取第一图像数据中的第一运动对象的第一速度;若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备;若第一图像数据中的第一运动对象的第一速度不大于车辆的当前速度,则将在第一盲区和第二盲区采集的当前图像数据发送给显示设备。
可以理解,处理器151用于执行程序数据,还用于实现上述任一实施例中图像处理装置执行的方法。
参阅图16,图16是本申请提供的车载装置一实施例的结构示意图。该车载装置160包括处理器161以及与处理器161连接的存储器162;其中,存储器162用于存储程序数据,处理器161用于执行程序数据,以实现以下方法:
车载装置接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车载装置包括显示屏,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。
可以理解,处理器161用于执行程序数据,还用于实现上述任一实施例中车载装置执行的方法。
参阅图17,图17是本申请提供的移动终端一实施例的结构示意图。该移动终端170包括处理器171以及与处理器171连接的存储器172;其中,存储器172用于存储程序数据,处理器171用于执行程序数据,以实现以下方法:
移动终端接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在移动终端的显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在移动终端的显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。
可以理解,处理器171用于执行程序数据,还用于实现上述任一实施例中移动终端执行的方法。
参阅图18,图18是本申请提供的可读存储介质一实施例的结构示意图。该可读存储介质180用于存储程序数据181,程序数据181在被处理器执行时,用于实现以下方法:
图像处理装置获取车辆的第一转向数据,并获取第一转向数据对应的车辆盲区;其中,车辆盲区包括第一盲区和第二盲区,第一盲区和第二盲区分别位于车辆左侧至左后侧或右侧至右后侧;若车辆盲区对应第一盲区,则获取在第一盲区采集的第一图像数据;获取第一图像数据中的第一运动对象的第一速度;若第一图像数据中的第一运动对象的第一速度大于车辆的当前速度,则将在第一盲区采集的当前图像数据发送给显示设备;若第一图像数据中的第一运动对象的第一速度不大于车辆的当前速度,则将在第一盲区和第二盲区采集的当前图像数据发送给显示设备;或,
车载装置接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车载装置包括显示屏,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据;或,
移动终端接收图像处理装置发送的在车辆盲区采集的当前图像数据,其中,车辆盲区包括第一盲区和第二盲区;若在车辆盲区采集的当前图像数据为在第一盲区采集的当前图像数据,则在移动终端的显示屏上进行显示;若在车辆盲区采集的当前图像数据为在第一盲区和第二盲区采集的当前图像数据,则在移动终端的显示屏上同时显示在第一盲区和第二盲区采集的当前图像数据。
可以理解,程序数据181在被处理器执行时,还用于实现上述任一实施例方法。
参阅图19,图19是本申请提供的车辆盲区的图像处理系统一实施例的结构示意图。该图像处理系统190包括图像处理装置191、车载装置192和移动终端193;
其中,图像处理装置191如上述任一实施例中的图像处理装置、车载装置192如上述任一实施例中的车载装置和移动终端193如上述任一实施例中的移动终端。
可以理解,图像处理装置191、车载装置192和移动终端193可用于实现上述任一实施例对应的方法。
在本申请所提供的几个实施方式中,应该理解到,所揭露的方法以及设备,可以通过其它的方式实现。例如,以上所描述的设备实施方式仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施方式方案的目的。
另外,在本申请各个实施方式中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述其他实施方式中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施方式所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅为本申请的实施方式,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。
本申请实施例还揭示了:
A1.一种车辆盲区的图像处理方法,所述方法包括:
图像处理装置获取所述车辆的第一转向数据,并获取所述第一转向数据对应的车辆盲区;其中,所述车辆盲区包括第一盲区和第二盲区,所述第一盲区和所述第二盲区分别位于所述车辆左侧至左后侧或右侧至右后侧;
若所述车辆盲区对应所述第一盲区,则获取在所述第一盲区采集的第一图像数据;
获取所述第一图像数据中的第一运动对象的第一速度;
若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则将在所述第一盲区采集的当前图像数据发送给显示设备;
若所述第一图像数据中的第一运动对象的第一速度不大于所述车辆的当前速度,则将在所述第一盲区和所述第二盲区采集的当前图像数据发送给所述显示设备。
A2.根据A1所述的方法,所述方法还包括:
若所述车辆盲区对应所述第二盲区,则获取在所述第二盲区采集的第二图像数据;
获取所述第二图像数据中的第二运动对象的第二速度;
若所述第二图像数据中的第二运动对象的第二速度大于所述车辆的当前速度,则将在所述第二盲区采集的当前图像数据发送给所述显示设备;
若所述第二图像数据中的第二运动对象的第二速度不大于所述车辆的当前速度,则将在所述第一盲区和所述第二盲区采集的当前图像数据发送给所述显示设备。
A3.根据A1所述的方法,
所述获取所述第一图像数据中的第一运动对象的第一速度,包括:
获取所述第一图像数据中的所述第一运动对象在前一时刻与所述车辆的第一距离,以及获取所述第一运动对象在当前时刻与所述车辆的第二距离;
根据所述第一距离和所述第二距离计算出所述第一运动对象的第一速度。
A4.根据A3所述的方法,
所述若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则将在所述第一盲区采集的当前图像数据发送给显示设备,包括:
若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,且所述第二距离小于第一预设距离,则将在所述第一盲区采集的当前图像数据发送给显示设备。
A5.根据A2所述的方法,
所述获取所述第二图像数据中的第二运动对象的第二速度,包括:
获取所述第二图像数据中的所述第二运动对象在前一时刻与所述车辆的第三距离,以及获取所述第二运动对象在当前时刻与所述车辆的第四距离;
根据所述第三距离和所述第四距离计算出所述第二运动对象的第二速度。
A6.根据A5所述的方法,
所述若所述第二图像数据中的第二运动对象的第二速度大于所述车辆的当前速度,则将在所述第二盲区采集的当前图像数据发送给所述显示设备,包括:
若所述第二图像数据中的第二运动对象的第二速度大于所述车辆的当前速度,且所述第四距离小于第二预设距离,则将在所述第二盲区采集的当前图像数据发送给显示设备。
A7.根据A1所述的方法,所述车辆盲区还包括第三盲区,所述第三盲区设置于所述车辆的后侧;
所述方法还包括:
获取在所述第三盲区采集的第三图像数据;
获取所述第三图像数据中的第三运动对象的第三速度;
若所述第三图像数据中的第三运动对象的第三速度大于所述车辆的当前速度,则将在所述第三盲区采集的当前图像数据发送给显示设备。
A8.根据A7所述的方法,
所述若所述第三图像数据中的第三运动对象的第三速度大于所述车辆的当前速度,则将在所述第三盲区采集的当前图像数据发送给显示设备,包括:
若所述第三图像数据中的第三运动对象的第三速度大于所述车辆的当前速度,则获取所述车辆的第二转向数据;
若未获取到所述车辆的所述第二转向数据,则将在所述第三盲区采集的当前图像数据发送给显示设备。
A9.根据A7所述的方法,所述方法还包括:
若所述第三盲区的第三运动对象移动至所述第一盲区或所述第二盲区,则获取所述第一盲区或所述第二盲区中所述第三运动对象的第四速度;
若所述第一盲区或所述第二盲区中所述第三运动对象的第四速度大于所述车辆的当前速度,则将在所述第一盲区或所述第二盲区采集的当前图像数据发送给显示设备。
A10.根据A4所述的方法,
所述获取所述车辆的第一转向数据,并获取所述第一转向数据对应的车辆盲区,包括:
获取所述车辆的前置摄像头采集的第四图像数据;
获取所述第四图像数据中车道线与所述车辆的第一角度;
若所述第一角度大于第一预设角度,则确认所述第一转向数据对应所述第一盲区;
若所述第一角度小于第二预设角度,则确认所述第一转向数据对应所述第二盲区。
A11.根据A10所述的方法,
所述获取所述第四图像数据中车道线与所述车辆的第一角度,包括:
识别所述第四图像中的当前时刻的第一车道线以及前一时刻的第二车道线;
计算所述第一车道线与所述第二车道线之间所形成的夹角的角度,并将所述夹角的角度作为所述第一角度。
A12.根据A1所述的方法,所述方法还包括第一预设参数、第二预设参数和第三预设参数;
所述若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则将在所述第一盲区采集的当前图像数据发送给显示设备,包括:
若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则获取所述第一转向数据的连续时间,若所述第一转向数据的连续时间大于预设时间,则将在所述第一盲区采集的当前图像数据和第一预设参数发送给显示设备,以使所述显示设备按照所述第一预设参数进行配置;或,
若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则获取所述第一转向数据的连续时间,若所述第一转向数据的连续时间不大于预设时间,则计算出所述第一速度与所述当前速度的第一差值,若所述第一差值大于预设差值,则将在所述第一盲区采集的当前图像数据和第二预设参数发送给显示设备,以使所述显示设备按照所述第二预设参数进行配置,若所述第一差值不大于所述预设差值,则将所述第一盲区采集的当前图像数据和第三预设参数发送给显示设备,以使所述显示设备按照所述第三预设参数进行配置。
A13.根据A1-A12任一项所述的方法,所述方法还包括:
在所述车辆的当前档位为预设档位时,若检测到所述车辆的车门打开,则进行预警提醒。
B14.一种车辆盲区的图像处理方法,所述方法包括:
车载装置接收图像处理装置发送的在所述车辆盲区采集的当前图像数据,其中,所述车载装置包括显示屏,所述车辆盲区包括第一盲区和第二盲区;
若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述显示屏上进行显示;
若在所述车辆盲区采集的当前图像数据为在所述第一盲区和所述第二盲区采集的当前图像数据,则在所述显示屏上同时显示在所述第一盲区和所述第二盲区采集的当前图像数据。
B15.根据B14所述的方法,所述方法还包括:
若在所述车辆盲区采集的当前图像数据为在所述第二盲区采集的当前图像数据,则在所述显示屏上进行显示。
B16.根据B14所述的方法,所述车辆盲区还包括第三盲区,所述方法还包括:
若所述车辆盲区采集的当前图像数据为所述第三盲区采集的当前图像数据,则在所述显示屏上进行显示;
若所述车辆盲区采集的当前图像数据为所述第一盲区、所述第二盲区和所述第三盲区采集的当前图像数据,则在所述显示屏上同时显示所述第一盲区、所述第二盲区和所述第三盲区采集的当前图像数据。
B17.根据B14所述的方法,
所述车载装置接收图像处理装置发送的所述车辆盲区采集的当前图像数据,还包括:
车载装置接收图像处理装置发送的在所述车辆盲区采集的当前图像数据和预设参数;
所述若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述显示屏上进行显示,包括:
若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则基于预设参数,在所述显示屏上显示在所述第一盲区采集的当前图像数据,并将所述当前图像数据进行录制,并将录制的当前图像数据存储至服务器。
C18.一种车辆盲区的图像处理方法,所述方法包括:
移动终端接收图像处理装置发送的在所述车辆盲区采集的当前 图像数据,其中,所述车辆盲区包括第一盲区和第二盲区;
若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述移动终端的显示屏上进行显示;
若在所述车辆盲区采集的当前图像数据为在所述第一盲区和所述第二盲区采集的当前图像数据,则在所述移动终端的显示屏上同时显示在所述第一盲区和所述第二盲区采集的当前图像数据。
C19.根据C18所述的方法,所述方法还包括:
若所述车辆盲区采集的当前图像数据为所述第二盲区采集的当前图像数据,则在所述显示屏上进行显示。
C20.根据C18所述的方法,所述车辆盲区还包括第三盲区,所述方法还包括:
若在所述车辆盲区采集的当前图像数据为在所述第三盲区采集的当前图像数据,则在所述显示屏上进行显示;若在所述车辆盲区采集的当前图像数据为在所述第一盲区、所述第二盲区和所述第三盲区采集的当前图像数据,则在所述移动终端的显示屏上同时显示在所述第一盲区、所述第二盲区和所述第三盲区采集的当前图像数据。
C21.根据C18所述的方法,
所述移动终端接收图像处理装置发送的在所述车辆盲区采集的当前图像数据,还包括:
移动终端接收图像处理装置发送的在所述车辆盲区采集的当前图像数据和预设参数;
所述若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述移动终端的显示屏上进行显示,包括:
若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则基于预设参数,在所述移动终端的显示屏上显示在所述第一盲区采集的当前图像数据,并将所述当前图像数据进行录制,并将录制的当前图像数据存储至服务器。
C22.根据C18所述的方法,所述方法还包括:
响应于第一触控指令,发送第一设置参数至车载装置和/或图像处理装置,以使所述车载装置和/或所述图像处理装置基于所述第一设置参数进行设置。
C23.根据C18所述的方法,所述方法还包括:
响应于第二触控指令,从本地存储或服务器中获取历史图像数据;播放所述历史图像数据。
D24.一种图像处理装置,所述图像处理装置包括处理器以及与所述处理器连接的存储器;
其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如A1-A13任一项所述的方法。
E25.一种车载装置,所述车载装置包括处理器以及与所述处理器连接的存储器;
其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如B14-B17任一项所述的方法。
F26.一种移动终端,所述移动终端包括处理器以及与所述处理器连接的存储器;
其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如C18-C23任一项所述的方法。
G27.一种可读存储介质,所述可读存储介质用于存储程序数据,所述程序数据在被处理器执行时,用于实现如A1-A13、或B14-B17或C18-C23任一项所述的方法。
H28.一种车辆盲区的图像处理系统,所述图像处理系统包括图像处理装置、车载装置和移动终端;
其中,所述图像处理装置如D24所述的图像处理装置、所述车载装置如E25所述的车载装置和所述移动终端如F26所述的移动终端。

Claims (10)

  1. 一种车辆盲区的图像处理方法,其特征在于,所述方法包括:
    图像处理装置获取所述车辆的第一转向数据,并获取所述第一转向数据对应的车辆盲区;其中,所述车辆盲区包括第一盲区和第二盲区,所述第一盲区和所述第二盲区分别位于所述车辆左侧至左后侧或右侧至右后侧;
    若所述车辆盲区对应所述第一盲区,则获取在所述第一盲区采集的第一图像数据;
    获取所述第一图像数据中的第一运动对象的第一速度;
    若所述第一图像数据中的第一运动对象的第一速度大于所述车辆的当前速度,则将在所述第一盲区采集的当前图像数据发送给显示设备;
    若所述第一图像数据中的第一运动对象的第一速度不大于所述车辆的当前速度,则将在所述第一盲区和所述第二盲区采集的当前图像数据发送给所述显示设备。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述车辆盲区对应所述第二盲区,则获取在所述第二盲区采集的第二图像数据;
    获取所述第二图像数据中的第二运动对象的第二速度;
    若所述第二图像数据中的第二运动对象的第二速度大于所述车辆的当前速度,则将在所述第二盲区采集的当前图像数据发送给所述显示设备;
    若所述第二图像数据中的第二运动对象的第二速度不大于所述车辆的当前速度,则将在所述第一盲区和所述第二盲区采集的当前图像数据发送给所述显示设备。
  3. 根据权利要求1所述的方法,其特征在于,
    所述获取所述第一图像数据中的第一运动对象的第一速度,包括:
    获取所述第一图像数据中的所述第一运动对象在前一时刻与所述车辆的第一距离,以及获取所述第一运动对象在当前时刻与所述车辆的第二距离;
    根据所述第一距离和所述第二距离计算出所述第一运动对象的第一速度。
  4. 一种车辆盲区的图像处理方法,其特征在于,所述方法包括:
    车载装置接收图像处理装置发送的在所述车辆盲区采集的当前图像数据,其中,所述车载装置包括显示屏,所述车辆盲区包括第一盲区和第二盲区;
    若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述显示屏上进行显示;
    若在所述车辆盲区采集的当前图像数据为在所述第一盲区和所述第二盲区采集的当前图像数据,则在所述显示屏上同时显示在所述第一盲区和所述第二盲区采集的当前图像数据。
  5. 一种车辆盲区的图像处理方法,其特征在于,所述方法包括:
    移动终端接收图像处理装置发送的在所述车辆盲区采集的当前图像数据,其中,所述车辆盲区包括第一盲区和第二盲区;
    若在所述车辆盲区采集的当前图像数据为在所述第一盲区采集的当前图像数据,则在所述移动终端的显示屏上进行显示;
    若在所述车辆盲区采集的当前图像数据为在所述第一盲区和所述第二盲区采集的当前图像数据,则在所述移动终端的显示屏上同时显示在所述第一盲区和所述第二盲区采集的当前图像数据。
  6. 一种图像处理装置,其特征在于,所述图像处理装置包括处理器以及与所述处理器连接的存储器;
    其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如权利要求1-3任一项所述的方法。
  7. 一种车载装置,其特征在于,所述车载装置包括处理器以及与所述处理器连接的存储器;
    其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如权利要求4所述的方法。
  8. 一种移动终端,其特征在于,所述移动终端包括处理器以及与所述处理器连接的存储器;
    其中,所述存储器用于存储程序数据,所述处理器用于执行所述程序数据,以实现如权利要求5所述的方法。
  9. 一种可读存储介质,其特征在于,所述可读存储介质用于存储程序数据,所述程序数据在被处理器执行时,用于实现如权利要求1-3、或权利要求4或权利要求5任一项所述的方法。
  10. 一种车辆盲区的图像处理系统,其特征在于,所述图像处理系统包括图像处理装置、车载装置和移动终端;
    其中,所述图像处理装置如权利要求6所述的图像处理装置、所述车载装置如权利要求7所述的车载装置和所述移动终端如权利要求8所述的移动终端。
PCT/CN2020/125090 2020-07-23 2020-10-30 车辆盲区的图像处理方法、系统及相关装置 WO2022016731A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010716076.X 2020-07-23
CN202010716076.XA CN111845783B (zh) 2020-07-23 2020-07-23 车辆盲区的图像处理方法、系统及相关装置

Publications (1)

Publication Number Publication Date
WO2022016731A1 true WO2022016731A1 (zh) 2022-01-27

Family

ID=72949407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125090 WO2022016731A1 (zh) 2020-07-23 2020-10-30 车辆盲区的图像处理方法、系统及相关装置

Country Status (2)

Country Link
CN (1) CN111845783B (zh)
WO (1) WO2022016731A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160196748A1 (en) * 2015-01-02 2016-07-07 Atieva, Inc. Automatically Activated Blind Spot Camera System
CN106740470A (zh) * 2016-11-21 2017-05-31 奇瑞汽车股份有限公司 一种基于全景影像系统的盲区监测方法及系统
CN107161081A (zh) * 2017-05-11 2017-09-15 重庆长安汽车股份有限公司 一种右侧盲区图像自动打开系统及方法
CN107776489A (zh) * 2016-08-26 2018-03-09 比亚迪股份有限公司 车辆及其全景影像的显示方法和显示系统
CN109204141A (zh) * 2018-09-19 2019-01-15 深圳市众鸿科技股份有限公司 车辆行驶过程中的预警方法与装置
CN109591698A (zh) * 2017-09-30 2019-04-09 上海欧菲智能车联科技有限公司 盲区检测系统、盲区检测方法和车辆
CN109952231A (zh) * 2016-12-30 2019-06-28 金泰克斯公司 具有按需侦察视图的全显示镜
CN111845557A (zh) * 2020-07-23 2020-10-30 深圳市健创电子有限公司 车辆驾驶的安全预警方法、系统及相关装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5803757B2 (ja) * 2012-03-13 2015-11-04 トヨタ自動車株式会社 周辺監視装置及び周辺監視方法
CN103552506B (zh) * 2013-11-06 2017-09-29 武汉双微电气股份有限公司 汽车太阳能环景监视安全预警系统
CN104670113A (zh) * 2013-11-29 2015-06-03 青岛永通电梯工程有限公司 一种机动车后视盲区显示方法
JP6020507B2 (ja) * 2014-04-14 2016-11-02 トヨタ自動車株式会社 車両用画像表示装置及び車両用画像表示方法
CN105857315B (zh) * 2016-04-28 2018-03-06 重庆长安汽车股份有限公司 主动式盲区监测系统及方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160196748A1 (en) * 2015-01-02 2016-07-07 Atieva, Inc. Automatically Activated Blind Spot Camera System
CN107776489A (zh) * 2016-08-26 2018-03-09 比亚迪股份有限公司 车辆及其全景影像的显示方法和显示系统
CN106740470A (zh) * 2016-11-21 2017-05-31 奇瑞汽车股份有限公司 一种基于全景影像系统的盲区监测方法及系统
CN109952231A (zh) * 2016-12-30 2019-06-28 金泰克斯公司 具有按需侦察视图的全显示镜
CN107161081A (zh) * 2017-05-11 2017-09-15 重庆长安汽车股份有限公司 一种右侧盲区图像自动打开系统及方法
CN109591698A (zh) * 2017-09-30 2019-04-09 上海欧菲智能车联科技有限公司 盲区检测系统、盲区检测方法和车辆
CN109204141A (zh) * 2018-09-19 2019-01-15 深圳市众鸿科技股份有限公司 车辆行驶过程中的预警方法与装置
CN111845557A (zh) * 2020-07-23 2020-10-30 深圳市健创电子有限公司 车辆驾驶的安全预警方法、系统及相关装置

Also Published As

Publication number Publication date
CN111845783A (zh) 2020-10-30
CN111845783B (zh) 2022-04-26

Similar Documents

Publication Publication Date Title
CN109552315B (zh) 全视野摄像头主机控制系统
JP4933669B2 (ja) 車載用画像表示装置
US8218007B2 (en) Camera system for a vehicle and method for controlling a camera system
DE19546391C2 (de) Bewegliche interakitv eingesetzte Arbeitsstation
JP3372944B2 (ja) 監視システム
JP5495071B2 (ja) 車両周辺監視装置
WO2012172923A1 (ja) 車両周辺監視装置
JP2007052719A5 (zh)
WO2022016730A1 (zh) 车辆驾驶的安全预警方法、系统及相关装置
JP4760562B2 (ja) 車両用周辺情報提示装置及び車両用周辺情報提示方法
JP2003158736A (ja) 監視システム
CN110809136A (zh) 一种高清全景系统
WO2022016731A1 (zh) 车辆盲区的图像处理方法、系统及相关装置
CN203580780U (zh) 内后视镜
WO2023221998A1 (zh) 车载流媒体显示系统和方法
WO2023284748A1 (zh) 一种辅助驾驶系统及车辆
CN116101168A (zh) 外后视镜系统及其控制方法、车辆
CN216184804U (zh) 一种辅助驾驶系统及车辆
CN116039538A (zh) 一种车辆的旋转式摄像头的控制方法、装置和车辆
JP6234701B2 (ja) 車両用周囲モニタ装置
JP6372556B2 (ja) 車載用画像表示装置
WO2021131481A1 (ja) 表示装置、表示方法及び表示プログラム
WO2024051614A1 (zh) 一种辅助驾驶方法及相关装置
WO2023221118A1 (zh) 信息处理方法及装置、电子设备及存储介质
US20240236273A9 (en) Vehicle occupant security systems and methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20946299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20946299

Country of ref document: EP

Kind code of ref document: A1