CN114084068A - Video display method, device and equipment for vehicle blind area - Google Patents

Video display method, device and equipment for vehicle blind area

Info

Publication number
CN114084068A
CN114084068A
Authority
CN
China
Prior art keywords
camera
vehicle
image
information
motor
Prior art date
Legal status
Granted
Application number
CN202111333119.7A
Other languages
Chinese (zh)
Other versions
CN114084068B (en)
Inventor
徐英豪
路锦文
王兴龙
Current Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202111333119.7A
Publication of CN114084068A
Application granted
Publication of CN114084068B
Legal status: Active (current)
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12 Mirror assemblies combined with other articles, e.g. clocks
    • B60R2001/1253 Mirror assemblies combined with other articles, e.g. clocks with cameras, video cameras or video screens
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • B60R2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present disclosure provides a video display method, device and equipment for a vehicle blind area, relating to vehicle driving technology. The method includes: acquiring environmental information of a vehicle collected by a sensor; if it is determined from the environmental information that the cameras need to be started, starting a first camera, a second camera and a third camera, which are located on the left rearview mirror, the right rearview mirror and the rear of the vehicle, respectively; acquiring a first image, a second image and a third image collected by the first camera, the second camera and the third camera, respectively; and generating and displaying a vehicle environment video from the first image, the second image and the third image. With the scheme provided by the present disclosure, the three cameras on the left rearview mirror, the right rearview mirror and the rear of the vehicle can be started automatically according to the environmental information collected by the sensor, and a vehicle environment video can be generated and displayed from the images they collect. The driver can therefore still see the road conditions on the left, right and rear of the vehicle in severe weather such as rain and snow, which safeguards driving safety.

Description

Video display method, device and equipment for vehicle blind area
Technical Field
The present disclosure relates to vehicle driving technologies, and in particular, to a method, an apparatus, and a device for displaying a video of a vehicle blind area.
Background
At present, when a vehicle is driven in severe weather such as rain or snow, water mist and water drops form on the side windows and the exterior rearview mirrors, so the driver cannot identify vehicles approaching from behind or obstacles to the side; the risk of an accident rises in particular when pulling over or changing lanes.
In the prior art, the defogging function of the air conditioner can be turned on to remove water mist and water drops from the side windows, and the heating function of the exterior rearview mirrors can be turned on to remove water mist and water drops from the mirrors.
However, even with these measures, water mist or water drops can remain on the side windows, so the driver still cannot see the exterior rearview mirrors through the side windows, which creates a driving risk and can lead to traffic accidents.
Disclosure of Invention
The present disclosure provides a video display method, device and equipment for a vehicle blind area, to solve the problem in the prior art that water mist or water drops can still remain on the side windows, leaving the driver unable to see the exterior rearview mirrors through the side windows, which creates a driving risk and leads to traffic accidents.
According to a first aspect of the present disclosure, there is provided a video display method of a vehicle blind area, including:
acquiring environmental information of a vehicle acquired by a sensor;
if the camera needs to be started according to the environment information, starting a first camera, a second camera and a third camera; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
acquiring a first image acquired by a first camera, a second image acquired by a second camera and a third image acquired by a third camera;
and generating and displaying a vehicle environment video according to the first image, the second image and the third image.
According to a second aspect of the present disclosure, there is provided a video display device for a vehicle blind area, comprising:
the environment information acquisition unit is used for acquiring the environment information of the vehicle acquired by the sensor;
the starting unit is used for starting the first camera, the second camera and the third camera if it is determined according to the environment information that the cameras need to be started; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
the image acquisition unit is used for acquiring a first image acquired by the first camera, a second image acquired by the second camera and a third image acquired by the third camera;
and the video generation unit is used for generating and displaying the vehicle environment video according to the first image, the second image and the third image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising a memory and a processor; wherein,
the memory for storing a computer program;
the processor is configured to read the computer program stored in the memory, and execute the video display method of the vehicle blind area according to the first aspect.
According to a fourth aspect of the present application, there is provided a vehicle on which are arranged the electronic device according to the third aspect, the sensor, the first camera, the second camera and the third camera; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
the sensor, the first camera, the second camera and the third camera are respectively electrically connected with the electronic equipment.
According to a fifth aspect of the present application, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the video display method of the vehicle blind area according to the first aspect.
According to a sixth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of video display of a vehicle blind area according to the first aspect.
The video display method, device and equipment for a vehicle blind area provided by the present disclosure include: acquiring environmental information of a vehicle collected by a sensor; if it is determined from the environmental information that the cameras need to be started, starting a first camera, a second camera and a third camera, where the first camera is located on the left rearview mirror of the vehicle, the second camera is located on the right rearview mirror of the vehicle, and the third camera is located at the rear of the vehicle; acquiring a first image collected by the first camera, a second image collected by the second camera and a third image collected by the third camera; and generating and displaying a vehicle environment video from the first image, the second image and the third image. With the scheme provided by the present disclosure, the three cameras located on the left rearview mirror, the right rearview mirror and the rear of the vehicle can be started automatically according to the environmental information of the vehicle collected by a sensor on the vehicle, and a vehicle environment video can be generated and displayed from the images collected by the three cameras. The driver can therefore still see the road conditions on the left, right and rear of the vehicle in severe weather such as rain and snow, and driving safety is safeguarded.
Drawings
Fig. 1 is a flowchart illustrating a video display method of a vehicle blind area according to an exemplary embodiment of the present disclosure;
Fig. 2 is a flowchart illustrating a video display method for a vehicle blind area according to another exemplary embodiment of the present disclosure;
Fig. 3 is a block diagram of a video display device for a vehicle blind area according to an exemplary embodiment of the present disclosure;
fig. 4 is a block diagram of a video display apparatus for a vehicle blind area according to another exemplary embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
When a vehicle is driven in severe weather such as rain or snow, water mist and water drops form on the side windows and the exterior rearview mirrors, so the driver cannot identify vehicles approaching from behind or obstacles to the side; the risk of an accident rises in particular when pulling over or changing lanes. In the prior art, the defogging function of the air conditioner can be turned on to remove water mist and water drops from the side windows, and the heating function of the exterior rearview mirrors can be turned on to remove water mist and water drops from the mirrors.
However, even with these measures, water mist or water drops can remain on the side windows, so the driver still cannot see the exterior rearview mirrors through the side windows, which creates a driving risk and can lead to traffic accidents.
To solve this technical problem, the scheme provided by the present disclosure includes a video display method for a vehicle blind area that can automatically start three cameras, located on the left rearview mirror, the right rearview mirror and the rear of the vehicle, according to environmental information of the vehicle collected by a sensor on the vehicle, and can generate and display a vehicle environment video from the images collected by the three cameras. The driver can therefore still see the road conditions on the left, right and rear of the vehicle in severe weather such as rain and snow, and driving safety is safeguarded.
Fig. 1 is a flowchart illustrating a video display method for a vehicle blind area according to an exemplary embodiment of the present disclosure.
As shown in fig. 1, the video display method for a vehicle blind area provided by the present disclosure includes:
101. and acquiring the environmental information of the vehicle acquired by the sensor.
Illustratively, the method provided by the present disclosure may be performed by an electronic device with computing capability, such as an in-vehicle unit. The electronic device can acquire the environmental information of the vehicle collected by the sensor and determine, according to that information, whether the cameras need to be started; it can then obtain the image information collected by the cameras, and generate and display the vehicle environment video based on the acquired image information.
The sensor may include, for example, a rainfall sensor, a humidity sensor, and a temperature sensor. In particular, the sensor may be located on the vehicle.
The rain sensor can be located, for example, behind the front windshield of the vehicle. The environmental information of the vehicle may include rainfall information on a front windshield of the vehicle collected by a rainfall sensor.
Two humidity sensors may be provided, located outside and inside the vehicle, respectively. The environmental information of the vehicle may further include the humidity inside and outside the vehicle collected by the two humidity sensors.
Two temperature sensors may likewise be provided, located outside and inside the vehicle, respectively. The environmental information of the vehicle may further include the temperatures inside and outside the vehicle collected by the two temperature sensors.
Specifically, after the vehicle is started, the sensor starts to acquire the environmental information of the vehicle, and the electronic device may acquire the environmental information of the vehicle acquired by the sensor.
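For illustration only, the sensor readings described above can be thought of as one bundle of values handed to the later decision step. The following Python sketch is not part of the patent; the structure and its field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentInfo:
    """One bundle of sensor readings for the camera-activation decision (hypothetical layout)."""
    rainfall: float           # rainfall level on the front windshield, from the rain sensor
    temp_inside: float        # cabin temperature, degrees Celsius
    temp_outside: float       # exterior temperature, degrees Celsius
    humidity_inside: float    # cabin relative humidity, percent
    humidity_outside: float   # exterior relative humidity, percent
```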
102. If the camera needs to be started according to the environment information, starting a first camera, a second camera and a third camera; the first camera is positioned on a left side rearview mirror of the vehicle, the second camera is positioned on a right side rearview mirror of the vehicle, and the third camera is positioned on the tail of the vehicle.
Illustratively, if the environmental information of the vehicle acquired by the electronic device reaches a preset condition, it is determined that the cameras need to be turned on. The preset condition is set in advance according to the actual situation. Specifically, when the electronic device detects that the acquired environmental information reaches the preset condition, it can send a start instruction to each camera, and each camera starts after receiving the start instruction.
Furthermore, the central control display screen in the vehicle can be used as the display device. When the environmental information acquired by the electronic device reaches the preset condition and it is determined that the cameras should be started, the central control display screen can switch its working mode, move the current working mode to the background, and enter the display mode of the Around View Monitor (AVM) to display the vehicle environment video.
Optionally, the sensor may be a rain sensor located behind the front windshield of the vehicle; in that case the environmental information collected by the sensor includes rainfall information on the front windshield, and the preset condition may be a preset rainfall threshold. If the rainfall information acquired from the rain sensor is greater than the preset rainfall threshold, it is determined that the cameras need to be started.
Optionally, the sensors may also include two temperature sensors located outside and inside the vehicle, respectively, and two humidity sensors located outside and inside the vehicle, respectively.
The environmental information of the vehicle collected by the sensors may further include temperature information inside and outside the vehicle collected by the two temperature sensors and humidity information inside and outside the vehicle collected by the two humidity sensors.
The preset conditions may further include a preset temperature difference threshold value and a preset humidity difference threshold value.
If the rainfall information acquired from the rain sensor is less than or equal to the preset rainfall threshold, but the inside/outside temperature difference indicated by the temperature information is greater than the preset temperature difference threshold and the inside/outside humidity difference indicated by the humidity information is greater than the preset humidity difference threshold, it is also determined that the cameras need to be started. In this way, even in rainy or snowy weather in which the measured rainfall stays below the threshold, if water mist or water drops form on the side windows because of the humidity and temperature differences between the inside and the outside of the vehicle, the driver can still check the conditions outside the vehicle from the image information collected by the three cameras, and driving safety is safeguarded.
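As a concrete illustration of the activation rule just described, the sketch below (reusing the EnvironmentInfo structure from the earlier sketch) checks the rainfall threshold first and then the inside/outside temperature and humidity differences. The threshold constants are hypothetical placeholders, not values given by the patent.

```python
RAIN_THRESHOLD = 0.5             # preset rainfall threshold (assumed value and units)
TEMP_DIFF_THRESHOLD = 8.0        # preset inside/outside temperature-difference threshold, assumed degrees Celsius
HUMIDITY_DIFF_THRESHOLD = 20.0   # preset inside/outside humidity-difference threshold, assumed percent RH

def cameras_needed(env: EnvironmentInfo) -> bool:
    """Return True when the three blind-area cameras should be switched on."""
    if env.rainfall > RAIN_THRESHOLD:
        return True  # first condition: rainfall above the preset threshold
    temp_diff = abs(env.temp_inside - env.temp_outside)
    humidity_diff = abs(env.humidity_inside - env.humidity_outside)
    # Second condition: rainfall at or below the threshold, but fogging is likely
    # because both the temperature and the humidity differences exceed their thresholds.
    return temp_diff > TEMP_DIFF_THRESHOLD and humidity_diff > HUMIDITY_DIFF_THRESHOLD
```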
Optionally, a reversing camera preset on the vehicle may be used as the third camera.
103. And acquiring a first image acquired by the first camera, a second image acquired by the second camera and a third image acquired by the third camera.
Illustratively, the first camera is located on the left rearview mirror of the vehicle; once turned on, it collects environment video information of the left side and the left rear of the vehicle. This environment video information may consist of multiple consecutive frames of first images.
The second camera is located on the right rearview mirror of the vehicle; once turned on, it collects environment video information of the right side and the right rear of the vehicle, which may consist of multiple consecutive frames of second images.
The third camera is located at the rear of the vehicle; once turned on, it collects environment video information behind the vehicle, which may consist of multiple consecutive frames of third images.
The electronic device can acquire a first image acquired by the first camera, a second image acquired by the second camera and a third image acquired by the third camera.
104. And generating and displaying the vehicle environment video according to the first image, the second image and the third image.
Specifically, the electronic device can stitch the first image, the second image and the third image collected at the same moment into a single image, and the stitched images of successive frames form the vehicle environment video. A display device, for example the central control display screen in the vehicle, can display the vehicle environment video for the driver.
Optionally, the electronic device may first preprocess the first image, the second image and the third image collected at the same moment; then extract first matching information from the preprocessed first image, second matching information from the preprocessed second image and third matching information from the preprocessed third image; match the three preprocessed images based on this matching information to obtain a matched image; and finally fuse the matched image to obtain a fused image. The fused images of successive frames form the vehicle environment video.
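A detailed pipeline (preprocessing, ratio matching, fusion) is described under steps 204-206 below. The sketch here is not the patent's own pipeline; it only shows, assuming the three views overlap sufficiently, how a comparable register-and-blend step could be approximated with OpenCV's built-in Stitcher.

```python
import cv2

def stitch_triple(first_img, second_img, third_img):
    """Roughly combine one synchronized image triple into a single surround frame."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    # Order the views left -> rear -> right so adjacent inputs share overlap.
    status, frame = stitcher.stitch([first_img, third_img, second_img])
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return frame
```

Successive stitched frames would then be shown on the display device to form the vehicle environment video.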
In another implementation, the electronic device may not stitch the image information collected by the three cameras, and may instead display the three image streams in a split-screen manner on the display device.
Further, after the electronic device acquires the end information, a shutdown instruction may be sent to the first camera, the second camera, and the third camera, respectively, where the shutdown instruction is used to instruct the first camera, the second camera, and the third camera to shut down.
Optionally, after the electronic device obtains the end information, the central control display screen in the electronic device may switch the working mode, end the AVM display mode, and enter the working mode before the AVM display mode.
Fig. 2 is a flowchart illustrating a video display method for a vehicle blind area according to another exemplary embodiment of the present disclosure.
As shown in fig. 2, the video display method for a vehicle blind area provided by the present disclosure includes:
201. and acquiring the environmental information of the vehicle acquired by the sensor.
Exemplarily, the principle and implementation of step 201 are similar to those of step 101, and are not described again.
202. If the camera needs to be started according to the environment information, starting a first camera, a second camera and a third camera; the first camera is positioned on a left side rearview mirror of the vehicle, the second camera is positioned on a right side rearview mirror of the vehicle, and the third camera is positioned on the tail of the vehicle.
In one example, step 202 includes the following two implementations.
In a first implementation of step 202, the sensor includes a rain sensor and the environmental information is rainfall information; if the value represented by the rainfall information is greater than the preset rainfall threshold, it is determined that the cameras need to be started, and the first camera, the second camera and the third camera are started.
Wherein the rain sensor may be located behind the front windscreen of the vehicle. The rainfall information may be rainfall information on the front windshield of the vehicle.
The preset rainfall threshold may be a rainfall value preset according to an actual situation.
Specifically, when the electronic device determines that the value represented by the acquired rainfall information is greater than the preset rainfall threshold, the cameras need to be started: the electronic device sends a start instruction to each of the first camera, the second camera and the third camera, and each camera starts after receiving its start instruction.
In a second implementation of step 202, the sensors further include humidity sensors and temperature sensors, and the environmental information further includes temperature information inside and outside the vehicle collected by the temperature sensors and humidity information inside and outside the vehicle collected by the humidity sensors. If it is determined that the value represented by the rainfall information is less than or equal to the preset rainfall threshold, that the inside/outside temperature difference represented by the temperature information is greater than the preset temperature difference threshold, and that the inside/outside humidity difference represented by the humidity information is greater than the preset humidity difference threshold, it is determined that the cameras need to be started, and the first camera, the second camera and the third camera are started.
Wherein the sensors may further include two temperature sensors located respectively outside and inside the vehicle, and two humidity sensors located respectively outside and inside the vehicle.
The environmental information of the vehicle collected by the sensors may further include temperature information inside and outside the vehicle collected by the two temperature sensors and humidity information inside and outside the vehicle collected by the two humidity sensors.
The preset conditions may further include a preset temperature difference threshold value and a preset humidity difference threshold value. The preset temperature difference threshold value can be a temperature difference value between the inside and the outside of the vehicle, which is preset according to actual conditions. The preset humidity difference threshold may be a humidity difference between the inside and the outside of the vehicle, which is preset according to an actual situation.
If the rainfall information acquired from the rain sensor is less than or equal to the preset rainfall threshold, but the inside/outside temperature difference represented by the temperature information is greater than the preset temperature difference threshold and the inside/outside humidity difference represented by the humidity information is greater than the preset humidity difference threshold, it is determined that the cameras need to be started. In this way, even in rainy or snowy weather in which the measured rainfall stays below the threshold, if water mist or water drops form on the side windows because of the humidity and temperature differences between the inside and the outside of the vehicle, the driver can still check the conditions outside the vehicle through the image information collected by the three cameras, and driving safety is safeguarded.
In one example, "turning on the first camera, the second camera, and the third camera" specifically includes the following processes:
sending a first rotation instruction to a first motor corresponding to the first camera, and sending a second rotation instruction to a second motor corresponding to the second camera; the first motor is located on the left side rearview mirror of the vehicle, the first rotation instruction instructs the first motor to rotate, and the first motor controls the first camera to rotate to a preset direction; the second motor is located on the right side rearview mirror of the vehicle, the second rotation instruction instructs the second motor to rotate, and the second motor controls the second camera to rotate to a preset direction.
And respectively sending a starting instruction to the first camera, the second camera and the third camera, wherein the starting instruction is used for indicating the first camera, the second camera and the third camera to be started.
For example, the first motor and the first camera may be mechanically connected; the second motor and the second camera may be mechanically connected.
The preset direction may be a preset direction in which the camera lens faces. For example, the preset direction of the first camera may be toward the left rear of the vehicle, and the preset direction of the second camera may be toward the right rear of the vehicle.
Specifically, after determining that the cameras need to be started, the electronic device may send the first rotation instruction to the first motor, the second rotation instruction to the second motor, and a start instruction to each of the first camera, the second camera and the third camera.
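The start-up sequence just described can be summarized with the following sketch. The send_command helper, the device identifiers and the command names are hypothetical placeholders; the patent does not specify a message format or an in-vehicle bus.

```python
def send_command(device_id: str, command: str) -> None:
    # Placeholder transport: a real implementation would write to the in-vehicle
    # network (e.g. CAN or LIN); here the intent is only printed.
    print(f"-> {device_id}: {command}")

def start_blind_area_cameras() -> None:
    # Rotate the mirror-mounted cameras to their preset directions ...
    send_command("motor_left", "ROTATE_TO_PRESET")   # first rotation instruction
    send_command("motor_right", "ROTATE_TO_PRESET")  # second rotation instruction
    # ... then power on all three cameras.
    for camera in ("camera_left", "camera_right", "camera_rear"):
        send_command(camera, "START")                # start instruction
```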
203. And acquiring a first image acquired by the first camera, a second image acquired by the second camera and a third image acquired by the third camera.
Exemplarily, step 203 is similar to step 103 in principle and implementation, and is not described again.
204. And respectively carrying out image preprocessing on the first image, the second image and the third image to obtain a preprocessed first image, a preprocessed second image and a preprocessed third image.
For example, the electronic device may perform image stitching on the first image, the second image and the third image collected at the same moment. The image stitching process is mainly divided into three steps: image preprocessing, image registration, and image fusion with boundary smoothing.
Image preprocessing mainly refers to correcting geometric distortion and suppressing noise points in the images, so that the first image, the second image and the third image have no obvious geometric distortion.
If stitching is performed on images of poor quality without preprocessing, mismatches are easily introduced. Image preprocessing therefore prepares for the subsequent image registration, so that the image quality meets the requirements of registration.
Specifically, a filter can be designed that computes an estimate of the true image from the geometrically distorted image and approximates the true image as closely as possible according to a predefined error criterion.
Specifically, the noise points of the image may be suppressed using mean filtering or median filtering.
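A minimal preprocessing sketch along the lines described above: correct geometric (lens) distortion using calibration data, then suppress noise points with a median filter. The camera matrix K and distortion coefficients D below are illustrative placeholders that would normally come from an offline calibration of each camera.

```python
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])        # assumed intrinsics for a 1280x720 camera
D = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # assumed distortion coefficients

def preprocess_frame(img: np.ndarray) -> np.ndarray:
    """Geometric distortion correction followed by noise-point suppression."""
    undistorted = cv2.undistort(img, K, D)   # remove lens distortion
    return cv2.medianBlur(undistorted, 3)    # median filtering (mean filtering also works)
```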
205. Extracting first matching information of the preprocessed first image, second matching information of the preprocessed second image and third matching information of the preprocessed third image, and matching the preprocessed first image, the preprocessed second image and the preprocessed third image based on the first matching information, the second matching information and the third matching information to obtain a matched image, wherein the matched image includes the image information of the preprocessed first image, the preprocessed second image and the preprocessed third image.
Illustratively, image registration mainly refers to extracting matching information from the preprocessed first image, the preprocessed second image and the preprocessed third image, and finding the best matches among this information to complete the alignment between the images. The quality of the registration largely determines whether the stitching succeeds.
Specifically, the first preprocessed image, the second preprocessed image, and the third preprocessed image may be matched by using a ratio matching method.
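One common way to realize the ratio matching mentioned above is Lowe's ratio test on local feature matches; the sketch below uses ORB features and then estimates a homography between two preprocessed images. The choice of ORB and the 0.75 ratio are assumptions, since the patent does not fix a particular detector or threshold.

```python
import cv2
import numpy as np

def ratio_match_homography(img_a, img_b, ratio: float = 0.75):
    """Match two preprocessed images and return the homography aligning img_a to img_b."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Ratio test: keep a match only if it is clearly better than the second-best candidate.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return None  # not enough reliable matches to align the images
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

In the three-camera case the same routine can be applied to the left/rear and rear/right image pairs.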
206. Fusing the matched images to obtain fused images; and the fused images of the frames form a vehicle environment video.
The image fusion means that after the image matching is completed, the images are stitched.
In one example, step 206 includes the following process: and carrying out image splicing on the matched images, and carrying out smoothing treatment on image connection boundaries in the matched images to obtain fused images.
Illustratively, the boundaries of the stitched images may be smoothed so that the transition across the seam looks natural. Because any two adjacent images can never be captured under exactly the same conditions, characteristics that should be identical, such as the illumination of the images, will differ slightly between them. A visible stitching gap therefore appears where an image region of one image transitions into an image region of another. Image fusion makes this stitching gap inconspicuous so that the stitched result looks natural.
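The seam smoothing described above can be illustrated with a simple feathered blend: inside the overlap band the two images are mixed with linearly varying weights, so the transition is gradual instead of an abrupt stitching gap. The sketch assumes the two inputs are already warped into a common frame, have the same height and three channels, and overlap by a fixed number of columns; it is only one possible realization of the boundary smoothing.

```python
import numpy as np

def feather_blend(left: np.ndarray, right: np.ndarray, overlap: int = 64) -> np.ndarray:
    """Blend two horizontally overlapping images with a linear weight ramp across the seam."""
    h, w_left = left.shape[:2]
    w_right = right.shape[1]
    out = np.zeros((h, w_left + w_right - overlap, 3), dtype=left.dtype)
    out[:, :w_left - overlap] = left[:, :w_left - overlap]     # left-only region
    out[:, w_left:] = right[:, overlap:]                       # right-only region
    # Weights fall from 1 to 0 for the left image and rise from 0 to 1 for the right.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    band = alpha * left[:, w_left - overlap:] + (1.0 - alpha) * right[:, :overlap]
    out[:, w_left - overlap:w_left] = band.astype(left.dtype)  # smoothed seam
    return out
```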
207. After the end information is acquired, a shutdown instruction is sent to the first camera, the second camera and the third camera respectively, and the shutdown instruction is used for indicating the first camera, the second camera and the third camera to be shut down.
For example, the electronic device may obtain the end information in various ways. Optionally, the electronic device may obtain it by receiving a voice instruction. Optionally, a threshold may be preset for the environmental information of the vehicle collected by the sensor, and the end information may be generated when it is determined that the environmental information reaches the preset threshold. Optionally, the electronic device may obtain the end information from the driver's operation of turning off the AVM display mode.
208. Sending a first reset instruction to the first motor and sending a second reset instruction to the second motor; the first reset instruction is used for indicating the first motor to reset, and the second reset instruction is used for indicating the second motor to reset.
For example, after obtaining the end information, the electronic device may simultaneously send a shutdown instruction to each of the first camera, the second camera and the third camera, send a first reset instruction to the first motor, and send a second reset instruction to the second motor.
The first camera, the second camera and the third camera can respectively receive a shutdown instruction sent by the electronic device and execute shutdown. The first motor can receive a first reset instruction and execute reset; the second motor may receive a second reset command and perform a reset.
Furthermore, the first motor and the first camera can be mechanically connected, and the first camera can be controlled to rotate in the resetting process of the first motor. For example, after receiving the first reset instruction, the first motor may control the first camera to rotate until the lens faces the ground.
Furthermore, the second motor and the second camera can be mechanically connected, and the second camera can be controlled to rotate in the resetting process of the second motor. For example, after receiving the second reset instruction, the second motor may control the second camera to rotate until the lens faces the ground.
Optionally, after the electronic device obtains the end information, the central control display screen in the electronic device may switch the working mode, end the AVM display mode, and enter the working mode before the AVM display mode.
Fig. 3 is a block diagram of a video display apparatus for a vehicle blind area according to an exemplary embodiment of the present disclosure.
As shown in fig. 3, the present disclosure provides a video display device 300 for a vehicle blind area, including:
an environmental information obtaining unit 310, configured to obtain environmental information of the vehicle collected by the sensor;
the starting unit 320 is configured to start the first camera, the second camera and the third camera if it is determined that the cameras need to be started according to the environment information; the first camera is positioned on a left side rearview mirror of the vehicle, the second camera is positioned on a right side rearview mirror of the vehicle, and the third camera is positioned on the tail of the vehicle;
the image acquiring unit 330 is configured to acquire a first image acquired by a first camera, a second image acquired by a second camera, and a third image acquired by a third camera;
and a video generating unit 340 for generating and displaying the vehicle environment video according to the first image, the second image and the third image.
Fig. 4 is a block diagram of a video display apparatus for a vehicle blind area according to another exemplary embodiment of the present disclosure.
As shown in fig. 4, on the basis of the above-mentioned embodiment, in the video display apparatus 400 for vehicle blind area provided by the present application,
optionally, the sensor includes a rain sensor and the environmental information includes rainfall information, and the starting unit 320 is configured to determine that the cameras need to be started if the value represented by the rainfall information is greater than a preset rainfall threshold;
optionally, the sensor may further include a humidity sensor and the environmental information may further include humidity information, and the starting unit 320 may be further configured to determine that the cameras need to be started if the value represented by the humidity information is greater than a preset humidity threshold;
optionally, the sensor may further include a temperature sensor and the environmental information may further include temperature information, and the starting unit 320 may be further configured to determine that the cameras need to be started if the value represented by the temperature information is less than a preset temperature threshold.
The starting unit 320 may be further configured to send a first rotation instruction to a first motor corresponding to the first camera, and send a second rotation instruction to a second motor corresponding to the second camera; the first motor is positioned on a left side rearview mirror of the vehicle, the first rotation instruction is used for indicating the first motor to rotate, and the first motor is used for controlling the first camera to rotate to a preset direction; the second motor is positioned on the right side rearview mirror of the vehicle, the second rotation instruction is used for indicating the second motor to rotate, and the second motor is used for controlling the second camera to rotate to a preset direction;
and respectively sending a starting instruction to the first camera, the second camera and the third camera, wherein the starting instruction is used for indicating the first camera, the second camera and the third camera to be started.
The starting unit 320 may be further configured to send a shutdown instruction to the first camera, the second camera, and the third camera after the end information is obtained, where the shutdown instruction is used to instruct the first camera, the second camera, and the third camera to shutdown;
sending a first reset instruction to the first motor and sending a second reset instruction to the second motor; the first reset instruction is used for indicating the first motor to reset, and the second reset instruction is used for indicating the second motor to reset.
In the video display apparatus 400 for a blind area of a vehicle provided by the present application, the video generating unit 340 further includes:
an image preprocessing module 341, configured to perform image preprocessing on the first image, the second image, and the third image, respectively, to obtain a preprocessed first image, a preprocessed second image, and a preprocessed third image;
the image registration module 342 is configured to extract first matching information of the preprocessed first image, second matching information of the preprocessed second image and third matching information of the preprocessed third image, and to perform matching processing on the preprocessed first image, the preprocessed second image and the preprocessed third image based on the first matching information, the second matching information and the third matching information to obtain a matched image, where the matched image includes the image information of the preprocessed first image, the preprocessed second image and the preprocessed third image;
the image fusion module 343 is configured to perform fusion processing on the matched image to obtain a fused image; and the fused images of the frames form a vehicle environment video.
The image fusion module 343 is specifically configured to perform image stitching on the matched image, and perform smoothing on a boundary where the images in the matched image are connected, so as to obtain a fused image.
Fig. 5 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
As shown in fig. 5, the electronic device provided in this embodiment includes:
a memory 501; a processor 502; and a computer program; wherein the computer program is stored in the memory 501 and configured to be executed by the processor 502 to implement the video display method for a vehicle blind area described in any of the embodiments above.
The embodiment also provides a vehicle, wherein the vehicle is provided with the electronic equipment, the sensor, the first camera, the second camera and the third camera as shown in fig. 5; the first camera is positioned on a left side rearview mirror of the vehicle, the second camera is positioned on a right side rearview mirror of the vehicle, and the third camera is positioned on the tail of the vehicle;
the sensor, the first camera, the second camera and the third camera are respectively electrically connected with the electronic equipment.
In one implementation, the sensor is a rain sensor, or a humidity sensor, or a temperature sensor.
In one implementation manner, a first motor and a second motor are further arranged on the vehicle; wherein the first motor is located on a left side rearview mirror of the vehicle and the second motor is located on a right side rearview mirror of the vehicle.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which is executed by a processor to implement any of the above-described video display methods for a vehicle blind area.
The present embodiment also provides a computer program product comprising a computer program that, when executed by a processor, implements any of the above-described methods for video display of a vehicle blind area.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A video display method for a vehicle blind area is characterized by comprising the following steps:
acquiring environmental information of a vehicle acquired by a sensor;
if the camera needs to be started according to the environment information, starting a first camera, a second camera and a third camera; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
acquiring a first image acquired by a first camera, a second image acquired by a second camera and a third image acquired by a third camera;
and generating and displaying a vehicle environment video according to the first image, the second image and the third image.
2. The method of claim 1, wherein the sensor comprises a rain sensor, and the environmental information comprises rainfall information; determining that the camera needs to be started according to the environment information comprises: if the numerical value represented by the rainfall information is larger than a preset rainfall threshold value, determining that the camera needs to be started.
3. The method of claim 2, wherein the sensors further comprise a temperature sensor and a humidity sensor; the environmental information further comprises temperature information inside and outside the vehicle collected by the temperature sensor, and humidity information inside and outside the vehicle collected by the humidity sensor; determining that the camera needs to be started according to the environment information comprises:
if it is determined that the numerical value represented by the rainfall information is smaller than or equal to the preset rainfall threshold value, the numerical value of the temperature difference between the inside and the outside of the vehicle represented by the temperature information is larger than a preset temperature threshold value, and the numerical value of the humidity difference between the inside and the outside of the vehicle represented by the humidity information is larger than a preset humidity threshold value, determining that the camera needs to be started.
4. The method of claim 1, wherein turning on the first camera, the second camera, and the third camera comprises:
sending a first rotation instruction to a first motor corresponding to the first camera, and sending a second rotation instruction to a second motor corresponding to the second camera; the first motor is positioned on a left side rearview mirror of the vehicle, the first rotation instruction is used for indicating the first motor to rotate, and the first motor is used for controlling the first camera to rotate to a preset direction; the second motor is positioned on a right side rearview mirror of the vehicle, the second rotation instruction is used for indicating the second motor to rotate, and the second motor is used for controlling the second camera to rotate to a preset direction;
and sending a starting instruction to the first camera, the second camera and the third camera respectively, wherein the starting instruction is used for indicating the first camera, the second camera and the third camera to be started.
5. The method of claim 4, further comprising:
after the end information is acquired, respectively sending a shutdown instruction to the first camera, the second camera and the third camera, wherein the shutdown instruction is used for indicating the first camera, the second camera and the third camera to be shut down;
sending a first reset instruction to the first motor and sending a second reset instruction to the second motor; the first reset instruction is used for indicating the first motor to reset, and the second reset instruction is used for indicating the second motor to reset.
6. The method according to any one of claims 1-5, wherein generating and displaying a vehicle environment video from the first image, the second image, and the third image comprises:
respectively carrying out image preprocessing on the first image, the second image and the third image to obtain a preprocessed first image, a preprocessed second image and a preprocessed third image;
extracting first matching information of the preprocessed first image, second matching information of the preprocessed second image and third matching information of the preprocessed third image, and matching the preprocessed first image, the preprocessed second image and the preprocessed third image based on the first matching information, the second matching information and the third matching information to obtain a matched image, wherein the matched image comprises image information of the preprocessed first image, the preprocessed second image and the preprocessed third image;
fusing the matched images to obtain fused images; and the fused images of the frames form the vehicle environment video.
7. The method according to claim 6, wherein the fusing the matching processed images to obtain fused images comprises:
and carrying out image splicing on the matched images, and carrying out smoothing treatment on image connection boundaries in the matched images to obtain the fused images.
8. A video display device for a vehicle blind area, comprising:
the environment information acquisition unit is used for acquiring the environment information of the vehicle acquired by the sensor;
the starting unit is used for starting the first camera, the second camera and the third camera if it is determined according to the environment information that the cameras need to be started; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
the image acquisition unit is used for acquiring a first image acquired by the first camera, a second image acquired by the second camera and a third image acquired by the third camera;
and the video generation unit is used for generating and displaying the vehicle environment video according to the first image, the second image and the third image.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-7.
10. A vehicle, characterized in that the electronic device, the sensor, the first camera, the second camera and the third camera of claim 9 are provided on the vehicle; wherein the first camera is located on a left side rearview mirror of the vehicle, the second camera is located on a right side rearview mirror of the vehicle, and the third camera is located on a rear portion of the vehicle;
the sensor, the first camera, the second camera and the third camera are respectively electrically connected with the electronic equipment.
11. The vehicle of claim 10, wherein the sensor is a rain sensor, or a humidity sensor, or a temperature sensor.
12. The vehicle according to claim 10 or 11, wherein a first motor and a second motor are further provided on the vehicle; wherein the first motor is located on a left side rearview mirror of the vehicle and the second motor is located on a right side rearview mirror of the vehicle.
13. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
14. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the method of any one of the preceding claims 1-7.
CN202111333119.7A 2021-11-11 2021-11-11 Video display method, device and equipment for dead zone of vehicle Active CN114084068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111333119.7A CN114084068B (en) 2021-11-11 2021-11-11 Video display method, device and equipment for dead zone of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111333119.7A CN114084068B (en) 2021-11-11 2021-11-11 Video display method, device and equipment for dead zone of vehicle

Publications (2)

Publication Number Publication Date
CN114084068A true CN114084068A (en) 2022-02-25
CN114084068B CN114084068B (en) 2024-08-20

Family

ID=80299906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111333119.7A Active CN114084068B (en) 2021-11-11 2021-11-11 Video display method, device and equipment for dead zone of vehicle

Country Status (1)

Country Link
CN (1) CN114084068B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE29913882U1 (en) * 1999-08-09 1999-12-23 Fischer, Wolfgang, 63538 Großkrotzenburg On-board camera system for motor vehicles
CN104875681A (en) * 2015-06-16 2015-09-02 四川长虹佳华信息产品有限责任公司 Dynamic vehicle-mounted camera control method based on application scenarios
US20170210292A1 (en) * 2016-01-26 2017-07-27 Jonathan Allen Back-Up Panorama Camera
CN208021327U (en) * 2018-03-27 2018-10-30 山东华宇工学院 A kind of automotive internal electronic rearview system
CN109624853A (en) * 2018-12-12 2019-04-16 北京汽车集团有限公司 Extend the method and apparatus at vehicle visual angle
CN110300285A (en) * 2019-07-17 2019-10-01 北京智行者科技有限公司 Panoramic video acquisition method and system based on unmanned platform

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117201748A (en) * 2023-11-06 2023-12-08 博泰车联网(南京)有限公司 Video stream display method, device, equipment and storage medium
CN117201748B (en) * 2023-11-06 2024-01-26 博泰车联网(南京)有限公司 Video stream display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114084068B (en) 2024-08-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant