WO2022204854A1 - Blind-spot image acquisition method and related terminal device - Google Patents

Blind-spot image acquisition method and related terminal device

Info

Publication number
WO2022204854A1
WO2022204854A1 (PCT/CN2021/083514)
Authority
WO
WIPO (PCT)
Prior art keywords
target
frame
image
images
historical driving
Prior art date
Application number
PCT/CN2021/083514
Other languages
English (en)
French (fr)
Inventor
王笑悦
张峻豪
黄为
张宇腾
彭惠东
陈晓丽
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN202180001469.5A (CN113228135B)
Priority to PCT/CN2021/083514
Publication of WO2022204854A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/167 - Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 - Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 - Mixing

Definitions

  • The present application relates to the technical field of terminals, and in particular to a method for acquiring a blind-spot image and a related terminal device.
  • The camera system on a terminal can provide a wider perspective during driving through visual information, giving users more intuitive and accurate information about the terminal's surroundings.
  • The environment around a vehicle can be obtained by processing images collected simultaneously by surround-view cameras, but the blind area under the vehicle remains invisible in existing ordinary vehicle-mounted panoramic surround-view systems.
  • The underside of a vehicle is prone to scratches, and stones squeezed out by the tires can strike the chassis. If a camera were installed only in the blind area at the bottom of the terminal, varying road conditions would damage the camera or block its view, so the visual information of the blind area at the bottom of the terminal could not be collected.
  • Embodiments of the present application provide a method for acquiring a blind-spot image and a related terminal device, which can accurately obtain an image of the blind area at the bottom of the terminal.
  • The blind-spot image acquisition method provided by the present application may be executed by an electronic device, a blind-spot image acquisition apparatus, or the like.
  • An electronic device here refers to a device that can be abstracted into a computer system and supports image processing; it may also be referred to as an image processing device.
  • The blind-spot image acquisition apparatus may be the whole electronic device, or a part of it, such as a chip that supports image processing and blind-spot image acquisition, for example a system chip or an image chip.
  • The system chip is also called a system on chip (SoC).
  • The blind-spot image acquisition apparatus may be a device such as the on-board computer of a smart vehicle, or a system chip or image acquisition chip that can be installed in the computer system or image processing system of an intelligent terminal.
  • An embodiment of the present application provides a method for acquiring a blind-spot image, which may include:
  • determining the pose relationship between the current frame of driving images and multiple frames of historical driving images;
  • determining the target distance between an obstacle and the target terminal;
  • when the target distance is less than or equal to a preset distance threshold, obtaining, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame, the filling pixel of each of the multiple target points in the blind area, where the filling pixel is the pixel with the shooting time nearest to the target point among the historical frames; and
  • outputting the blind-area image based on the filling pixels corresponding to the multiple target points.
  • In this way, for each target point in the blind area, the pixel with the nearest shooting time is selected from the multiple frames of driving images as its filling pixel, and the filled pixels are assembled into the blind-area image.
  • Selecting the pixels with the most recent shooting time as the filling pixels of the target points in the blind area yields a blind-area image with high definition and reduces the probability of occlusion by obstacles. Filling the blind area under the vehicle in this way lets the driver observe the vehicle's position in multiple directions, helping to prevent tire wear and chassis scratches, so that the user can better observe the surroundings of the terminal, the tires, and the underside of the terminal, assist parking and driving, avoid loss accidents to the greatest extent, and greatly improve driving experience and safety.
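The fill step described above can be sketched in a few lines. This is an illustrative reduction, not the patented algorithm: `Frame`, `fill_blind_area`, and the per-frame `pixels` lookup are hypothetical names, and the projection of each blind-area target point into each historical frame is assumed to have been done already.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A hypothetical historical driving frame: its shooting time plus the
    blind-area target points whose pixels it can supply (assumed precomputed
    by projecting each ground point through the frame's camera pose)."""
    time: float
    pixels: dict  # target point id -> (u, v) pixel coordinates

def fill_blind_area(target_points, history):
    """For each target point, pick the pixel from the historical frame with
    the most recent shooting time that observed it."""
    fills = {}
    # Newest frames first, so the first hit is the nearest shooting time.
    for frame in sorted(history, key=lambda f: f.time, reverse=True):
        for tp in target_points:
            if tp not in fills and tp in frame.pixels:
                fills[tp] = frame.pixels[tp]
    return fills

# Toy usage: point "b" is missing from the newest frame (e.g. it was
# occluded), so it falls back to the older one.
history = [
    Frame(time=1.0, pixels={"a": (10, 10), "b": (20, 20)}),
    Frame(time=2.0, pixels={"a": (11, 12)}),
]
fills = fill_blind_area(["a", "b"], history)
```

Here `fills["a"]` comes from the newer (t = 2.0) frame while `fills["b"]` falls back to the older one, which is exactly the nearest-shooting-time rule the text describes.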
  • The multiple frames of historical driving images are m frames of historical driving images. When the target distance is less than or equal to the preset distance threshold, obtaining the filling pixel of each of the multiple target points in the blind area from the multi-frame historical driving images, according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, includes: sorting the m frames of historical driving images by shooting time, where m is an integer greater than 1; and obtaining, according to the pose relationship and the blind-area position information, a target pixel set from the xth frame of historical driving images as the set of filling pixels corresponding to the multiple target points, where the target pixel set includes the filling pixels corresponding to the multiple target points and the xth frame is the frame with the most recent shooting time among the m frames.
  • The method further includes: when the target pixel set covers only some of the target points, sequentially obtaining target pixel sets from the (x+1)th frame of historical driving images onward, as the filling pixel sets for the remaining target points, until all the target points in the blind area are filled.
  • By implementing this embodiment, after sorting by time, the historical driving image with the most recent shooting time is selected from the multiple frames to fill the blind area, which effectively avoids problems such as occlusion by obstacles and frame delay, and the final blind-area image is relatively clear, improving the display of the blind area at the bottom of the terminal.
  • The target points are divided into first-type target points and second-type target points, and the target pixel set is the set of first-type target pixels in the xth frame of historical driving images.
  • The method further includes: when the target distance is greater than the preset distance threshold, obtaining, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, the filling pixel of each of the multiple target points in the blind area; where, when the (x+1)th frame of historical driving images includes pixels of the same target point as the xth frame, the filling pixel of that target point is taken from whichever of the two frames contains the larger number of pixels corresponding to it.
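The tie-breaking rule above can be illustrated with a toy sketch. `choose_fill_pixel` and the frame dictionaries are hypothetical names, and preferring the newer frame when the counts are equal is an assumption the text does not settle.

```python
def choose_fill_pixel(frame_x, frame_x1, target):
    """When two adjacent historical frames both observe a target point,
    take the pixel from the frame that has more pixels covering that
    point (i.e. the frame that observes it at finer resolution).
    Each frame maps a target id to a list of (u, v) pixels covering it."""
    px_x = frame_x.get(target, [])
    px_x1 = frame_x1.get(target, [])
    if not px_x and not px_x1:
        return None  # neither frame observed the point
    # Larger pixel count wins; ties go to the newer frame (an assumption).
    winner = px_x1 if len(px_x1) >= len(px_x) else px_x
    return winner[0]

frame_x = {"p": [(5, 5)]}                       # one pixel covers point "p"
frame_x1 = {"p": [(8, 8), (8, 9), (9, 8)]}      # three pixels cover "p"
chosen = choose_fill_pixel(frame_x, frame_x1, "p")  # taken from frame x+1
```

The (x+1)th frame wins here because it contributes three pixels for the point against one, matching the "larger number of pixels" rule.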
  • Before the pose relationship between the current frame of driving images and the multiple frames of historical driving images is determined, the method further includes: acquiring the current frame of driving images and the multiple frames of historical driving images through a multi-view camera.
  • Acquiring the driving images through a multi-view camera greatly improves the clarity of the acquired blind-spot images and improves driving safety. Note that the camera may be installed facing the forward direction of the target terminal, according to the driving requirements of the terminal.
  • Before the pose relationship between the current frame of driving images and the multiple frames of historical driving images is determined, the method further includes: acquiring the current frame of driving images and the multiple frames of historical driving images through a monocular camera.
  • Acquiring the driving images through a monocular camera greatly relaxes the hardware requirements of the blind-spot image acquisition method, makes it easier to deploy widely, and improves driving safety. Note that the camera may be installed facing the forward direction of the target terminal, according to the driving requirements.
  • Before the pose relationship between the current frame of driving images and the multiple frames of historical driving images is determined, the method further includes: acquiring the current frame of driving images and the multiple frames of historical driving images through a monocular camera; obtaining speed information of the target terminal; obtaining, based on the speed information, distance information between adjacent frames of driving images; and determining, based on the distance information, a scale value for the monocular camera's depth estimation, where the scale value indicates the size of a unit length during depth estimation. Determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images then includes: determining, based on the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
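The scale recovery described here can be sketched numerically. Monocular visual odometry yields translation only up to an unknown scale; the vehicle's speed fixes it, since the metric distance travelled between two adjacent frames anchors the unit length used in depth estimation. Function names and units below are assumptions for illustration.

```python
def depth_scale_from_speed(speed_mps, frame_dt_s):
    """Metric distance travelled between two adjacent frames: this value
    anchors the otherwise unknown scale of monocular depth estimation."""
    return speed_mps * frame_dt_s

def scale_translation(unit_translation, scale):
    """Scale an up-to-scale (unit-norm) camera translation into metres."""
    return [scale * t for t in unit_translation]

# At 10 m/s with frames 0.1 s apart, adjacent frames are 1.0 m apart,
# so a unit forward translation of (0, 0, 1) becomes 1.0 m of motion.
scale = depth_scale_from_speed(10.0, 0.1)
t_metric = scale_translation([0.0, 0.0, 1.0], scale)
```

In practice the per-frame interval would come from the camera's timestamps and the speed from the vehicle bus; the sketch only shows how the two combine into the unit-length value the text calls the size of a unit length.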
  • Determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images includes: performing image feature detection on the current frame of driving images and the multiple frames of historical driving images to obtain the image feature points between them, where the image feature points are points commonly viewed by the current frame and the historical frames; and determining, based on the image feature points and the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • In this way, the pose relationship between multiple frames of driving images is determined from the feature points shared between them, which improves the accuracy of the pose estimation and thereby the efficiency of blind-spot image acquisition.
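As one hedged illustration of pose estimation from matched feature points (not necessarily the method used in the embodiments), once feature depths are fixed by the scale value, the rigid transform between two frames can be recovered from matched 3-D points with the classic Kabsch/SVD alignment:

```python
import numpy as np

def estimate_pose(points_prev, points_curr):
    """Recover the rigid transform (R, t) such that q ~= R @ p + t for
    matched 3-D feature points p in the previous frame and q in the
    current frame, via the Kabsch/SVD alignment."""
    p = np.asarray(points_prev, dtype=float)
    q = np.asarray(points_curr, dtype=float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: the same points translated by (1, 0, 0) with no rotation.
prev = [[0, 0, 5], [1, 0, 6], [0, 1, 7], [2, 1, 5]]
curr = [[1, 0, 5], [2, 0, 6], [1, 1, 7], [3, 1, 5]]
R, t = estimate_pose(prev, curr)
```

A production pipeline would first detect and match 2-D features (the common viewpoints the text mentions) and lift them to 3-D using the scale value; the alignment step above is one standard way to turn those matches into a pose.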
  • An embodiment of the present application provides a blind-spot image acquisition apparatus, including:
  • a first determining unit, configured to determine the pose relationship between the current frame of driving images and multiple frames of historical driving images, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the forward direction of the target terminal;
  • a second determining unit, configured to determine the target distance between an obstacle and the target terminal;
  • a first acquisition unit, configured to: when the target distance is less than or equal to a preset distance threshold, obtain, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame, the filling pixel of each of the multiple target points in the blind area, where the filling pixel is the pixel with the shooting time nearest to the target point among the historical frames; and
  • an output unit, configured to output the blind-area image based on the filling pixels corresponding to the multiple target points.
  • The multiple frames of historical driving images are m frames of historical driving images, and the first acquisition unit is specifically configured to: when the target distance is less than or equal to the preset distance threshold, sort the m frames of historical driving images by shooting time, where m is an integer greater than 1; and, according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, obtain a target pixel set from the xth frame of historical driving images as the set of filling pixels corresponding to the multiple target points, where the target pixel set includes the filling pixels corresponding to the multiple target points and the xth frame is the frame with the most recent shooting time among the m frames.
  • The first acquisition unit is further configured to: when the target pixel set covers only some of the target points, sequentially obtain target pixel sets from the (x+1)th frame of historical driving images onward, as the filling pixel sets for the remaining target points, until all the target points in the blind area are filled.
  • The target points are divided into first-type target points and second-type target points, and the target pixel set is the set of first-type target pixels in the xth frame of historical driving images.
  • The first acquisition unit is further configured to: when the target distance is greater than the preset distance threshold, obtain the filling pixels of the blind area from the multiple frames of historical driving images according to the pose relationship and the blind-area position information corresponding to the current frame; where, when the (x+1)th frame of historical driving images includes pixels of the same target point as the xth frame, the filling pixel of that target point is taken from whichever of the two frames contains the larger number of pixels corresponding to it.
  • The apparatus further includes: a second acquisition unit, configured to acquire, before the pose relationship between the current frame of driving images and the multiple frames of historical driving images is determined, the current frame of driving images and the multiple frames of historical driving images through a monocular camera; or to acquire the current frame of driving images and the multiple frames of historical driving images through a multi-view camera.
  • The apparatus further includes: a third acquisition unit, configured to: before the pose relationship between the current frame of driving images and the multiple frames of historical driving images is determined, acquire the current frame of driving images and the multiple frames of historical driving images through a monocular camera; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames of driving images; and determine, based on the distance information, a scale value for the monocular camera's depth estimation, where the scale value indicates the size of a unit length during depth estimation. The first determining unit is specifically configured to: determine, based on the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • The first determining unit is specifically configured to: perform image feature detection on the current frame of driving images and the multiple frames of historical driving images to obtain the image feature points between them, the image feature points being points commonly viewed by the current frame and the historical frames; and determine, based on the image feature points and the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • An embodiment of the present application provides an apparatus, which may include a processor, where the processor is configured to:
  • determine the pose relationship between the current frame of driving images and multiple frames of historical driving images;
  • determine the target distance between an obstacle and the target terminal;
  • when the target distance is less than or equal to a preset distance threshold, obtain, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame, the filling pixel of each of the multiple target points in the blind area, where the filling pixel is the pixel with the shooting time nearest to the target point among the historical frames; and
  • output the blind-area image based on the filling pixels corresponding to the multiple target points.
  • The multiple frames of historical driving images are m frames of historical driving images. The processor is specifically configured to: when the target distance is less than or equal to a preset distance threshold, sort the m frames of historical driving images by shooting time, where m is an integer greater than 1; and, according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, obtain a target pixel set from the xth frame of historical driving images as the set of filling pixels corresponding to the multiple target points, where the target pixel set includes the filling pixels corresponding to the multiple target points and the xth frame is the frame with the most recent shooting time among the m frames.
  • The processor is further configured to: when the target pixel set covers only some of the target points, sequentially obtain target pixel sets from the (x+1)th frame of historical driving images onward, as the filling pixel sets for the remaining target points, until all the target points in the blind area are filled.
  • The target points are divided into first-type target points and second-type target points, and the target pixel set is the set of first-type target pixels in the xth frame of historical driving images.
  • The processor is further configured to: when the target distance is greater than the preset distance threshold, obtain the filling pixels of the target points in the blind area from the multiple frames of historical driving images according to the pose relationship and the blind-area position information corresponding to the current frame of driving images.
  • The processor is further configured to: before determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images, acquire the current frame of driving images and the multiple frames of historical driving images through a multi-view camera.
  • The processor is further configured to: before determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images, acquire the current frame of driving images and the multiple frames of historical driving images through a monocular camera; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames of driving images; and determine, based on the distance information, a scale value for the monocular camera's depth estimation, where the scale value indicates the size of a unit length during depth estimation. The processor is specifically configured to: determine, based on the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • The processor is specifically configured to: perform image feature detection on the current frame of driving images and the multiple frames of historical driving images to obtain the image feature points between them, the image feature points being points commonly viewed by the current frame and the historical frames; and determine, based on the image feature points and the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • An embodiment of the present application provides an electronic device, which may include a processor and a memory, where the memory stores program code for blind-spot image acquisition and the processor calls that program code to implement:
  • determining the pose relationship between the current frame of driving images and multiple frames of historical driving images;
  • determining the target distance between an obstacle and the target terminal;
  • when the target distance is less than or equal to a preset distance threshold, obtaining, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame, the filling pixel of each of the multiple target points in the blind area, where the filling pixel is the pixel with the shooting time nearest to the target point among the historical frames; and
  • outputting the blind-area image based on the filling pixels corresponding to the multiple target points.
  • The multiple frames of historical driving images are m frames of historical driving images. The processor is further configured to call the blind-spot image acquisition program code to execute: when the target distance is less than or equal to a preset distance threshold, sort the m frames of historical driving images by shooting time, where m is an integer greater than 1; and, according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, obtain a target pixel set from the xth frame of historical driving images as the set of filling pixels corresponding to the multiple target points, where the target pixel set includes the filling pixels corresponding to the multiple target points and the xth frame is the frame with the most recent shooting time among the m frames.
  • The processor is further configured to call the blind-spot image acquisition program code to execute: when the target pixel set covers only some of the target points, sequentially obtain target pixel sets from the (x+1)th frame of historical driving images onward, as the filling pixel sets for the remaining target points, until all the target points in the blind area are filled.
  • The target points are divided into first-type target points and second-type target points, and the target pixel set is the set of first-type target pixels in the xth frame of historical driving images.
  • The processor is further configured to call the blind-spot image acquisition program code to execute: when the target distance is greater than the preset distance threshold, obtain, from the multiple frames of historical driving images and according to the pose relationship and the blind-area position information corresponding to the current frame of driving images, the filling pixel of each of the multiple target points in the blind area; where, when the (x+1)th frame of historical driving images includes pixels of the same target point as the xth frame, the filling pixel of that target point is taken from whichever of the two frames contains the larger number of pixels corresponding to it.
  • The processor is further configured to call the blind-spot image acquisition program code to execute: before determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images, acquire the current frame of driving images and the multiple frames of historical driving images through a monocular camera; or acquire the current frame of driving images and the multiple frames of historical driving images through a multi-view camera.
  • The processor is further configured to call the blind-spot image acquisition program code to execute: before determining the pose relationship between the current frame of driving images and the multiple frames of historical driving images, acquire the current frame of driving images and the multiple frames of historical driving images through the monocular camera; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames of driving images; and determine, based on the distance information, a scale value for the monocular camera's depth estimation, where the scale value indicates the size of a unit length during depth estimation. The processor is specifically configured to call the blind-spot image acquisition program code to execute: determine, based on the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • The processor is specifically configured to call the blind-spot image acquisition program code to execute: perform image feature detection on the current frame of driving images and the multiple frames of historical driving images to obtain the image feature points between them, the image feature points being points commonly viewed by the current frame and the historical frames; and determine, based on the image feature points and the scale value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • An embodiment of the present application provides a computer storage medium for storing the computer software instructions used by the blind-spot image acquisition method provided in the first aspect above, including a program designed to execute the above aspects.
  • An embodiment of the present application provides a computer program that includes instructions which, when executed by a computer, enable the computer to perform the procedure of the blind-spot image acquisition method of the first aspect.
  • An embodiment of the present application provides an intelligent vehicle including an image processing system, where the image processing system is configured to perform the corresponding functions in the blind-spot image acquisition method provided in the first aspect.
  • The present application provides a chip system, where the chip system includes a processor configured to support an electronic device in implementing the functions involved in the first aspect above, for example generating or processing the information involved in the above blind-spot image acquisition method.
  • The chip system may further include a memory for storing the program instructions and data necessary for the apparatus.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • In this way, for each target point in the blind area, the pixel with the nearest shooting time is selected from the multiple frames of driving images as the filling pixel, so as to obtain the blind-area image.
  • Selecting the pixels with the most recent shooting time as the filling pixels of the target points in the blind area yields a blind-area image with high definition and reduces the probability of occlusion by obstacles. Filling the blind area under the vehicle in this way lets the driver observe the vehicle's position in multiple directions, helping to prevent tire wear and chassis scratches, so that the user can better observe the surroundings of the terminal, the tires, and the underside of the terminal, assist parking and driving, avoid loss accidents to the greatest extent, and greatly improve driving experience and safety.
  • FIG. 1 is a functional block diagram of an intelligent vehicle 001 provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a computing device in an intelligent vehicle provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the architecture of a blind spot image acquisition system provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for acquiring a blind spot image provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of determining a pose relationship between multiple frames of driving images according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a scene when a vehicle is driving according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a target point and a filling pixel point provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of filling a blind area provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another blind area filling provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of distribution of various first-type target points and second-type target points provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of various blind zone regions provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a scenario of a target vehicle provided in an embodiment of the present application in an application scenario.
  • FIG. 13 is a schematic flowchart of a method for acquiring a blind spot image in an application scenario provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a blind spot image that can be provided by one frame of historical driving images according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a blind spot image that can be provided by a multi-frame historical driving image provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of an apparatus for acquiring a blind spot image provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of another device for acquiring an image of a blind spot provided by an embodiment of the present application.
  • terminals, smart terminals, target terminals, terminal devices, etc. involved in the embodiments of the present application may include, but are not limited to, vehicles, movable robots, movable terminal devices, and the like.
  • an intelligent vehicle is taken as an example below to describe one of the terminal devices installed with the blind spot image acquisition system based on the embodiments of the present application.
  • FIG. 1 is a functional block diagram of an intelligent vehicle 001 provided by an embodiment of the present application.
  • the intelligent vehicle 001 may be configured in a fully or partially autonomous driving mode.
  • the intelligent vehicle 001 can control itself while in an autonomous driving mode, and can determine the current state of the vehicle and its surrounding environment, determine a possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and control the intelligent vehicle 001 based on the determined information.
  • the intelligent vehicle 001 may be placed to operate without human interaction.
  • Intelligent vehicle 001 may include various subsystems, such as travel system 202 , sensor system 204 , control system 206 , one or more peripherals 208 and power supply 210 , computer system 212 , and user interface 216 .
  • intelligent vehicle 001 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the intelligent vehicle 001 may be wired or wirelessly interconnected.
  • Travel system 202 may include components that provide powered motion for intelligent vehicle 001 .
  • travel system 202 may include engine 218 , energy source 219 , transmission 220 , and wheels/tires 221 .
  • the engine 218 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a gasoline engine and electric motor hybrid engine, an internal combustion engine and an air compression engine hybrid engine.
  • Engine 218 converts energy source 219 into mechanical energy.
  • Examples of energy sources 219 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. Energy source 219 may also provide energy to other systems of intelligent vehicle 001 .
  • Transmission 220 may transmit mechanical power from engine 218 to wheels 221 .
  • Transmission 220 may include a gearbox, a differential, and a driveshaft.
  • transmission 220 may also include other devices, such as clutches.
  • the drive shafts may include one or more axles that may be coupled to one or more wheels 221 .
  • Sensor system 204 may include several sensors that sense information about the environment surrounding intelligent vehicle 001 .
  • the sensor system 204 may include a positioning system 222 (the positioning system may be a global positioning system (GPS) system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 224, a radar 226 , a laser rangefinder 228 and a camera 230 .
  • the sensor system 204 may also include sensors that monitor the internal systems of the smart vehicle 001 (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). This detection and identification is a critical function for the safe operation of the autonomous intelligent vehicle 001.
  • the positioning system 222 may be used to estimate the geographic location of the intelligent vehicle 001 .
  • the IMU 224 is used to sense position and orientation changes of the intelligent vehicle 001 based on inertial acceleration.
  • IMU 224 may be a combination of an accelerometer and a gyroscope.
  • IMU 224 may be used to measure the curvature of smart vehicle 001.
  • Radar 226 may utilize radio signals to sense objects within the surrounding environment of intelligent vehicle 001 .
  • radar 226 may be used to sense the speed and/or heading of objects.
  • the radar 226 may be used to detect obstacles in the forward direction of the intelligent vehicle 001 .
  • obstacles may be, for example, static or dynamic obstacles. The radar 226 can also be used to obtain the position information, speed information, moving direction information, etc. of obstacles, so as to assist the safe driving of the intelligent vehicle 001 and improve the perception of the surrounding environment.
  • the laser rangefinder 228 may utilize laser light to sense objects in the environment in which the intelligent vehicle 001 is located.
  • the laser rangefinder 228 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • the laser rangefinder 228 can assist the radar 226 to detect obstacles in the forward direction of the intelligent vehicle 001, obtain the distance information between the position of the obstacle and the intelligent vehicle 001, etc., to Assist the safe driving of the intelligent vehicle 001 and improve the visual effect of the surrounding environment.
  • Camera 230 may be used to capture multiple images of the surrounding environment of intelligent vehicle 001 .
  • Camera 230 may be a still camera or a video camera.
  • the camera 230 may also constitute a 360-degree around view monitor (AVM) system, which is used to monitor obstacles and road conditions around the smart vehicle.
  • the camera 230 may include a plurality of in-vehicle cameras of different or the same specifications, such as: a wide-angle camera, a telephoto camera, a fisheye camera, a standard camera, a zoom camera, and the like.
  • the camera 230 can also be divided into a monocular camera and a multi-eye camera, which are respectively applied to different driving scenarios.
  • the camera 230 may include a fisheye camera, a wide-angle camera, and other types of monocular cameras, which are used to photograph the surrounding environment in the forward direction of the intelligent vehicle. The speed information of the vehicle is then acquired to determine the scale of the depth estimation of the camera 230, from which the pose relationship between the multiple frames of driving images is determined. Alternatively, the scale of the depth estimation of the camera 230 can be determined from the driving images captured by multi-eye (e.g., binocular) cameras. Finally, according to the pose relationship between the multiple frames of driving images, the image corresponding to the blind spot of the vehicle is stitched and generated, so as to assist the driver to better perform driving, parking, meeting and other driving operations.
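  • The scale recovery mentioned above can be sketched roughly as follows: monocular visual odometry only yields translation up to an unknown scale, and the vehicle speed signal fixes that scale. All names below are illustrative assumptions, not the patented method:

```python
import numpy as np

def scale_monocular_pose(
    R: np.ndarray, t_unit: np.ndarray, speed_mps: float, dt_s: float
) -> np.ndarray:
    """Build a metric 4x4 pose from an up-to-scale monocular estimate.

    Monocular visual odometry recovers a rotation R and a unit-norm
    translation direction t_unit between two frames; the wheel-speed
    signal supplies the missing scale: |t| ~= speed * dt.
    """
    t = (speed_mps * dt_s) * (t_unit / np.linalg.norm(t_unit))
    T = np.eye(4)
    T[:3, :3] = R   # inter-frame rotation
    T[:3, 3] = t    # inter-frame translation, now in metres
    return T

# Example: pure forward motion at 5 m/s over 0.1 s -> 0.5 m translation.
T = scale_monocular_pose(np.eye(3), np.array([0.0, 0.0, 1.0]), 5.0, 0.1)
```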
  • Control system 206 controls the operation of the intelligent vehicle 001 and its components.
  • Control system 206 may include various elements including steering system 232 , throttle 234 , braking unit 236 , sensor fusion algorithms 238 , computer vision system 240 , route control system 242 , and obstacle avoidance system 244 .
  • Steering system 232 is operable to adjust the heading of intelligent vehicle 001 .
  • it may be a steering wheel system.
  • the throttle 234 is used to control the operating speed of the engine 218 and thus the speed of the intelligent vehicle 001 .
  • the braking unit 236 is used to control the deceleration of the intelligent vehicle 001 .
  • the braking unit 236 may use friction to slow the wheels 221 .
  • the braking unit 236 may convert the kinetic energy of the wheels 221 into electrical current.
  • the braking unit 236 may also take other forms to slow down the wheels 221 to control the speed of the smart vehicle 001.
  • Computer vision system 240 is operable to process and analyze images captured by camera 230 in order to identify objects and/or features in the environment surrounding intelligent vehicle 001 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 240 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, computer vision system 240 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the computer vision system 240 can convert the images captured by the multiple vehicle-mounted cameras into images in the world coordinate system based on the camera parameters of the multiple vehicle-mounted cameras on the intelligent vehicle, and obtain the corresponding driving of the intelligent vehicle in the world coordinate system. image.
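  • A minimal sketch of such a pixel-to-world conversion, assuming an undistorted pinhole model with intrinsic matrix K and extrinsics (R, t) mapping world points into the camera frame (all names and conventions here are illustrative assumptions):

```python
import numpy as np

def pixel_to_ground(
    u: float, v: float, K: np.ndarray, R: np.ndarray, t: np.ndarray
) -> np.ndarray:
    """Back-project an undistorted pixel onto the ground plane z = 0.

    K is the 3x3 intrinsic matrix; (R, t) satisfy x_cam = R @ x_world + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                           # rotate ray into world frame
    origin = -R.T @ t                                   # camera centre in world frame
    s = -origin[2] / ray_world[2]                       # intersect plane z = 0
    return origin + s * ray_world

# Example: camera 2 m above the origin looking straight down; the
# principal-point pixel maps back to the world origin on the ground.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 2.0])
p = pixel_to_ground(320.0, 240.0, K, R, t)
```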
  • the computer vision system 240 may also obtain the filling pixels of the blind spot image according to different strategies depending on the distance between the obstacle and the intelligent vehicle 001, so as to obtain a blind spot image of the vehicle's blind spot area corresponding to the current frame image.
  • the blind spot image acquisition method can effectively avoid problems such as obstruction caused by frame delay, and can also effectively avoid the problems of image dislocation and brightness inconsistency caused by multi-image stitching, and improve the display effect of the blind spot under the vehicle.
  • the route control system 242 is used to determine the travel route of the intelligent vehicle 001 .
  • route control system 242 may combine data from sensors 238, GPS 222, and one or more predetermined maps to determine a driving route for intelligent vehicle 001.
  • Obstacle avoidance system 244 is used to identify, evaluate and avoid or otherwise overcome potential obstacles in the environment of intelligent vehicle 001 .
  • control system 206 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 208 may include a wireless communication system 246 , an onboard computer 248 , a microphone 250 and/or a speaker 252 .
  • peripherals 208 provide a means for a user of intelligent vehicle 001 to interact with user interface 216 .
  • the onboard computer 248 may provide information to the user of the smart vehicle 001 .
  • User interface 216 may also operate on-board computer 248 to receive user input.
  • the onboard computer 248 can be operated via a touch screen.
  • peripheral device 208 may provide a means for intelligent vehicle 001 to communicate with other devices located within the vehicle.
  • Microphone 250 may receive audio (eg, voice commands or other audio input) from the user of intelligent vehicle 001 .
  • the microphone 250 can also collect the noise of various devices in the smart vehicle 001 when working.
  • the speaker 252 can output various desired sound wave signals to the smart vehicle 001 .
  • speaker 252 may be an electro-acoustic transducer that converts electrical signals into acoustic signals.
  • Wireless communication system 246 may communicate wirelessly with one or more devices, either directly or via a communication network.
  • wireless communication system 246 may use 3G cellular communications, such as code division multiple access (CDMA), evolution-data optimized (EVDO), or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communications such as long term evolution (LTE); or 5G cellular communications.
  • the wireless communication system 246 may communicate with a wireless local area network (WLAN) using WiFi.
  • the wireless communication system 246 may communicate directly with the device using an infrared link, Bluetooth, or Zig Bee.
  • wireless communication system 246 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
  • the noise signal collected by the microphone 250 may be sent to the processor 213 through a wireless communication system.
  • Power supply 210 may provide power to various components of intelligent vehicle 001 .
  • the power source 210 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the intelligent vehicle 001 .
  • power source 210 and energy source 219 may be implemented together, such as in some all-electric vehicles.
  • Computer system 212 may include at least one processor 213 that executes instructions 215 stored in a non-transitory computer-readable medium such as memory 214 .
  • Computer system 212 may also be multiple computing devices that control individual components or subsystems of intelligent vehicle 001 in a distributed fashion.
  • the processor 213 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer in the same block, one of ordinary skill in the art will understand that the processor or memory may actually comprise multiple processors or memories that may or may not be stored within the same physical housing.
  • the memory may be a hard drive or other storage medium located within an enclosure other than a computer.
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components may each have their own processor that only performs computations related to component-specific functions .
  • the processor 213 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • the processor 213 is configured to: determine the pose relationship between the current frame of driving images and one or more frames of historical driving images; determine the target distance between the obstacle and the target vehicle; and, according to the pose relationship and the blind spot position information of the blind spot area corresponding to the current frame driving image, obtain the filling pixel of each of the multiple target points in the blind spot area from the multi-frame historical driving images. When the target distance is less than or equal to a preset distance threshold, the filling pixel is the pixel corresponding to the target point whose shooting time is the closest among the multi-frame historical driving images. When the target distance is greater than the preset distance threshold, and both the x-th frame and the (x+1)-th frame of historical driving images include pixels corresponding to the same target point, the filling pixel for that target point is taken from whichever of the two frames contains pixels corresponding to more of the target points. The blind area image is then output.
  • the filling pixel point acquisition strategy, and the specific calculation method for the processor 213 to determine the pose relationship between the current frame of driving images and one or more frames of historical driving images please refer to the following system and method embodiments. The relevant descriptions are not repeated here.
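  • As a rough illustration of the two-branch filling strategy described above (the data layout, names, and threshold below are hypothetical, not the patented implementation):

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[int, int]
Rgb = Tuple[int, int, int]
# Per historical frame: (timestamp, {target point -> pixel value covered}).
Frame = Tuple[float, Dict[Point, Rgb]]

def select_fill_pixel(
    target: Point,
    frames: List[Frame],
    obstacle_dist_m: float,
    dist_threshold_m: float,
) -> Optional[Rgb]:
    """Choose a fill pixel for one blind-area target point.

    Near an obstacle (distance <= threshold): take the pixel from the
    most recently captured frame covering the point, minimising staleness.
    Far from obstacles: take the pixel from the covering frame that
    covers the MOST target points, so a single frame fills large regions
    and stitching seams / brightness jumps between frames are reduced.
    """
    covering = [f for f in frames if target in f[1]]
    if not covering:
        return None
    if obstacle_dist_m <= dist_threshold_m:
        chosen = max(covering, key=lambda f: f[0])       # newest frame
    else:
        chosen = max(covering, key=lambda f: len(f[1]))  # widest coverage
    return chosen[1][target]
```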
  • memory 214 may include instructions 215 (eg, program logic) executable by processor 213 to perform various functions of intelligent vehicle 001 , including those described above.
  • the memory 214 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or controlling one or more of the propulsion system 202 , the sensor system 204 , the control system 206 , and the peripherals 208 . instruction.
  • the memory 214 may also store data in this embodiment of the present application, such as: multi-frame driving images captured by multiple on-board cameras in the smart vehicle, camera parameters of each on-board camera in the smart vehicle, blind spot position information and blind spot shape information of the smart vehicle, and other such vehicle data.
  • Such information may be used by intelligent vehicle 001 and/or computer system 212 during operation of intelligent vehicle 001 in vehicle blind spot image acquisition.
  • User interface 216 for providing information to or receiving information from a user of intelligent vehicle 001 .
  • user interface 216 may include one or more input/output devices within the set of peripheral devices 208 , such as wireless communication system 246 , onboard computer 248 , microphone 250 and speaker 252 .
  • Computer system 212 may control functions of intelligent vehicle 001 based on input received from various subsystems (eg, travel system 202 , sensor system 204 , and control system 206 ) and from user interface 216 .
  • computer system 212 may utilize input from control system 206 in order to control steering unit 232 to avoid obstacles detected by sensor system 204 and obstacle avoidance system 244 .
  • computer system 212 is operable to provide control over many aspects of intelligent vehicle 001 and its subsystems.
  • one or more of these components described above may be installed or associated with the intelligent vehicle 001 separately.
  • memory 214 may exist partially or completely separate from intelligent vehicle 001 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation on the embodiments of the present application.
  • An autonomous vehicle traveling on the road such as the above intelligent vehicle 001, can identify the distance between obstacles in the forward direction of the intelligent vehicle and the intelligent vehicle, and the distance can determine the selection of the current blind spot image acquisition strategy.
  • the intelligent vehicle 001 or a computing device associated with the intelligent vehicle 001 may be based on the characteristics of the identified objects and the state of the surrounding environment (eg, static or dynamic objects in a parking lot, etc.) to predict the behavior of the identified objects.
  • each identified object is dependent on the behavior of the other, so it is also possible to predict the behavior of a single identified object by considering all identified objects together.
  • the intelligent vehicle 001 is able to adjust its speed based on the predicted behavior of the identified object.
  • the self-driving car can determine what steady state the vehicle will need to adjust to (eg, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the intelligent vehicle 001, such as the lateral position of the intelligent vehicle 001 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and the like.
  • the computing device may also provide instructions to modify the steering angle of the intelligent vehicle 001 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the self-driving car (for example, cars in adjacent lanes on the road).
  • the above-mentioned intelligent vehicle 001 can be any of various vehicles with on-board cameras, such as cars, trucks, motorcycles, buses, recreational vehicles, playground vehicles, construction devices, trams, golf carts, trains, and trolleys; the embodiments of the present application impose no special restrictions on this.
  • FIG. 1 is only an exemplary implementation in the embodiments of the present application, and the smart vehicles in the embodiments of the present application include but are not limited to the above structures.
  • FIG. 2 is a schematic structural diagram of a computing device in an intelligent vehicle provided by an embodiment of the present application.
  • the processor 203 is coupled to the system bus 205 .
  • the processor 203 may be one or more processors, wherein each processor may include one or more processor cores, which is equivalent to the processor 213 shown in FIG. 1 above.
  • the memory 235 can store relevant data information.
  • the memory 235 is coupled to the system bus 205 and is equivalent to the memory 214 shown in FIG. 1 above.
  • a video adapter 207 which can drive a display 209, is coupled to the system bus 205.
  • the system bus 205 is coupled to an input/output (I/O) bus 213 through a bus bridge 201 .
  • I/O interface 215 is coupled to the I/O bus.
  • I/O interface 215 communicates with various I/O devices, such as an input device 217 (e.g., keyboard, mouse, touch screen, etc.) and a media tray 221 (e.g., compact disc read-only memory (CD-ROM), multimedia interface, etc.).
  • A transceiver 223 (which can transmit and/or receive radio communication signals) and a camera 255 (which can capture static and dynamic digital video images) are also coupled to the I/O interface 215 . The interface connected to the I/O interface 215 may be a universal serial bus (USB) interface.
  • the processor 203 may be any conventional processor, including a reduced instruction set computing (reduced instruction set computing, RISC) processor, a complex instruction set computing (complex instruction set computer, CISC) processor or a combination of the above.
  • the processor may be a special purpose device such as an application specific integrated circuit ASIC.
  • the processor 203 may be a neural network processor or a combination of a neural network processor and the above-mentioned conventional processors.
  • the processor 203 can determine the pose relationship between the current frame of driving images and one or more frames of historical driving images; determine the target distance between the obstacle and the target vehicle; and, according to the pose relationship and the blind spot position information of the blind spot area corresponding to the current frame driving image, obtain the filling pixel of each of the multiple target points in the blind spot area from the multi-frame historical driving images. When the target distance is less than or equal to a preset distance threshold, the filling pixel is the pixel corresponding to the target point whose shooting time is the closest among the multi-frame historical driving images. When the target distance is greater than the preset distance threshold, and both the x-th frame and the (x+1)-th frame of historical driving images include pixels corresponding to the same target point, the filling pixel for that target point is taken from whichever of the two frames contains pixels corresponding to more of the target points. The blind spot image is then output.
  • Network interface 229 is a hardware network interface, such as a network card.
  • the network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 227 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • the transceiver 223 (which can transmit and/or receive radio communication signals) can transmit and/or receive radio communication signals through, but not limited to, second generation (2G), third generation (3G), fourth generation (4G), or fifth generation (5G) mobile communication networks and other wireless communication methods, and can also use technologies such as dedicated short range communications (DSRC) or long term evolution-vehicle (LTE-V). Its main function is to receive information data sent by external devices, and to send information data back to external devices for storage and analysis while the vehicle is driving on the target road section.
  • the hard drive interface 231 is coupled to the system bus 205 .
  • the hard drive interface 231 is connected to the hard disk drive 233 .
  • System memory 235 is coupled to system bus 205 . Data running in system memory 235 may include operating system OS 237 and application programs 243 of computer system 212 .
  • the operating system includes a shell 239 and a kernel 241.
  • the shell 239 is an interface between the user and the kernel of the operating system.
  • the shell is the outermost layer of the operating system.
  • the shell manages the interaction between the user and the operating system: it waits for user input, interprets the user input for the operating system, and processes the various outputs of the operating system.
  • Kernel 241 consists of those parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with hardware, the operating system kernel typically runs processes and provides inter-process communication, providing CPU time slice management, interrupts, memory management, I/O management, and more.
  • the application program 243 includes programs related to controlling the acquisition of blind spot images, for example, a program for managing vehicle cameras to acquire driving images, a program for calculating the pose relationship between multiple frames of driving images, and filtering out some or all of the driving images from the multi-frame driving images to A program to obtain a blind spot image of a vehicle's blind spot area, etc.
  • Application 243 also exists on the system of software deployment server 249 .
  • the computer system 212 may download the application program 243 from the software deployment server 249 when the relevant program 247 for blind spot image acquisition needs to be performed.
  • the application program 243 selects multiple frames of driving images to fill the images of the blind area of the current frame of the driving image, so as to avoid the phenomenon of incomplete filling of the blind area using a single driving image.
  • the filling pixel is selected according to the distance between the obstacle and the target vehicle; if the target distance is less than or equal to the preset distance threshold, the pixel with the closest shooting time is selected as the filling pixel, which effectively avoids problems such as occlusion caused by frame delay. Since the selected filling image is the driving image with the latest shooting time, the final obtained blind spot area has the best clarity.
  • if the target distance is greater than the preset distance threshold, the pixels in the historical driving image that contains more corresponding target points are selected as filling pixels, which effectively avoids the problems of image dislocation and brightness inconsistency caused by multi-image stitching, and improves the display effect of the blind spot under the vehicle.
  • the selection of filling pixels in two different cases is different, which greatly improves the probability of obtaining a complete, clear and accurate blind spot image, and ensures the driving safety of intelligent vehicles.
  • using a specific method to fill the blind spot can allow the driver to observe the position of the car in multiple directions, prevent tire wear and damage to the chassis, and also allow the driver to better observe the information around the car, the tires and the bottom of the car, assist parking, avoid vehicle-damage accidents to the greatest extent, and improve the driving experience and driving safety.
  • Sensor 253 is associated with computer system 212 .
  • Sensor 253 is used to detect the environment around computer system 212 .
  • the sensor 253 can detect animals, cars, obstacles, crosswalks, and the like. The sensor can further detect the environment around such objects, for example: other animals around an animal, weather conditions, ambient light levels, etc.
  • the sensor may be a camera, an infrared sensor, a chemical detector, or the like.
  • the structure of the blind spot image acquisition device in FIG. 2 is only an exemplary implementation in the embodiment of the present application, and the structure of the blind spot image acquisition device applied to the intelligent vehicle in the embodiment of the present application includes but is not limited to the above. structure.
  • FIG. 3 is a schematic diagram of the architecture of a blind spot image acquisition system provided by an embodiment of the present application.
  • the blind spot image acquisition system architecture includes a data loading module (equivalent to the sensor system 204 shown in Figure 1 above), an image processing module, a dynamic frame selection module, and may also include a splicing and optimization module and a display module.
  • the image processing module and the dynamic frame selection module are both equivalent to the computer vision system shown in FIG. 1 above.
• The data loading module is equivalent to the sensor system 204 shown in FIG. 1 above. It can be used to obtain the distance information between the obstacle and the smart terminal; it can also be used to obtain the current frame of driving images and one or more frames of historical driving images; it can also be used to obtain the speed information of the smart terminal, etc. For example, it can be responsible for obtaining the image data of the multi-channel fisheye cameras of the surround view system on the terminal at the same moment, obtaining the speed data from the vehicle bus, and, when other sensors that provide terminal status or speed information are available, obtaining their data as well, such as from a combined positioning system or an installed binocular camera. It can be understood that when the terminal device is a vehicle, the functions of the relevant components in the sensor module can be found in the relevant description of the sensor system 204 in the intelligent vehicle architecture shown in FIG. 1 above.
  • the image processing module is equivalent to the above-mentioned computer vision system shown in FIG. 1 or the above-mentioned computer system 212 shown in FIG. 1 .
• The image processing module can be used to determine the pose relationship between the current frame of driving images and one or more frames of historical driving images. For example, it is responsible for calculating the relative pose relationship of multi-frame front-view or rear-view fisheye images (the view direction can be determined by the forward direction of the smart terminal) or of images from an installed binocular camera.
  • the dynamic frame selection module is equivalent to the computer vision system shown in FIG. 1 above or the computer system 212 shown in FIG. 1 above.
• The dynamic frame selection module can be used to filter out, from one or more frames of historical driving images, the driving images that can fill the blind area image; for the specific screening method, refer to the relevant description in the following method embodiments, which is not detailed here. For example, it is responsible for taking the multi-frame images with known relative pose relationships and the current blind spot position, first arranging the multi-frame images in chronological order, and then, through fast edge judgment on the sorted frames, obtaining the dynamic splicing range of the current blind spot, so as to filter out the target frame images for stitching the blind area image.
  • the splicing and optimization module is used for splicing, optimizing and outputting the blind spot image corresponding to the current frame driving image of the intelligent terminal according to the filtered target frame image. For example, it is responsible for calculating the splicing range of different frame images corresponding to the moment, completing the generation and splicing of blind area images, and normalizing and adjusting the brightness of multiple frames of images to generate a complete blind area image with relatively uniform brightness.
  • the display module can be responsible for displaying the image of the blind area at the bottom of the smart terminal.
  • FIG. 4 is a schematic flowchart of a method for obtaining a blind spot image provided by an embodiment of the present application.
• The blind spot image obtaining method can be applied to the above-mentioned intelligent vehicle in FIG. 1, wherein the intelligent vehicle 001 can be used to support and execute the method flow steps S301-S306 shown in FIG. 4, which are described below with reference to FIG. 4.
  • the method may include the following steps S301-S306.
  • Step S301 Acquire the current frame of driving images and multiple frames of historical driving images.
• The blind spot image acquiring device acquires a current frame of driving images and multiple frames of historical driving images, wherein the current frame of driving images is a driving image captured at the current time, and the multiple frames of historical driving images are driving images captured before the current time.
  • the driving image is the surrounding environment image in the forward direction of the target terminal (such as a vehicle, etc.).
  • the multiple frames of historical driving images may also be referred to as historical frame driving images.
  • the multi-frame driving images can be obtained through the camera on the terminal.
• When the target terminal is an intelligent vehicle and the target vehicle is moving forward, the acquired front environment image is the driving image of the target vehicle; when the target vehicle is reversing, the acquired rear environment image is the driving image of the target vehicle.
• The bird's-eye view of the vehicle, when it can be acquired, may also serve as the driving image of the target vehicle.
  • the environment image refers to an image that includes the surrounding driving environment, such as the road surface on which the terminal is traveling, surrounding obstacles, and the like when the terminal is driving. It should also be noted that the relevant descriptions of the target vehicle, the smart vehicle, etc. mentioned in the embodiments of the present application are equivalent to the smart vehicle shown in FIG. 1 above.
  • the current frame of driving images and the multi-frame historical driving images are acquired through a monocular camera; or the current frame of driving images and the multi-frame historical driving images are acquired through a multi-camera camera.
  • the driving image of the target vehicle in the forward direction can be obtained through the driving recorder of the smart vehicle.
  • the driving image in the forward direction of the target vehicle can be obtained through the binocular camera of the smart vehicle.
• Such driving images can be obtained by both monocular cameras and multi-camera setups, which greatly relaxes the hardware requirements of the blind spot image acquisition method, reduces the difficulty of popularization, and improves driving safety. It should be noted that the camera can be installed facing the forward direction of the target terminal according to driving requirements.
  • Step S302 Determine the pose relationship between the current frame of driving images and one or more frames of historical driving images.
• The blind spot image acquisition device determines the pose relationship between the current frame of driving images and one or more frames of historical driving images, wherein the current frame of driving images is a driving image captured at the current time, and the multi-frame historical driving images are driving images captured before the current time.
  • the driving image is an image of the surrounding environment in the forward direction of the target terminal.
• The pose relationship refers to the pose relationship between the current frame of driving images and each frame of historical driving images in the multi-frame historical driving images; it may also refer to the pose relationship of the target terminal between the time the current frame is captured and the times the historical driving images were captured.
• Based on the pose relationship, the position of the blind spot corresponding to the target terminal at the time the current frame of driving images is captured can be determined in the previously captured historical driving images.
  • the pose relationship includes rotation information (eg, rotation angle) and translation information (eg, translation distance).
• Image feature detection is performed on the current frame of driving images and the multi-frame historical driving images, and image feature points between the current frame of driving images and the multi-frame historical driving images are acquired; an image feature point is a common viewpoint between the current frame of driving images and the multi-frame historical driving images. Based on the image feature points and the size value, the pose relationship between the current frame of driving images and the multi-frame historical driving images is determined.
  • the size value is used to indicate the size of the unit pixel length of the driving image during depth estimation.
  • the feature point may be a common viewpoint in the driving image, that is, the pixel point corresponding to the target object included in both the current frame image and the historical driving image.
  • the feature points generally have size invariance and rotation invariance.
• Target objects such as roadside signs, traffic lights, trees and roadblocks can generally be selected, and their corresponding pixel points chosen as the feature points of the image. It should also be noted that, in driving scenarios with sharp or large turns, if the pose relationship between the current frame driving image A and a historical driving image B whose shooting time is far from the current frame is to be determined, the pose relationship between A and an intermediate historical driving image C can be determined first, and then, through the pose relationship between C and B, the pose relationship between A and B is determined; the shooting time of the historical driving image C is between the current time and the time point when the historical driving image B was shot.
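For illustration, the chained determination described above can be sketched with 4×4 homogeneous transforms: the pose A→B is the composition of A→C and C→B. This is a minimal numpy sketch under that convention, not the patent's exact formulation; all function names are hypothetical.

```python
import numpy as np

def make_pose(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compose(T_ac, T_cb):
    """Chain relative poses: (A -> C) composed with (C -> B) gives A -> B."""
    return T_ac @ T_cb

def rot_z(angle):
    """Rotation about the camera's z-axis, e.g. for a turning vehicle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

Composing through the intermediate frame C keeps enough co-visible feature points in each image pair, which direct matching between A and B may lack during a sharp turn.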
  • FIG. 5 is a schematic flowchart of determining a pose relationship between multiple frames of driving images according to an embodiment of the present application.
• The first step, image feature detection: detect the features in the driving images, identifying the road features, environmental features, obstacle features, etc. included in the images, so as to facilitate selecting feature points from these features.
• The second step, image feature point matching: match the same feature points between different driving images to determine the pose relationship between the driving images.
• The third step, depth estimation: estimate the distance between objects and the vehicle camera, so as to determine, after shooting, the size in the world coordinate system corresponding to each unit pixel of the driving image in the camera coordinate system.
• The fourth step, matching pixel points and world points for pose estimation: match the pixel points in each driving image with points in the world coordinate system, determine the pose relationship between the same world point (in the world coordinate system) captured at two different shooting times, and then determine the pose relationship between the corresponding pixel points.
• The fifth step, reprojection error optimization: compare the pose relationships between multiple frames of driving images, optimize the result to reduce error, and finally output the pose relationship.
  • the pose relationship between the multiple frames of traveling images is determined by the feature points between the multiple frames of traveling images, which can improve the accuracy of determining the pose relationship, thereby improving the acquisition efficiency of blind spot images.
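The fourth and fifth steps above can be sketched as follows: once depth estimation turns matched feature points into 3-D points in the camera frames of two shooting times, the rigid pose aligning them has a closed-form least-squares solution (the Kabsch algorithm), and the alignment residual plays the role of the reprojection error to be minimized. This is an illustrative numpy sketch, not the patent's exact pipeline.

```python
import numpy as np

def estimate_pose(points_prev, points_curr):
    """Estimate rotation R and translation t with points_curr ~= R @ points_prev + t.

    Both inputs are (N, 3) arrays of the same world points expressed in the
    camera frames of two different shooting times (after depth estimation).
    Uses the Kabsch algorithm (closed-form least-squares rigid alignment).
    """
    centroid_prev = points_prev.mean(axis=0)
    centroid_curr = points_curr.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (points_prev - centroid_prev).T @ (points_curr - centroid_curr)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = centroid_curr - R @ centroid_prev
    return R, t

def reprojection_error(points_prev, points_curr, R, t):
    """Mean distance between transformed previous points and current points."""
    residual = (R @ points_prev.T).T + t - points_curr
    return float(np.linalg.norm(residual, axis=1).mean())
```

In a full pipeline this residual would be minimized jointly over many frames (bundle adjustment); the closed-form solution above serves as the per-pair initialization.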
• The pre-calibrated camera external parameters and the camera internal parameters given by the camera module manufacturer can also be used to perform distortion correction on the driving images of the surround view system on the terminal, obtain the distortion-corrected images, and crop their central area to obtain the part with better image quality.
  • the camera internal parameters include the relationship between the coordinate system of the image captured by the camera and the camera coordinate system
  • the camera external parameters include the relationship between the camera coordinate system and the world coordinate system.
• Image feature detection is performed on the image sequence after distortion correction and cropping (the image sequence includes the current frame of driving images and one or more frames of historical driving images), the detected image features are matched across the image sequence, and the correspondence of the same feature points across the series of driving images is obtained through matching.
  • the relative pose relationship between image sequences is obtained, and the camera projection model is used for optimization.
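As a concrete illustration of the camera projection model mentioned above, a minimal pinhole projection is sketched below. The intrinsic values are illustrative assumptions; real surround-view fisheye cameras additionally require a distortion model, which this sketch omits.

```python
import numpy as np

def project(world_point, K, R, t):
    """Project a 3-D world point to pixel coordinates with a pinhole model.

    The extrinsic parameters (R, t) take world coordinates to camera
    coordinates; the intrinsic matrix K takes camera coordinates to the
    image plane.
    """
    p_cam = R @ np.asarray(world_point, dtype=float) + t   # world -> camera
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    uvw = K @ p_cam                                        # camera -> image
    return uvw[:2] / uvw[2]                                # perspective divide

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

Optimizing the estimated poses amounts to minimizing the distance between such projections and the actually observed pixel locations of the matched feature points.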
  • the scale of the pose in the pose relationship is initialized by the speed signal provided by the vehicle CAN bus. This method of determining the pose relationship by using multiple frames of driving images will be more accurate, and the accuracy of determining the position of the blind spot in the driving images will be improved.
• The method further includes: acquiring speed information of the target terminal; acquiring distance information between adjacent frames of driving images based on the speed information; and determining, based on the distance information, a size value for the depth estimation of the monocular camera, where the size value is used to indicate the size of the unit length during depth estimation. Determining the pose relationship between the current frame of driving images and one or more frames of historical driving images then includes: determining, based on the size value, the pose relationship between the current frame of driving images and the one or more frames of historical driving images.
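The speed-based scale initialization can be sketched as follows: a monocular inter-frame translation is known only up to scale, and the metres travelled between the two shooting times (CAN-bus speed times the frame interval) fix that scale. All names and units below are illustrative assumptions.

```python
import numpy as np

def metric_scale(speed_mps, frame_interval_s, t_unit):
    """Recover the metric scale of an up-to-scale monocular translation.

    speed_mps:        vehicle speed from the CAN bus, in m/s (assumed
                      constant over the short frame interval)
    frame_interval_s: time between the two shooting times, in seconds
    t_unit:           inter-frame translation from monocular vision,
                      known only up to scale
    Returns the scale (metres per unit) and the scaled translation.
    """
    travelled = speed_mps * frame_interval_s        # metres between frames
    scale = travelled / np.linalg.norm(t_unit)
    return scale, scale * np.asarray(t_unit, dtype=float)
```
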
  • Step S303 Determine the target distance between the obstacle and the target terminal.
  • the blind spot image acquisition device determines the target distance between the obstacle and the target terminal.
  • the obstacle may be an obstacle in the forward direction of the target terminal.
  • FIG. 6 is a schematic diagram of a driving scene of a vehicle according to an embodiment of the present application.
  • the target vehicle A and the target vehicle B are driving on the road, wherein the target vehicle A is driving in front of the target vehicle B, and there are trees on the left and right sides of the road.
• For the target vehicle A, there is no obstacle (static or dynamic) in its forward direction, which can be understood as the target distance between the obstacle and the target vehicle taking the maximum value and being greater than the preset threshold.
• For the target vehicle B, the target distance between the obstacle and the target vehicle B is the distance between the target vehicle A and the target vehicle B, and the target vehicle A is the (dynamic) obstacle in the forward direction of the target vehicle B.
• The execution order of step S303, step S301 and step S302 is not specifically limited in this embodiment of the present application.
  • the target distance between the obstacle and the target terminal may be determined first, and then the pose relationship between the multiple frames of driving images may be determined.
• Step S304: when the target distance is less than or equal to the preset distance threshold, obtain, from the multi-frame historical driving images, the padding pixel of each of the multiple target points in the blind area according to the pose relationship and the blind area position information of the blind area corresponding to the current frame of driving images.
• When it is determined that the target distance between the obstacle and the target terminal is less than or equal to the preset distance threshold, the blind spot image acquisition device obtains, from the multi-frame historical driving images, the filling pixel points of each target point in the multiple target points in the blind area according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images.
  • the filling pixel point is the pixel point corresponding to the target point in the multi-frame historical driving image with the nearest shooting time.
  • the blind spot area includes a plurality of target points, and for each target point in the plurality of target points in the blind spot area, the pixel point with the closest shooting time is selected from the multi-frame historical driving images as the filling of the target point. pixel.
  • FIG. 7 is a schematic diagram of a target point and a filling pixel point provided by an embodiment of the present application.
  • the blind spot area includes multiple target points, and each target point can correspond to a certain pixel point in the driving image.
  • the blind area image can be obtained by outputting the filling pixels corresponding to all the target points.
• The blind spot image acquisition device selects, from the multi-frame historical driving images, the pixel point with the closest shooting time corresponding to each target point as the filling pixel point of the blind area, so as to obtain the blind area image of the blind area.
  • the pixel point with the closest shooting time is selected from the multiple frames of historical driving images as the filling pixel point of the target point. That is, it can be understood that when multiple frames of historical driving images all include pixel values of the same pixel point, a frame of historical driving image whose shooting time is closest to the current time is selected to provide the pixel value of the pixel point.
• The pixels with the closest shooting time are selected to fill the target points in the blind area, which can effectively avoid occlusion problems caused by frame delay; the final blind area image is relatively clear, which improves the display effect of the blind area at the bottom of the terminal.
• The size of the preset distance threshold can be determined based on the size of the blind area of the target terminal. When the distance between the obstacle and the target terminal exceeds the size of the blind area, the probability of the obstacle blocking the blind area is small; when the distance between the obstacle and the target terminal is less than the size of the blind area, the probability of obstruction of the blind area is relatively high. Therefore, in the actual driving process, the size of the preset distance threshold can be determined based on the blind area size of the target terminal.
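For illustration, the branch between steps S304 and S305 can be sketched as a simple dispatch; the function name and the assumption that the threshold equals the blind-area size are hypothetical.

```python
def choose_fill_strategy(target_distance_m, blind_area_size_m):
    """Choose how filling pixels are selected, per steps S304/S305.

    Illustrative assumption: the preset distance threshold equals the
    blind-area size. A close obstacle is likely to occlude the cameras
    soon, so the freshest pixels are preferred (S304); otherwise, frames
    covering a larger area are preferred to minimise stitching (S305).
    """
    threshold = blind_area_size_m
    if target_distance_m <= threshold:
        return "nearest_shooting_time"   # step S304
    return "larger_coverage"             # step S305
```
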
• In a possible implementation, the multiple frames of historical driving images are m frames of historical driving images. When the target distance is less than or equal to the preset distance threshold, obtaining, from the multi-frame historical driving images, the filling pixel points of each target point in the multiple target points in the blind area according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images includes: when the target distance is less than or equal to the preset distance threshold, sorting the m frames of historical driving images according to shooting time, where m is an integer greater than 1; determining, from the sorted multi-frame historical driving images according to the pose relationship and the blind spot position information, the frame of driving images that includes pixel points corresponding to the target points and whose shooting time is the most recent; and acquiring the set of target pixel points in that driving image and filling it into the blind area, where the set of target pixel points includes the filling pixel points corresponding to the target points in the xth frame of historical driving images. It should be noted that, in the embodiments of the present application, the pixel value of the filling pixel is filled at the target point to obtain an image of the blind area.
• The method further includes: when the set of target pixels is a set of filling pixels corresponding to only some of the target points, sequentially acquiring sets of target pixels from the x+1th frame of historical driving images onward as the sets of filling pixels corresponding to the remaining target points, until the multiple target points in the blind area are all filled.
• When the target pixel set in the above driving image cannot completely fill the blind area corresponding to the current frame of driving images, that is, there is an unfilled area in the blind area, it is necessary to obtain the currently unfilled area in the blind area and update the position information corresponding to the unfilled area as the blind spot position information; according to the pose relationship and the updated blind spot position information, a set of target pixel points is determined from the x+1th frame of historical driving images to fill the unfilled area in the blind area; the historical driving images after the x+1th frame are then traversed in turn, and sets of target pixel points are obtained until the blind area is filled and the blind spot image is obtained.
  • the historical frames are sorted in chronological order, and both the xth frame and the x+1th frame of the historical frame can provide partial blind area areas, among which there are overlapping areas and separate coverage areas.
  • the area (target pixel set) that can be provided by the current historical frame is calculated.
  • the image with the latest shooting time is selected to fill the blind area, which can effectively avoid problems such as obstruction caused by frame delay, and the final image quality of the blind area is relatively clear, which improves the display of the blind area under the vehicle. Effect.
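For illustration, the traversal described above can be sketched as follows, representing each historical frame as a dict that maps a blind-area target point to the pixel value the frame can provide (a hypothetical stand-in for reprojection through the pose relationship), with frames sorted newest first.

```python
def fill_nearest(blind_points, frames_newest_first):
    """Fill each blind-area target point from the most recent frame covering it.

    blind_points: iterable of target points in the blind area.
    frames_newest_first: historical frames sorted by shooting time, newest
    first; each is a dict mapping target point -> pixel value.
    Returns (filled mapping, set of points no frame could provide).
    """
    filled = {}
    remaining = set(blind_points)
    for frame in frames_newest_first:
        if not remaining:
            break                      # older frames are discarded entirely
        for point in remaining & set(frame):
            filled[point] = frame[point]
        remaining -= set(frame)
    return filled, remaining
```

The loop stops as soon as every target point is filled, so older frames that could also cover the blind area (like the fourth frame in FIG. 8) are never used.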
  • FIG. 8 is a schematic diagram of filling a blind area provided by an embodiment of the present application.
• The first frame of historical driving images, the second frame, the third frame, and the fourth frame can each provide part or all of the blind spot image; that is, each can provide some or all of the filling pixel points corresponding to the target points in the blind area, such as the four parts A, B, C, and D, respectively.
  • the first to fourth frames of historical driving images are sorted in descending order of shooting time, that is, the time for shooting the first frame of historical driving images is later than that of the second frame of historical driving images.
• The first three frames of historical driving images can provide images of all blind spot areas, and their shooting times are closer to the current time; therefore, even if the fourth frame of historical driving images can provide all the pixels of the blind area, it needs to be discarded.
• Step S305: when the target distance is greater than the preset distance threshold, obtain, from the multi-frame historical driving images, the padding pixels of each target point of the multiple target points in the blind area according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images.
• When it is determined that the target distance between the obstacle and the target terminal is greater than the preset distance threshold, the blind spot image acquisition device obtains the filling pixel points of the blind area from the multi-frame historical driving images according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images; wherein, when the x+1th frame of historical driving images and the xth frame of historical driving images both include pixels of the same target point, the filling pixel corresponding to that target point is the corresponding pixel in whichever of the two frames of historical driving images contains the greater number of target points.
• When the number of target points corresponding to the x+1th frame of historical driving images is greater than the number of target points corresponding to the xth frame of historical driving images, and the x+1th frame of historical driving images includes the same target points as the xth frame, then, for the target points that differ between the x+1th frame and the xth frame of historical driving images, the filling pixels obtained from the x+1th frame and the xth frame according to the pose relationship and the blind area position information of the blind area corresponding to the current frame of driving images are filled into the blind area.
• The historical frames are sorted in chronological order, and both historical frames x and x+1 can provide part of the blind area, in which there are overlapping areas (the target point area corresponding to the second pixel point set) and their separate coverage areas (the target point area corresponding to the first pixel point set).
• The area that can be provided by the current historical frame is calculated from the left and right edge endpoints of the blind area.
• When the coverage of a subsequent historical frame includes part or all of the area of a previous historical frame, the area of the previous historical frame is modified to the range not covered by the subsequent historical frame.
• This continues until the bottom blind area of the terminal is filled in all areas. When the range provided by a subsequent frame is large enough, this solution may fill the entire terminal bottom blind area starting from a certain subsequent frame.
• The first pixel point set is the set of pixel points corresponding to the target points with the closest shooting time; the second pixel point set includes the pixel points in the x+1th frame of historical driving images that correspond to the same target points as in the xth frame of historical driving images.
• The blind spot image acquisition device selects, from every two adjacent frames among the multiple frames of historical driving images, the frame of driving images with more pixels corresponding to the target points to provide the filling pixels of the shared target points, so as to obtain the blind spot image of the blind area.
• When the multiple frames of historical driving images include filling pixels of the same target point, the frame of driving images that contains the larger number of filling pixels is selected to provide those pixel points.
• Selecting a larger-area image to fill the blind area can reduce the number of stitching operations on blind area images as much as possible, effectively avoid the problems of image misalignment and brightness inconsistency caused by multi-image stitching, and improve the display effect of the blind spot at the bottom of the vehicle.
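The adjacent-frame comparison can be sketched with the same illustrative representation (a dict per frame mapping blind-area target points to pixel values, frames sorted newest first): when a later-visited (older) frame covers more of the blind area than the previous frame's coverage, it also takes over the overlapping points, so fewer source images end up stitched together. This is a sketch under those assumptions, not the patent's exact procedure.

```python
def fill_prefer_larger(blind_points, frames_newest_first):
    """Fill the blind area, letting a larger-coverage frame win overlaps.

    frames_newest_first: dicts mapping target point -> pixel value,
    sorted by shooting time, newest first (a hypothetical stand-in for
    reprojection through the frame-to-frame pose relationship).
    """
    blind = set(blind_points)
    filled = {}
    remaining = set(blind)
    prev_cover = set()
    for frame in frames_newest_first:
        cover = set(frame) & blind
        take = cover & remaining           # points nothing has filled yet
        if len(cover) > len(prev_cover):
            take |= cover & prev_cover     # larger frame wins the overlap
        for point in take:
            filled[point] = frame[point]
        remaining -= cover
        prev_cover = cover
        if not remaining:
            break                          # later (older) frames discarded
    return filled
```

In the FIG. 9 walk-through this reproduces the described behaviour: area B displaces area A entirely, area C displaces the part of B it overlaps while B keeps its exclusive part, and the fourth frame is never used.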
  • FIG. 9 is another schematic diagram of filling a blind area provided by an embodiment of the present application.
• The first frame of historical driving images, the second frame, the third frame, and the fourth frame can each provide part or all of the blind spot image; that is, each can provide some or all of the filling pixel points corresponding to the target points in the blind area, such as the four parts A, B, C, and D, respectively.
  • the first to fourth frames of historical driving images are sorted in descending order of shooting time, that is, the time for shooting the first frame of historical driving images is later than that of the second frame of historical driving images.
  • a frame of historical driving images with a larger blind area is selected to provide filling pixels for the target point.
• After determining that the first frame of historical driving images covers only part of the blind area, the second frame of historical driving images after it is traversed; the pixels corresponding to area B in the second frame are confirmed to be filling pixels corresponding to target points of the blind area, and the position information and area information of area B are obtained at the same time. The position and area of area A and area B are compared, and it is determined that area B can provide a larger portion of the blind area; it is then further judged whether area A and area B can provide the pixel values of the same pixel points (that is, whether there is an overlapping area between area A and area B, or whether the second frame of historical driving images contains the second pixel point set).
• Area B can provide the same pixel values as area A; that is, the overlapping area between area B and area A is area A itself. Therefore, at this time, the image of the blind area is provided by the first pixel point set (filling pixels provided by area B alone) and the second pixel point set (filling pixels of area A) provided by the second frame of historical driving images. That is, the blind area provided by area B alone is filled, the blind area image provided by area A is discarded, and the blind area image provided by area B is filled into the blind area corresponding to the current frame of driving images.
  • the pixels corresponding to the image of the C area are the filling pixels corresponding to some target points in the blind area.
• The position information and area information of area C are obtained, and the positions and areas of area C and area B are compared; it is determined that the portion of the blind area that area C of the third frame of historical driving images can provide is larger. It is found that area C can provide the same pixel values as area B, while area C and area B also each cover separate parts of the blind area.
• The image of the blind area is provided by the first pixel point set of the second frame of historical driving images (the filling pixels that area B provides alone, compared with area C), the first pixel point set of the third frame of historical driving images (the filling pixels that area C provides alone, compared with area B), and the second pixel point set (the pixels that area C can provide in common with area B). That is, after comparing area C and area B, the part of the blind area provided by area B alone is retained, the part of area B that overlaps with area C is discarded, and the blind area image provided by the non-overlapping part of area B together with the blind area image provided by area C is filled into the blind area corresponding to the current frame of driving images.
• that is, the blind area image is obtained by stitching the remaining part of the B area provided by the second frame of historical driving images with the C area provided by the third frame of historical driving images.
• since the blind spot images in the first three frames of historical driving images can already fill all the blind spots, even if the fourth frame of historical driving images could provide images of all the blind spots, it is discarded.
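The A/B/C comparisons above amount to a simple selection rule that can be sketched in code: traverse historical frames from newest to oldest, give each shared target point to the frame that covers more of the blind zone (fewer stitching seams), and stop as soon as every target point is filled. The dictionary-based frame representation and the coverage tie-break below are illustrative assumptions, not the patent's exact data structures:

```python
def fill_blind_area(blind_points, frames):
    """Traverse historical frames newest-first; each blind-zone target point
    takes its pixel from the traversed frame that covers the most target
    points, and traversal stops once every point is filled."""
    filled = {}  # target point -> (pixel value, coverage of source frame)
    for frame in frames:  # frames are sorted newest first
        coverage = sum(p in blind_points for p in frame)
        for point, pixel in frame.items():
            if point not in blind_points:
                continue
            # a frame covering more target points wins shared points,
            # so the stitched result has fewer seams
            if point not in filled or coverage > filled[point][1]:
                filled[point] = (pixel, coverage)
        if len(filled) == len(blind_points):
            break  # later (older) frames are discarded, as in the text
    return {p: px for p, (px, _) in filled.items()}

# Toy run mirroring the A/B/C discussion: B fully covers A, so A is
# discarded; the B/C overlap goes to C because C covers more points.
blind = {(0, 0), (0, 1), (0, 2), (0, 3)}
frame_a = {(0, 0): "A"}
frame_b = {(0, 0): "B", (0, 1): "B"}
frame_c = {(0, 1): "C", (0, 2): "C", (0, 3): "C"}
result = fill_blind_area(blind, [frame_a, frame_b, frame_c])
```

The early `break` reflects the remark above: once the traversed frames fill every blind spot, any further frame is discarded even if it could cover the whole area.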
• the number of corresponding target points in the xth frame of historical driving images is equivalent to the number of pixels in the xth frame of historical driving images that correspond to target points in the blind area; it can also be understood as the size of the blind area covered by the xth frame of historical driving images. Similarly, the number of corresponding target points in the (x+1)th frame of historical driving images can be determined.
• the first pixel point set of the (x+1)th frame of historical driving images is the pixel point set corresponding to the target points whose shooting time is closest to the current time, i.e. the pixel point set obtained for the first time; the second pixel point set of the (x+1)th frame of historical driving images is the pixel point set corresponding to the same target points as in the xth frame of historical driving images, i.e. the pixel points the two frames have in common, which in turn corresponds to the same part of the blind area being covered by both the xth and the (x+1)th frames of historical driving images.
• when the number of corresponding target points in the (x+1)th frame of historical driving images is less than or equal to the number of corresponding target points in the xth frame of historical driving images, the pixel point set corresponding to the same target points in the xth frame of historical driving images is selected from the multiple frames of historical driving images as the set of filling pixel points for those target points.
• that is, the first pixel point set in the xth frame of historical driving images is used as the filling pixel points of the target points. This not only ensures the clarity of the final blind area image, but also reduces the number of stitching operations.
• the target points are divided into first-type target points and second-type target points; the target pixel point set is the set of all pixel points on the line segments whose endpoints are the filling pixel points corresponding to the first-type target points and the filling pixel points corresponding to the second-type target points in the multi-frame historical driving images, where the first-type target points and the second-type target points are located on different boundaries of the blind zone area, and within the blind area the first-type target points and the second-type target points correspond one-to-one and are distributed axisymmetrically.
  • the filling pixel points corresponding to the first type of target points and the second type of target points respectively may be determined first, so as to reduce the amount of calculation in practical applications.
• the shape and size of the blind zone are related to the shape and size of the target terminal.
• the first-type target points and the second-type target points can be symmetrically distributed, corresponding respectively to the left boundary and the right boundary of the blind area.
  • FIG. 10 is a schematic diagram of the distribution of various first-type target points and second-type target points provided by the embodiment of the present application.
  • the shape of the blind area is a rectangle
  • the first type target point and the second type target point may correspond to points on the left and right sides of the rectangle, respectively.
• when reconfirming the target pixel point set of a driving image, the target pixel point set is the set of all pixel points on the line segments whose endpoints are a first-type target point and the corresponding second-type target point. If only the left endpoint (first-type target point) exists and there is no corresponding right endpoint (second-type target point) (as shown in (1) in Figure 10), it can be considered that the driving image includes neither the target pixel point set nor the target pixel points. Therefore, it is determined that the driving image does not include a target pixel point set that can fill the blind area.
• when the shape of the blind area is a triangle, a line through the middle point of the triangle, with the forward direction of the terminal as the positive direction, divides the region so that the first-type target points and the second-type target points correspond respectively to the boundaries on its left and right sides.
  • the triangular blind area corresponding to the driving image includes a set of target pixels and a set of non-target pixels.
• since the non-target pixel point set corresponds to only the left endpoint (first-type target point) with no corresponding right endpoint (second-type target point), it can be considered that none of the pixels corresponding to the non-target pixel point set meet the requirements.
• the above obtains only the pixel point sets for which both a first-type target point and the corresponding second-type target point exist, so that the final stitched and filled blind spot image is composed of strip-shaped areas, reducing the difficulty of splicing irregularly shaped pixel point sets taken from the multiple frames of driving images covering the blind area.
• moreover, since only the pixel points on the line segment between a first-type target point and the corresponding second-type target point are obtained, the calculation amount in practical applications is greatly reduced, the efficiency of blind spot image acquisition is improved, and the delay is reduced.
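The endpoint-pairing rule above can be sketched as follows: a strip of filling pixels is kept only when both its first-type (left) endpoint and the axisymmetric second-type (right) endpoint project into the historical image; an unpaired endpoint contributes nothing, as in (1) of Figure 10. The coordinates and the visibility test are illustrative assumptions:

```python
def strip_pixel_sets(left_points, right_points, in_image):
    """Keep the pixel set of a left/right endpoint pair only if BOTH the
    first-type (left) and the matching second-type (right) target point
    are visible in the historical frame; unpaired endpoints are dropped."""
    strips = []
    for (lx, ly), (rx, ry) in zip(left_points, right_points):
        if in_image((lx, ly)) and in_image((rx, ry)):
            # the whole horizontal segment between the paired endpoints
            strips.append([(x, ly) for x in range(lx, rx + 1)])
    return strips

# Rectangular blind zone, rows y = 0..2, endpoints at x = 0 and x = 4;
# pretend the frame cannot see the right endpoint of row 2, so that row
# is skipped entirely (only its left endpoint exists, as in Fig. 10 (1)).
left = [(0, 0), (0, 1), (0, 2)]
right = [(4, 0), (4, 1), (4, 2)]
def visible(p):
    return p != (4, 2)
strips = strip_pixel_sets(left, right, visible)
```

Because every kept set is a full horizontal segment, the stitched result is built from rectangular strips, which is what makes the splicing step cheap.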
  • FIG. 11 is a schematic diagram of various blind area regions provided by an embodiment of the present application.
• when the shape of the blind area is a circle, an ellipse or an irregular shape, the forward direction of the target terminal is taken as the positive direction, and the blind area is bisected as nearly as possible into left and right parts of equal perimeter.
• one part of the boundary is the boundary corresponding to the first-type target points, and the other part of the boundary is the boundary corresponding to the second-type target points.
  • the division of the boundary is not specifically limited in this embodiment of the present application.
  • Step S306 Output the blind area image based on the filling pixel points corresponding to the multiple target points.
  • the blind spot image acquiring device outputs the blind spot image based on the filled pixel points corresponding to the multiple target points.
  • the blind spot image acquisition device can output the current frame driving image and the blind spot image respectively, and can also fill the blind spot image to the corresponding position of the current frame driving image, and output a driving image showing the blind spot area.
  • multiple frames of driving images are selected to fill the images of the blind area of the current frame of driving images, so as to avoid the phenomenon of incomplete filling of the blind area images using a single driving image.
• the filling pixel points are selected according to the target distance between the obstacle and the intelligent vehicle; if the target distance is less than or equal to the preset distance threshold, the pixel points with the closest shooting time are selected as the filling pixel points, which effectively limits the number of frames used, and since the selected filling images are the driving images with the latest shooting time, the final blind spot area has the best clarity.
• if the target distance is greater than the preset distance threshold, the pixels in the historical driving images covering more target points are selected as filling pixels, which effectively avoids the image dislocation and brightness inconsistency caused by multi-image stitching and improves the display effect of the blind spot under the vehicle.
• the filling strategies for these two different situations greatly improve the probability of obtaining a complete, clear and accurate image of the blind spot, and ensure the driving safety of the intelligent terminal.
• filling the blind spot in this specific way allows the driver to observe the position of the car in multiple directions, prevents tire wear and damage to the chassis, and also allows the user to better observe the information around the car, the tires and the bottom of the car, assisting parking, avoiding terminal loss accidents to the greatest extent, and improving driving experience and driving safety.
  • the blind spot image acquisition method provided by the present application can also be executed by an electronic device, a blind spot image acquisition device, and the like.
  • An electronic device refers to an electronic device that can be abstracted into a computer system and supports the function of processing images, and may also be referred to as an image processing device.
  • the blind spot image acquisition device may be the whole machine of the electronic device, or part of the electronic device, such as a chip that supports image processing functions and supports the function of blind spot image acquisition, such as a system chip or an image chip.
  • the system-on-a-chip is also called a system-on-chip, or a SoC chip.
  • the blind spot image acquisition device may be a related device such as an on-board computer in a smart vehicle, or a system chip or an image acquisition chip that can be installed in a computer system or an image processing system of an intelligent terminal.
  • the embodiments of the present application only exemplarily take the blind spot image obtaining device in the intelligent vehicle as an example to describe the blind spot image obtaining method.
  • the embodiments of the present application do not specifically limit the types of terminal devices.
  • the terminal device may also be a probe vehicle, an exploration robot, and the like.
• FIG. 12 is a schematic diagram of a target vehicle in an application scenario provided by an embodiment of the present application, which corresponds to the description of the related image processing method embodiment above with reference to FIG. 4.
  • FIG. 13 is a schematic flowchart of a method for acquiring a blind spot image in an application scenario provided by an embodiment of the present application.
  • the blind spot image acquisition process can implement the following steps:
  • Step 1 Input multiple frames of driving images and their pose relationship with the current frame of driving images.
  • Step 2 Determine the target distance between the obstacle and the target vehicle.
• Step 3 Starting from the current frame, traverse the images in reverse chronological order, and use the pose relationship between the images to calculate whether the left and right edges of the chassis exist in the image, taking the front of the current chassis as the front edge.
  • Step 4 Find the frame where the chassis area first appears, and traverse the left and right edges of the chassis to calculate the longest chassis size that the frame can provide.
• Step 5 When either edge is not in the frame, record the chassis area that exists in the frame, and judge whether it exceeds the chassis area; then continue to traverse the next frame in reverse chronological order with the current chassis position as the front edge.
  • Step 6 Output the chassis area corresponding to each image.
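Steps 3 to 6 above can be sketched as a reverse-order traversal that assigns each frame the chassis strip it can still contribute. The `Frame.visible_span` helper and the one-dimensional chassis coordinates are hypothetical simplifications of the edge-projection test described in the steps:

```python
CHASSIS_FRONT, CHASSIS_REAR = 0.0, 4.0  # illustrative chassis span in metres

class Frame:
    """Stand-in for one historical image: `reach` is the farthest chassis
    position (measured from the front edge) whose left and right chassis
    edges both still project inside this image."""
    def __init__(self, reach):
        self.reach = reach

    def visible_span(self, front):
        # the frame helps only if it can see past the current front edge
        if self.reach > front:
            return min(self.reach, CHASSIS_REAR)
        return None

def assign_chassis_regions(frames):
    """Steps 3-6 in miniature: traverse newest-first, let each useful frame
    contribute the longest chassis strip it can, and carry the covered
    position forward as the new front edge for the next (older) frame."""
    regions, front = [], CHASSIS_FRONT
    for i, frame in enumerate(frames):
        rear = frame.visible_span(front)
        if rear is None:
            continue  # frame adds nothing beyond what is already covered
        regions.append((i, (front, rear)))
        front = rear
        if front >= CHASSIS_REAR:
            break  # the whole under-vehicle area has been assigned
    return regions

# Newest-first frames: the second frame is skipped (it sees less than the
# first), and traversal stops once the rear of the chassis is reached.
regions = assign_chassis_regions([Frame(1.5), Frame(1.0), Frame(3.0), Frame(5.0)])
```

The returned list is the per-image chassis area of Step 6; each entry pairs a frame index with the longitudinal strip that frame supplies.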
  • the blind area under the vehicle is (x, y, w, h).
  • f x , f y , u 0 , and v 0 are the camera internal parameters, and the camera internal parameters refer to the mapping relationship between the camera coordinate system and the pixel coordinate system, which can be obtained by the camera manufacturer;
  • R 3 ⁇ 3 , T 3 ⁇ 1 are the camera external parameters, where the camera external parameters refer to the mapping relationship between the camera coordinate system and the world coordinate system, which is obtained by pre-calibration;
  • Rr 3 ⁇ 3 and Tr 3 ⁇ 1 are the rth frame historical driving image and its relationship with the current frame The pose relationship between the driving images;
• x blind , y blind are the edge positions of the left and right blind zones and of the upper blind zone in the driving images, obtained in advance from calibration and the vehicle body size; z ground is the ground height, which defaults to zero.
  • u, v represent the position of the pixel in the driving image corresponding to the target point in the blind area
• the image height and the image width represent the size of the driving image; a target point can be filled by a driving image when the computed u, v fall within the image, where the unit pixel length of the driving image is given in meters per pixel. That is, u and v must satisfy 0 ≤ u < image width and 0 ≤ v < image height.
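Putting the quantities above together, a blind-zone ground point is carried through the inter-frame pose (Rr, Tr) and the camera extrinsics (R, T), projected with the intrinsics, and kept only if the resulting u, v fall inside the image. The composition order shown below is an illustrative assumption; the patent fixes the ingredients rather than this exact chain:

```python
import numpy as np

def project_to_frame(p_world, K, R, T, Rr, Tr, width, height):
    """Project a blind-zone ground point (x_blind, y_blind, z_ground) into
    the r-th historical frame and report whether it lands inside the image.
    The order in which the extrinsics (R, T) and the inter-frame pose
    (Rr, Tr) are composed here is an illustrative assumption."""
    p = Rr @ p_world + Tr              # carry the point into frame r's world
    p_cam = R @ p + T                  # world -> camera via the extrinsics
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]   # u = f_x * X / Z + u_0
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]   # v = f_y * Y / Z + v_0
    inside = (0.0 <= u < width) and (0.0 <= v < height)
    return (u, v), inside

# Identity pose/extrinsics and a point 5 m straight ahead of the camera:
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
eye, zero = np.eye(3), np.zeros(3)
(u, v), ok = project_to_frame(np.array([0.0, 0.0, 5.0]),
                              K, eye, zero, eye, zero, 1280, 720)
```

A point straight ahead of an ideal camera projects to the principal point (u_0, v_0), so the `inside` test passes; a point far to the side would yield u beyond the image width and be rejected, which is exactly the per-frame eligibility check used in the traversal.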
  • Case 1 A frame of historical driving images can provide the filling pixels corresponding to the target points in all blind areas.
  • FIG. 14 is a schematic diagram of a blind spot image that can be provided by one frame of historical driving images provided by an embodiment of the present application.
  • the current image blind spots on the historical driving images are calculated in the traversal order.
  • the previous several historical driving images do not provide blind spots, and the first historical driving image with a blind spot image can provide all image blind spots.
  • the pixel positions u, v calculated by the previous several historical driving images do not satisfy the above formula.
• for the first such xth frame of historical driving images, not only are the pixel positions within the image range, but the unit length in meters per pixel also meets the requirements.
• therefore, the algorithm selects the xth frame of historical driving images to provide all of the blind spot images.
• this image is, among those that can provide the current blind area, the one closest to the current image, so its resolution is the best; moreover, since it is also closest in time to the current state, the probability that it is blocked by obstacles is the smallest.
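Case 1 can be sketched as a single scan: walk the historical frames newest-first and return the first one whose pixel set covers every blind-zone target point. Representing a frame as a dict of target point to pixel is an illustrative simplification:

```python
def first_full_coverage(frames, blind_points):
    """Case 1 in miniature: walk historical frames from newest to oldest
    and return the index of the first frame whose pixel set covers every
    blind-zone target point (each frame is a dict of target -> pixel)."""
    for i, frame in enumerate(frames):
        if all(p in frame for p in blind_points):
            return i  # closest-in-time full cover: best resolution
    return None  # no single frame suffices; fall back to stitching (case 2)

# The two newest frames cover nothing / part of the blind zone; the third
# is the first full cover and is therefore selected.
blind = {(0, 0), (0, 1)}
frames = [{}, {(0, 0): "p"}, {(0, 0): "q", (0, 1): "q"}]
x = first_full_coverage(frames, blind)
```

Returning the first full cover in reverse time order is what gives the selected frame both the best resolution and the lowest chance of obstacle occlusion, as noted above.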
  • FIG. 15 is a schematic diagram of a blind spot image that can be provided by a multi-frame historical driving image provided by an embodiment of the present application.
  • the blind spot of the current frame of the driving image on the historical driving image is calculated in the traversal order.
  • the previous several historical driving images do not provide blind spots.
• later frames of historical driving images not only include all the newly added blind areas, but also include part or even all of the blind areas existing in the first historical driving image that provides blind spot pixels.
  • the pixel positions u, v calculated by the previous several historical driving images do not satisfy the above formula.
  • the pixel position calculated from the initial position of the blind spot is within the image range.
  • the image no longer has a blind spot.
• for the (x+1)th frame of historical driving images, taken at the sampling time before the xth frame, the pixel positions calculated for the blind area positions not covered by the xth frame of historical driving images are within the image range.
• part or even all of the blind area range that the xth frame of historical driving images can provide also exists in the (x+1)th frame of historical driving images.
• the algorithm selects the maximum blind area range provided by the xth frame of historical driving images, while the (x+1)th frame of historical driving images provides the range from the outer edge of the xth frame's coverage to all the blind areas the xth frame does not cover.
• the blind spot image is thus composed of the xth frame of historical driving images and the (x+1)th frame of historical driving images, and the overall image resolution is the best.
  • FIG. 16 is a schematic structural diagram of an apparatus for obtaining a blind spot image provided by an embodiment of the present application.
• the apparatus 10 for obtaining a blind spot image may include a first determining unit 101, a second determining unit 102, a first obtaining unit 103 and an output unit 104, and may further include other units; the detailed description of each unit is as follows.
• the first determining unit 101 is configured to determine the pose relationship between the current frame of driving images and the multiple frames of historical driving images, where the current frame of driving images is the driving image captured at the current time, the multiple frames of historical driving images are driving images captured before the current time, and a driving image is an image of the surrounding environment in the forward direction of the target terminal;
  • a second determining unit 102 configured to determine the target distance between the obstacle and the target terminal
• the first obtaining unit 103 is configured to: when the target distance is less than or equal to a preset distance threshold, obtain, according to the pose relationship and the blind area position information of the blind area corresponding to the current frame of driving images, the filling pixel point of each of the multiple target points in the blind zone area from the multiple frames of historical driving images; wherein the filling pixel point is the pixel point with the nearest shooting time corresponding to the target point in the multi-frame historical driving images;
  • the output unit 104 is configured to output the blind area image based on the filling pixel points corresponding to the multiple target points.
  • the multiple frames of historical driving images are m frames of historical driving images
• the first acquiring unit 103 is specifically configured to: when the target distance is less than or equal to a preset distance threshold, sort the m frames of historical driving images according to shooting time, m being an integer greater than 1; and obtain, according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images, a target pixel point set from the xth frame of historical driving images as the set of filling pixel points corresponding to the multiple target points, where the target pixel point set includes the filling pixel points corresponding to the multiple target points, and the xth frame of historical driving images is the first frame, among the m frames of historical driving images traversed in reverse order of shooting time, in which the target pixel point set appears.
• the first obtaining unit 103 is further specifically configured to: when the target pixel point set is a set of filling pixel points corresponding to only some of the target points, sequentially obtain a target pixel point set from the (x+1)th frame of historical driving images onward as the set of filling pixel points corresponding to the remaining target points, until the multiple target points in the blind zone area are all filled.
• the target points are divided into first-type target points and second-type target points, and the target pixel point set is the set of all pixel points on the line segments whose endpoints are the filling pixel points corresponding to the first-type target points and the second-type target points in the xth frame of historical driving images.
• the first obtaining unit 103 is further configured to: when the target distance is greater than the preset distance threshold, obtain the filling pixel points of the blind zone area from the multiple frames of historical driving images according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images; wherein, when the (x+1)th frame of historical driving images includes pixel points of the same target point as the xth frame of historical driving images, the filling pixel point corresponding to that same target point is the pixel point corresponding to that target point in whichever of the (x+1)th frame and the xth frame of historical driving images corresponds to the larger number of target points.
• the apparatus further includes: a second acquiring unit 105, configured to acquire the current frame of driving images and the multiple frames of historical driving images through a multi-camera system before determining the pose relationship between the current frame of driving images and the one or more frames of historical driving images.
• the apparatus further includes: a third acquiring unit 106, configured to: acquire the current frame of driving images and the multiple frames of historical driving images through a monocular camera before determining the pose relationship between the current frame of driving images and the multi-frame historical driving images; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames of driving images; and determine, based on the distance information, the size value of the depth estimation of the monocular camera, where the size value is used to indicate the size of a unit length during depth estimation; the first determining unit 101 is specifically configured to: determine, based on the size value, the pose relationship between the current frame of driving images and the multi-frame historical driving images.
• the first determining unit 101 is specifically configured to: perform image feature detection on the current frame of driving images and the multiple frames of historical driving images to obtain the image feature points between the current frame of driving images and the multiple frames of historical driving images, the image feature points being the common viewpoints between the current frame of driving images and the multiple frames of historical driving images; and determine, based on the image feature points and the size value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
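The size value described above can be sketched numerically: a monocular pose is known only up to scale, and the distance actually travelled between adjacent frames (speed times frame interval) pins the metric scale of the depth estimate. A minimal sketch, where the division by the estimated unit translation norm is an illustrative convention rather than the patent's exact formula:

```python
def depth_scale(speed_mps, frame_interval_s, unit_translation_norm=1.0):
    """A monocular pose is known only up to scale; the distance travelled
    between adjacent frames (speed x frame interval) fixes the metric
    'size value' used for depth estimation."""
    travelled_m = speed_mps * frame_interval_s   # metres between frames
    return travelled_m / unit_translation_norm

# 10 m/s at 30 frames per second: one unit of the estimated translation
# corresponds to roughly a third of a metre.
scale = depth_scale(10.0, 1.0 / 30.0)
```

In practice the unit translation norm would come from the feature-based pose estimate between the two frames; here it is left at 1.0 for illustration.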
  • each unit corresponds to its own program code (or program instruction), and when the program code corresponding to each of these units runs on a relevant hardware device, the unit executes a corresponding process to realize a corresponding function.
  • the function of each unit can also be implemented by related hardware.
• the related functions of the first determining unit 101, the second determining unit 102, and the first obtaining unit 103 may be implemented by analog circuits or digital circuits, where the digital circuit may be a digital signal processor (DSP) or a field programmable gate array (FPGA); the related functions of the output unit 104 may be implemented by devices such as a graphics processing unit (GPU), or a CPU with a communication interface or a transceiver function.
  • each functional unit in the blind spot image acquisition device 10 described in the embodiments of the present application, reference may be made to the relevant descriptions of steps S301 to S306 in the embodiment of the blind spot image acquisition method described in FIG. 4 .
  • the first determining unit 101 may refer to steps S301 to S302 in the method embodiment described above in FIG. 4
• the second determining unit 102 may refer to step S303 in the method embodiment described above in FIG. 4,
• the first obtaining unit 103 may refer to steps S304-S305 in the method embodiment described above in FIG. 4,
  • the output unit 104 can refer to step S306 in the method embodiment described above in FIG. 4 , which is not repeated here.
  • FIG. 17 is a schematic structural diagram of another blind spot image acquisition apparatus provided by an embodiment of the present application.
  • the apparatus 20 includes at least one processor 201 , at least one memory 202 , and at least one communication interface 203 .
  • the device may also include general components such as an antenna, which will not be described in detail here.
  • the processor 201 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs in the above solutions.
• the communication interface 203 is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
• the memory 202 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without limitation.
  • the memory can exist independently and be connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the memory 202 is used for storing the application program code for executing the above solution, and the execution is controlled by the processor 201 .
  • the processor 201 is configured to execute the application code stored in the memory 202 .
• the code stored in the memory 202 can execute the blind spot image acquisition method provided in FIG. 3 above, for example: determining the pose relationship between the current frame of driving images and multiple frames of historical driving images; determining the target distance between the obstacle and the target terminal; if the target distance is less than or equal to a preset distance threshold, obtaining the filling pixel points of the blind area from the multi-frame historical driving images according to the pose relationship and the blind area position information of the blind area corresponding to the current frame of driving images; and outputting the blind area image based on the filling pixel points corresponding to the multiple target points.
  • An embodiment of the present application further provides an apparatus, where the apparatus includes a processor, and the processor is configured to:
• if the target distance is less than or equal to a preset distance threshold, obtain, from the multi-frame historical driving images, the filling pixel point of each of the multiple target points in the blind area, where the filling pixel point is the pixel point with the nearest shooting time corresponding to the target point in the multi-frame historical driving images;
  • the blind area image is output based on the filled pixel points corresponding to the multiple target points.
• the multiple frames of historical driving images are m frames of historical driving images; the processor is specifically configured to: when the target distance is less than or equal to a preset distance threshold, sort the m frames of historical driving images according to shooting time, m being an integer greater than 1; and obtain, according to the pose relationship and the blind area position information of the blind area corresponding to the current frame of driving images, a target pixel point set from the xth frame of historical driving images as the set of filling pixel points corresponding to the multiple target points, where the target pixel point set includes the filling pixel points corresponding to the multiple target points, and the xth frame of historical driving images is the first frame, among the m frames of historical driving images traversed in reverse order of shooting time, in which the target pixel point set appears.
• the processor is further configured to: when the target pixel point set is a set of filling pixel points corresponding to only some of the target points, sequentially acquire a target pixel point set from the (x+1)th frame of historical driving images onward as the set of filling pixel points corresponding to the remaining target points, until the multiple target points in the blind area are all filled.
• the target points are divided into first-type target points and second-type target points, and the target pixel point set is the set of all pixel points on the line segments whose endpoints are the filling pixel points corresponding to the first-type target points and the second-type target points in the xth frame of historical driving images.
• the processor is further configured to: when the target distance is greater than the preset distance threshold, obtain, according to the pose relationship and the blind spot position information of the blind area corresponding to the current frame of driving images, the filling pixel point of each of the multiple target points in the blind area from the multiple frames of historical driving images; wherein, when the (x+1)th frame of historical driving images includes pixel points of the same target point as the xth frame of historical driving images, the filling pixel point corresponding to that same target point is the pixel point corresponding to that target point in whichever of the (x+1)th frame and the xth frame of historical driving images corresponds to the larger number of target points.
• the processor is further configured to: acquire the current frame of driving images and the multiple frames of historical driving images by using a multi-camera system before determining the pose relationship between the current frame of driving images and the one or more frames of historical driving images.
• the processor is further configured to: acquire the current frame of driving images and the multiple frames of historical driving images through a monocular camera before determining the pose relationship between the current frame of driving images and the multi-frame historical driving images; obtain the speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames of driving images; and determine, based on the distance information, the size value of the depth estimation of the monocular camera, where the size value is used to indicate the size of a unit length during depth estimation; the processor is specifically configured to: determine, based on the size value, the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • the processor is specifically configured to: perform image feature detection on the current frame of driving images and the multi-frame historical driving images, and obtain the current frame of driving images and the multi-frame historical driving images Image feature points between the driving images, the image feature points are the common viewpoints between the current frame of driving images and the multiple frames of historical driving images; based on the image feature points and size values, determine the current frame The pose relationship between the driving image and the multiple frames of historical driving images.
  • the device mentioned in the embodiments of this application may be a chip, a control device, or a processing module, etc., which is used to perform image processing on the environmental image around the terminal to obtain a blind spot image.
  • the specific form of the device is not specifically limited in this application.
  • the embodiment of the present application also provides an electronic device, which can be applied to the above application scenario; the electronic device includes a processor and a memory, where the memory is used to store the image processing program code, and the processor is used to call the image processing program code to execute:
  • the target distance is less than or equal to a preset distance threshold
  • obtain the filling pixel point of each target point among the multiple target points, where the filling pixel point is the pixel point in the multi-frame historical driving images corresponding to the target point with the most recent shooting time;
  • the blind area image is output based on the filled pixel points corresponding to the multiple target points.
  • the multiple frames of historical driving images are m frames of historical driving images
  • the processor is specifically configured to call the blind spot image acquisition program code to execute: when the target distance is less than or equal to the preset distance threshold
  • the m frames of historical driving images are sorted according to the shooting time, and m is an integer greater than 1
  • a set of target pixels is obtained from the xth frame of historical driving images as the set of filling pixels corresponding to the multiple target points, where the set of target pixels includes the filling pixels corresponding to the multiple target points, and the xth frame of historical driving images is the frame among the m frames that includes the set of target pixels and whose shooting time is closest to the current time, x = 1, 2, 3 … m
  • the processor is further configured to call the blind area image acquisition program code to execute: when the target pixel point set is a set of filling pixel points corresponding to only part of the target points, sequentially obtain a set of target pixel points from the (x+1)th frame of historical driving images as the set of filling pixel points corresponding to the remaining target points, until the multiple target points in the blind area are all filled.
  • the target points are divided into first-class target points and second-class target points, and the target pixel point set is the set of all pixels in the xth frame of historical driving images on the line segments whose endpoints are the filling pixel of a first-class target point and the filling pixel of the corresponding second-class target point, where the first-class and second-class target points lie on different boundaries of the blind area, correspond one to one, and are distributed axisymmetrically within it.
  • the processor is further configured to call the blind spot image acquisition program code to execute: when the target distance is greater than the preset distance threshold, obtain, according to the pose relationship and the blind spot position information of the blind spot area corresponding to the current frame of driving images, the filling pixel points of each of the multiple target points in the blind spot area from the multi-frame historical driving images; where, when the (x+1)th frame of historical driving images includes pixel points of the same target points as the xth frame of historical driving images, the filling pixels corresponding to those same target points are the pixels of whichever of the (x+1)th and xth frames contains the greater number of the target points.
  • the processor is further configured to call the blind spot image acquisition program code to execute: before determining the pose relationship between the current frame of driving images and one or more frames of historical driving images, obtain the current frame of driving images and the multiple frames of historical driving images through a monocular camera, or through a multi-lens camera.
  • the processor is further configured to call the blind spot image acquisition program code to execute: before determining the pose relationship between the current frame of driving images and the multi-frame historical driving images, obtain the current frame of driving images and the multi-frame historical driving images through a monocular camera; obtain the speed information of the target terminal; based on the speed information, obtain distance information between adjacent frames of driving images; based on the distance information, determine the size value of the depth estimation of the monocular camera, where the size value is used to indicate the size of the unit length during depth estimation; the processor is specifically configured to call the blind spot image acquisition program code to execute: based on the size value, determine the pose relationship between the current frame of driving images and the multiple frames of historical driving images.
  • the processor is specifically configured to call the blind spot image acquisition program code to execute: perform image feature detection on the current frame of driving images and the multi-frame historical driving images to obtain the image feature points between the current frame of driving images and the multi-frame historical driving images, the image feature points being the co-visible points between the current frame of driving images and the multi-frame historical driving images; and, based on the image feature points and the size value, determine the pose relationship between the current frame of driving images and the multi-frame historical driving images.
  • the electronic device mentioned in the embodiments of this application may be a server or processing device in the cloud, or may be a blind-spot image acquisition device communicatively connected with the intelligent terminal, which is not specifically limited in this application.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between 2 or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • a component may, for example, communicate by way of local and/or remote processes based on a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
  • the disclosed apparatus may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the above-mentioned units is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical or take other forms.
  • the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • in essence, the technical solutions of the present application, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, specifically a processor in the computer device) execute all or part of the steps of the above methods in the various embodiments of the present application.
  • the aforementioned storage medium may include any medium that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this application disclose a blind-zone image acquisition method and a related terminal apparatus. The method includes: determining a pose relationship between a current frame of driving image and multiple frames of historical driving images; determining a target distance between an obstacle and a target terminal; if the target distance is less than or equal to a preset distance threshold, obtaining filling pixels for the blind zone from the multiple frames of historical driving images according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame; and outputting the blind-zone image based on the filling pixels corresponding to the multiple target points. By implementing the embodiments of this application, an image of the blind zone at the bottom of the terminal can be obtained.

Description

Blind-Zone Image Acquisition Method and Related Terminal Apparatus — Technical Field

This application relates to the field of terminal technologies, and in particular to a blind-zone image acquisition method and a related terminal apparatus.

Background

With advances in electronics, computing, sensor technology, and new materials, driving safety and the driving experience receive ever more attention from manufacturers and users. Compared with many other sensors, the camera system on a terminal can use visual information to provide a wider field of view while the terminal is moving and give the user more intuitive and accurate information about the surroundings. For example, when the terminal is a vehicle, the images captured at the same moment by the surround-view cameras can be processed to obtain the environment around the vehicle body, but the blind zone under the vehicle remains invisible in existing ordinary in-vehicle panoramic surround-view systems. When the vehicle travels over rugged or uneven roads, or where there is standing water, stones, or other obstacles, the underbody is easily scraped, and stones squeezed by the tires can strike the chassis. If a camera is simply installed in the blind zone at the bottom of the terminal, varying road conditions often damage the camera or block its view, making it impossible to collect visual information of the blind zone at the bottom of the moving terminal.

How to effectively obtain an image of the blind zone at the bottom of a terminal is therefore an urgent problem to be solved.
Summary

Embodiments of this application provide a blind-zone image acquisition method and related terminal apparatus that can accurately obtain an image of the blind zone at the bottom of a terminal.

The blind-zone image acquisition method provided in this application may be performed by an electronic apparatus, a blind-zone image acquisition apparatus, or the like. An electronic apparatus is a device that can be abstracted as a computer system and supports image processing, and may also be called an image processing apparatus. The blind-zone image acquisition apparatus may be the complete electronic apparatus or a component of it, for example a chip supporting image processing or blind-zone image acquisition, such as a system chip (also called a system on chip, or SoC) or an image chip. Specifically, the blind-zone image acquisition apparatus may be a device such as the on-board computer of a smart vehicle, or a system chip or image acquisition chip that can be installed in the computer system or image processing system of an intelligent terminal.

In a first aspect, an embodiment of this application provides a blind-zone image acquisition method, which may include:

determining a pose relationship between a current frame of driving image and multiple frames of historical driving images, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the direction of travel of a target terminal;

determining a target distance between an obstacle and the target terminal;

when the target distance is less than or equal to a preset distance threshold, obtaining from the historical frames, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone, where the filling pixel is the pixel corresponding to the target point whose capture time is the most recent among the historical frames;

outputting the blind-zone image based on the filling pixels corresponding to the multiple target points.

With the method of the first aspect, when the target distance between an obstacle and the target terminal is less than or equal to the preset distance threshold, the blind-zone image is obtained by selecting, from the multiple driving images, the most recently captured pixel as the filling pixel of each target point in the blind zone. Choosing the most recently captured pixels maximizes the clarity of the resulting blind-zone image and reduces the probability of occlusion by obstacles. Filling the under-vehicle blind zone in this way lets the driver observe the vehicle's position from multiple angles and prevents tire wear and chassis scraping, so the user can better observe the area around the terminal, the tires, and the bottom of the terminal, assisting with stopping and minimizing damage accidents, greatly improving the driving experience and safety.
In one possible implementation, the multiple historical frames are m frames of historical driving images, and obtaining the filling pixels when the target distance is less than or equal to the preset threshold includes: sorting the m historical frames by capture time, m being an integer greater than 1; and, according to the pose relationship and the blind-zone position information, obtaining a target pixel set from the xth historical frame as the set of filling pixels for the multiple target points, where the target pixel set includes the filling pixels of the multiple target points, and the xth historical frame is the frame among the m frames that contains the target pixel set and whose capture time is closest to the current time, x = 1, 2, 3, …, m. Sorting by time and filling the blind zone from the most recently captured image effectively avoids problems such as obstacle occlusion caused by frame latency, yields a clearer final blind-zone image, and improves the display of the blind zone at the bottom of the terminal.
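The frame-selection step above (sort the m historical frames by capture time, then let the most recent frame that contains the needed pixels supply them) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `Frame` class, its axis-aligned coverage box, and all names here are hypothetical simplifications of the real image-region test.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    """A historical driving image: its capture time plus the ground-plane
    region it covers (axis-aligned box in a common world frame)."""
    t: float   # capture timestamp (larger = more recent)
    x0: float
    y0: float
    x1: float
    y1: float

    def covers(self, p):
        x, y = p
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def pick_fill_frames(frames, targets):
    """Walk the m frames from newest to oldest; each frame supplies the
    still-unfilled target points that fall inside it (the x = 1..m loop)."""
    assignment = {}
    for f in sorted(frames, key=lambda f: f.t, reverse=True):
        for p in targets:
            if p not in assignment and f.covers(p):
                assignment[p] = f.t   # most recent frame containing p
        if len(assignment) == len(targets):
            break
    return assignment
```

A point covered by several frames is thus always taken from the most recently captured one, which is the core of the "nearest capture time" rule.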
In one possible implementation, the method further includes: when the target pixel set covers only part of the target points, successively obtaining target pixel sets from the (x+1)th historical frame as the filling pixels of the remaining target points, until all of the target points in the blind zone are filled. Filling from the most recent frames in time order likewise avoids occlusion caused by frame latency, keeps the final blind-zone image clear, and improves the display of the blind zone at the bottom of the terminal.
In one possible implementation, the target points are divided into first-class target points and second-class target points, and the target pixel set is the set of all pixels in the xth historical frame on the line segments whose endpoints are the filling pixel of a first-class target point and the filling pixel of the corresponding second-class target point, where the first-class and second-class target points lie on different boundaries of the blind zone, correspond one to one, and are distributed axisymmetrically within it. Only the endpoint positions need to be checked against the pixel range of the historical frame, and the blind zone is filled in strips, which greatly reduces the computation in practice, improves the efficiency of blind-zone image acquisition, and shortens the delay.
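The strip-fill idea above — test only the paired boundary endpoints against the historical frame, then fill the whole segment between them — can be sketched roughly as below. The vertical-strip layout, the `frame_contains` predicate, and the integer pixel coordinates are illustrative assumptions, not details from the patent.

```python
def strip_fill(frame_contains, pairs):
    """pairs: list of ((x, y0), (x, y1)) endpoint pairs, one first-class and
    one second-class target point on opposite blind-zone boundaries.
    Only the two endpoints are tested against the historical frame; if both
    lie inside it, every integer pixel on the segment between them is filled."""
    filled = []
    for (xa, ya), (xb, yb) in pairs:
        if frame_contains((xa, ya)) and frame_contains((xb, yb)):
            lo, hi = sorted((ya, yb))
            filled.extend((xa, y) for y in range(lo, hi + 1))
    return filled
```

Two membership tests per strip replace one test per pixel, which is where the claimed computation saving comes from.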
In one possible implementation, the method further includes: when the target distance is greater than the preset distance threshold, obtaining from the historical frames, according to the pose relationship and the blind-zone position information, a filling pixel for each of the multiple target points in the blind zone; where, when the (x+1)th historical frame contains pixels of the same target points as the xth historical frame, the filling pixels of those shared target points are taken from whichever of the two frames contains the greater number of the target points. For the blind-zone region where consecutive historical frames overlap, choosing the frame with the larger coverage minimizes the number and extent of stitching seams, effectively avoids the image misalignment and brightness inconsistency caused by multi-image stitching, and improves the display of the blind zone at the bottom of the terminal.
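A rough sketch of the overlap rule above: when two consecutive historical frames can both supply the same target points, all the shared points are taken from the frame that covers more target points, so fewer seams are stitched. The set-based representation is an assumption made purely for illustration.

```python
def resolve_overlap(points_x, points_x1):
    """points_x / points_x1: the sets of target points that frame x and
    frame x+1 can each provide. The whole overlap is assigned to the frame
    covering more target points, keeping the stitch count low."""
    shared = points_x & points_x1
    donor = 'x' if len(points_x) >= len(points_x1) else 'x+1'
    return {p: donor for p in shared}
```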
In one possible implementation, before determining the pose relationship between the current frame and one or more historical frames, the method further includes: obtaining the current frame and the multiple historical frames through a multi-lens camera. Capturing the driving images with a multi-lens camera greatly improves the clarity of the acquired blind-zone image and improves driving safety. It should be noted that the camera may be installed facing the direction of travel of the target terminal, as driving requirements dictate.

In one possible implementation, before determining the pose relationship between the current frame and one or more historical frames, the method further includes: obtaining the current frame and the multiple historical frames through a monocular camera. Capturing the driving images with a monocular camera greatly relaxes the hardware requirements of the method, lowers the barrier to adoption, and improves driving safety. Again, the camera may be installed facing the direction of travel of the target terminal.
In one possible implementation, before determining the pose relationship between the current frame and one or more historical frames, the method further includes: obtaining the current frame and the historical frames through a monocular camera; obtaining speed information of the target terminal; obtaining, based on the speed information, distance information between adjacent frames; and determining, based on the distance information, a size value for the depth estimation of the monocular camera, the size value indicating the magnitude of a unit length during depth estimation. Determining the pose relationship then includes: determining, based on the size value, the pose relationship between the current frame and the historical frames. When driving images are captured with a monocular camera, the speed signal of the target terminal is needed only to initialize the depth-estimation size value; only the data at the starting moment is required, with no need for high-precision speed data at every moment. This avoids over-reliance on speed information, so blind-zone images can still be acquired in scenarios such as traffic jams, and acquisition remains normal with maximum probability even when the speed signal fails, preserving safe driving and reducing driving hazards.
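The scale initialization described above can be sketched as follows: monocular structure-from-motion recovers translation only up to an unknown scale, and the speed signal fixes the metric distance between adjacent frames. The function name and arguments below are illustrative assumptions, not from the patent.

```python
def depth_scale(speed_mps, frame_dt_s, unit_translation):
    """`unit_translation` is the norm of the (unitless) inter-frame
    translation estimated by monocular SfM. The bus-reported speed fixes
    the metric distance actually travelled between the two frames, and
    their ratio is the size value (metres per SfM unit) used to
    initialise depth estimation. It is only needed once, at start-up."""
    metric_distance = speed_mps * frame_dt_s   # metres between frames
    return metric_distance / unit_translation
```

For example, at 10 m/s with frames 0.1 s apart and an estimated unit translation of 0.5, one SfM unit corresponds to 2 metres.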
In one possible implementation, determining the pose relationship includes: performing image feature detection on the current frame and the historical frames to obtain the image feature points between them, the feature points being points co-visible in the current frame and the historical frames; and determining the pose relationship between the current frame and the historical frames based on the feature points and the size value. Determining the pose relationship from feature points shared across multiple frames improves the accuracy of the estimated pose and thus the efficiency of blind-zone image acquisition.
In a second aspect, an embodiment of this application provides a blind-zone image acquisition apparatus, including:

a first determining unit, configured to determine a pose relationship between a current frame of driving image and multiple frames of historical driving images, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the direction of travel of a target terminal;

a second determining unit, configured to determine a target distance between an obstacle and the target terminal;

a first obtaining unit, configured to: when the target distance is less than or equal to a preset distance threshold, obtain from the historical frames, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone, the filling pixel being the pixel corresponding to the target point with the most recent capture time among the historical frames; and

an output unit, configured to output the blind-zone image based on the filling pixels corresponding to the multiple target points.

In one possible implementation, the historical frames are m frames, and the first obtaining unit is specifically configured to: when the target distance is less than or equal to the preset threshold, sort the m frames by capture time, m being an integer greater than 1; and obtain, according to the pose relationship and the blind-zone position information, a target pixel set from the xth frame as the set of filling pixels for the target points, where the target pixel set includes those filling pixels and the xth frame is the frame among the m frames that contains the target pixel set and whose capture time is closest to the current time, x = 1, 2, 3, …, m.

In one possible implementation, the first obtaining unit is further specifically configured to: when the target pixel set covers only part of the target points, successively obtain target pixel sets from the (x+1)th frame for the remaining target points, until all target points in the blind zone are filled.

In one possible implementation, the target points are divided into first-class and second-class target points, and the target pixel set is the set of all pixels in the xth frame on the segments whose endpoints are the filling pixels of corresponding first-class and second-class target points, where the two classes lie on different boundaries of the blind zone, correspond one to one, and are distributed axisymmetrically within it.

In one possible implementation, the first obtaining unit is further configured to: when the target distance is greater than the preset threshold, obtain the filling pixels of the blind zone from the historical frames according to the pose relationship and the blind-zone position information; where, when the (x+1)th frame contains pixels of the same target points as the xth frame, the filling pixels of those points are taken from whichever of the two frames contains the greater number of the target points.

In one possible implementation, the apparatus further includes a second obtaining unit, configured to obtain the current frame and the historical frames through a monocular camera, or through a multi-lens camera, before the pose relationship is determined.

In one possible implementation, the apparatus further includes a third obtaining unit, configured to: before the pose relationship is determined, obtain the current frame and the historical frames through a monocular camera; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames; and determine, based on the distance information, a size value for the depth estimation of the monocular camera, the size value indicating the magnitude of a unit length during depth estimation. The first determining unit is then specifically configured to determine the pose relationship based on the size value.

In one possible implementation, the first determining unit is specifically configured to: perform image feature detection on the current frame and the historical frames to obtain the feature points co-visible between them, and determine the pose relationship between the current frame and the historical frames based on the feature points and the size value.
In a third aspect, an embodiment of this application provides an apparatus that may include a processor, the processor being configured to:

determine a pose relationship between a current frame of driving image and multiple frames of historical driving images, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the direction of travel of a target terminal;

determine a target distance between an obstacle and the target terminal;

when the target distance is less than or equal to a preset distance threshold, obtain from the historical frames, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone, the filling pixel being the pixel corresponding to the target point with the most recent capture time among the historical frames; and

output the blind-zone image based on the filling pixels corresponding to the multiple target points.

In one possible implementation, the historical frames are m frames, and the processor is specifically configured to: when the target distance is less than or equal to the preset threshold, sort the m frames by capture time, m being an integer greater than 1; and obtain, according to the pose relationship and the blind-zone position information, a target pixel set from the xth frame as the set of filling pixels for the target points, where the target pixel set includes those filling pixels and the xth frame is the frame among the m frames that contains the target pixel set and whose capture time is closest to the current time, x = 1, 2, 3, …, m.

In one possible implementation, the processor is further configured to: when the target pixel set covers only part of the target points, successively obtain target pixel sets from the (x+1)th frame for the remaining target points until all target points in the blind zone are filled.

In one possible implementation, the target points are divided into first-class and second-class target points, and the target pixel set is the set of all pixels in the xth frame on the segments whose endpoints are the filling pixels of corresponding first-class and second-class target points, where the two classes lie on different boundaries of the blind zone, correspond one to one, and are distributed axisymmetrically within it.

In one possible implementation, the processor is further configured to:

when the target distance is greater than the preset threshold, obtain from the historical frames, according to the pose relationship and the blind-zone position information, a filling pixel for each of the target points; where, when the (x+1)th frame contains pixels of the same target points as the xth frame, the filling pixels of those points are taken from whichever of the two frames contains the greater number of the target points.

In one possible implementation, the processor is further configured to obtain the current frame and the historical frames through a multi-lens camera before determining the pose relationship.

In one possible implementation, the processor is further configured to: before determining the pose relationship, obtain the current frame and the historical frames through a monocular camera; obtain speed information of the target terminal; obtain, based on the speed information, distance information between adjacent frames; and determine, based on the distance information, a size value for the depth estimation of the monocular camera, the size value indicating the magnitude of a unit length during depth estimation; the processor is then specifically configured to determine the pose relationship based on the size value.

In one possible implementation, the processor is specifically configured to: perform image feature detection on the current frame and the historical frames to obtain the feature points co-visible between them, and determine the pose relationship between the current frame and the historical frames based on the feature points and the size value.
In a fourth aspect, an embodiment of this application provides an electronic apparatus that may include a processor and a memory, where the memory is used to store blind-zone image acquisition program code and the processor is used to call that program code to execute:

determining a pose relationship between a current frame of driving image and multiple frames of historical driving images, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the direction of travel of a target terminal;

determining a target distance between an obstacle and the target terminal;

when the target distance is less than or equal to a preset distance threshold, obtaining from the historical frames, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone, the filling pixel being the pixel corresponding to the target point with the most recent capture time among the historical frames; and

outputting the blind-zone image based on the filling pixels corresponding to the multiple target points.

In one possible implementation, the historical frames are m frames, and the processor is further configured to call the program code to execute: when the target distance is less than or equal to the preset threshold, sorting the m frames by capture time, m being an integer greater than 1, and obtaining, according to the pose relationship and the blind-zone position information, a target pixel set from the xth frame as the set of filling pixels for the target points, where the target pixel set includes those filling pixels and the xth frame is the frame among the m frames that contains the target pixel set and whose capture time is closest to the current time, x = 1, 2, 3, …, m.

In one possible implementation, the processor is further configured to call the program code to execute: when the target pixel set covers only part of the target points, successively obtaining target pixel sets from the (x+1)th frame for the remaining target points until all target points in the blind zone are filled.

In one possible implementation, the target points are divided into first-class and second-class target points, and the target pixel set is the set of all pixels in the xth frame on the segments whose endpoints are the filling pixels of corresponding first-class and second-class target points, where the two classes lie on different boundaries of the blind zone, correspond one to one, and are distributed axisymmetrically within it.

In one possible implementation, the processor is further configured to call the program code to execute: when the target distance is greater than the preset threshold, obtaining from the historical frames, according to the pose relationship and the blind-zone position information, a filling pixel for each of the target points; where, when the (x+1)th frame contains pixels of the same target points as the xth frame, the filling pixels of those points are taken from whichever of the two frames contains the greater number of the target points.

In one possible implementation, the processor is further configured to call the program code to execute: before determining the pose relationship between the current frame and one or more historical frames, obtaining the current frame and the historical frames through a monocular camera, or through a multi-lens camera.

In one possible implementation, the processor is further configured to call the program code to execute: before determining the pose relationship, obtaining the current frame and the historical frames through a monocular camera; obtaining speed information of the target terminal; obtaining, based on the speed information, distance information between adjacent frames; and determining, based on the distance information, a size value for the depth estimation of the monocular camera, the size value indicating the magnitude of a unit length during depth estimation; the processor is specifically configured to call the program code to execute: determining, based on the size value, the pose relationship between the current frame and the historical frames.

In one possible implementation, the processor is specifically configured to call the program code to execute: performing image feature detection on the current frame and the historical frames to obtain the feature points co-visible between them, and determining the pose relationship between the current frame and the historical frames based on the feature points and the size value.
In a fifth aspect, an embodiment of this application provides a computer storage medium storing the computer software instructions used by the blind-zone image acquisition method of the first aspect, including a program designed to perform the above aspects.

In a sixth aspect, an embodiment of this application provides a computer program including instructions that, when executed by a computer, enable the computer to perform the procedure of the blind-zone image acquisition method of the first aspect.

In a seventh aspect, an embodiment of this application provides a smart vehicle including an image processing system, where the image processing system is used to perform the corresponding functions of the blind-zone image acquisition method of the first aspect.

In an eighth aspect, this application provides a chip system including a processor configured to support an electronic apparatus in implementing the functions involved in the first aspect, for example generating or processing the information involved in the blind-zone image acquisition method. In one possible design, the chip system further includes a memory for storing the program instructions and data necessary for the data transmitting apparatus. The chip system may consist of a chip, or may include a chip and other discrete components.

Through the embodiments of this application, when the target distance between an obstacle and the target terminal is less than or equal to the preset distance threshold, the blind-zone image is obtained by selecting the most recently captured pixels in the multiple driving images as the filling pixels of the target points in the blind zone. This maximizes the clarity of the blind-zone image and reduces the probability of obstacle occlusion. Filling the under-vehicle blind zone in this way lets the driver observe the vehicle's position from multiple angles, prevents tire wear and chassis scraping, helps the user observe the surroundings, the tires, and the bottom of the terminal, assists in stopping, and minimizes damage accidents, greatly improving the driving experience and safety.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application or in the background more clearly, the accompanying drawings used in the embodiments or the background are described below.

Fig. 1 is a functional block diagram of a smart vehicle 001 according to an embodiment of this application.

Fig. 2 is a schematic structural diagram of a computing apparatus in a smart vehicle according to an embodiment of this application.

Fig. 3 is a schematic architecture diagram of a blind-zone image acquisition system according to an embodiment of this application.

Fig. 4 is a schematic flowchart of a blind-zone image acquisition method according to an embodiment of this application.

Fig. 5 is a schematic flowchart of determining the pose relationship between multiple frames of driving images according to an embodiment of this application.

Fig. 6 is a schematic diagram of a scenario of a vehicle in motion according to an embodiment of this application.

Fig. 7 is a schematic diagram of target points and filling pixels according to an embodiment of this application.

Fig. 8 is a schematic diagram of blind-zone filling according to an embodiment of this application.

Fig. 9 is a schematic diagram of another kind of blind-zone filling according to an embodiment of this application.

Fig. 10 is a schematic diagram of several distributions of first-class and second-class target points according to an embodiment of this application.

Fig. 11 is a schematic diagram of several blind-zone regions according to an embodiment of this application.

Fig. 12 is a schematic diagram of a target vehicle in one application scenario according to an embodiment of this application.

Fig. 13 is a schematic flowchart of the blind-zone image acquisition method in one application scenario according to an embodiment of this application.

Fig. 14 is a schematic diagram in which one historical frame can provide the blind-zone image according to an embodiment of this application.

Fig. 15 is a schematic diagram in which multiple historical frames provide the blind-zone image according to an embodiment of this application.

Fig. 16 is a schematic structural diagram of a blind-zone image acquisition apparatus according to an embodiment of this application.

Fig. 17 is a schematic structural diagram of another blind-zone image acquisition apparatus according to an embodiment of this application.
Detailed Description

The embodiments of this application are described below with reference to the accompanying drawings.

First, it should be noted that the terminals, intelligent terminals, target terminals, and terminal apparatuses involved in the embodiments of this application may include, but are not limited to, vehicles, mobile robots, movable terminal apparatuses, and the like.

To facilitate understanding of the embodiments, a terminal apparatus equipped with a blind-zone image acquisition system on which the embodiments are based is first described, taking a smart vehicle as an example.
Refer to Fig. 1, which is a functional block diagram of a smart vehicle 001 according to an embodiment of this application.

In one embodiment, the smart vehicle 001 may be configured in a fully or partially autonomous driving mode. For example, while in the autonomous mode the smart vehicle 001 may control itself; the current state of the vehicle and its surroundings may be determined through human operation, the possible behavior of at least one other vehicle in the surroundings determined, a confidence level corresponding to the likelihood of that behavior determined, and the smart vehicle 001 controlled based on the determined information. While in the autonomous mode, the smart vehicle 001 may be set to operate without human interaction.

The smart vehicle 001 may include various subsystems, for example a travel system 202, a sensor system 204, a control system 206, one or more peripheral devices 208, a power supply 210, a computer system 212, and a user interface 216. Optionally, the smart vehicle 001 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the smart vehicle 001 may be interconnected by wire or wirelessly.

The travel system 202 may include components that provide powered motion for the smart vehicle 001. In one embodiment, the travel system 202 may include an engine 218, an energy source 219, a transmission 220, and wheels/tires 221.

The engine 218 may be an internal-combustion engine, an electric motor, an air-compression engine, or a combination of engine types, for example a hybrid of a gasoline engine and an electric motor, or a hybrid of an internal-combustion engine and an air-compression engine. The engine 218 converts the energy source 219 into mechanical energy.

Examples of the energy source 219 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 219 may also provide energy for other systems of the smart vehicle 001.

The transmission 220 may transmit mechanical power from the engine 218 to the wheels 221. The transmission 220 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 220 may also include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more of the wheels 221.

The sensor system 204 may include several sensors that sense information about the environment around the smart vehicle 001. For example, the sensor system 204 may include a positioning system 222 (which may be the global positioning system (GPS), the BeiDou system, or another positioning system), an inertial measurement unit (IMU) 224, radar 226, a laser rangefinder 228, and a camera 230. The sensor system 204 may also include sensors that monitor internal systems of the smart vehicle 001 (for example, an in-cabin air-quality monitor, a fuel gauge, or an oil-temperature gauge). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, and so on). Such detection and recognition is a key function for the safe operation of the autonomous smart vehicle 001.

The positioning system 222 may be used to estimate the geographic position of the smart vehicle 001.

The IMU 224 senses changes in the position and orientation of the smart vehicle 001 based on inertial acceleration. In one embodiment, the IMU 224 may be a combination of an accelerometer and a gyroscope; for example, the IMU 224 may be used to measure the curvature of the smart vehicle 001's path.

The radar 226 may use radio signals to sense objects in the surroundings of the smart vehicle 001. In some embodiments, in addition to sensing objects, the radar 226 may also be used to sense their speed and/or heading.

For example, in an embodiment of this application the radar 226 may be used to detect obstacles, static or dynamic, in the direction of travel of the smart vehicle 001, and also to obtain their position, speed, and direction of movement, to assist the safe driving of the smart vehicle 001 and to complete the visual picture of the surroundings.

The laser rangefinder 228 may use laser light to sense objects in the environment in which the smart vehicle 001 is located. In some embodiments, the laser rangefinder 228 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.

For example, in an embodiment of this application the laser rangefinder 228 may assist the radar 226 in detecting obstacles in the direction of travel and in obtaining the distance between an obstacle's position and the smart vehicle 001, to assist safe driving and complete the visual picture of the surroundings.

The camera 230 may be used to capture multiple images of the surroundings of the smart vehicle 001. The camera 230 may be a still camera or a video camera, and may also form part of a 360° around view monitor (AVM) system for monitoring obstacles and road conditions around the vehicle. The camera 230 may include multiple on-board cameras of the same or different specifications, such as wide-angle, telephoto, fisheye, standard, and zoom cameras. Depending on the specific application, cameras may also be divided into monocular and multi-lens cameras, applied in different driving scenarios.

For example, in an embodiment of this application the camera 230 may include a monocular camera of any type, such as a fisheye or wide-angle camera, used to photograph the surroundings in the direction of travel of the smart vehicle; the speed information of the vehicle is then used to determine the size value for the camera 230's depth estimation and hence the pose relationship between multiple frames of driving images, and finally the image of the vehicle's blind zone is generated by stitching according to that pose relationship, assisting the driver with driving, parking, and passing maneuvers. As another example, when the camera 230 includes a binocular camera, the size value for depth estimation can be determined from the driving images captured by the different lenses, and the pose relationship between frames determined accordingly; finally, the blind-zone image is generated by stitching according to that pose relationship, assisting the driver. For instance, feature detection and matching are performed on the two images captured by the binocular camera at the same moment, and from the matching result and the baseline distance calibrated when the binocular camera was installed, the image depth is computed directly, giving the three-dimensional spatial coordinates of the feature points.
The control system 206 controls the operation of the smart vehicle 001 and its components. The control system 206 may include various elements, including a steering system 232, a throttle 234, a braking unit 236, a sensor-fusion algorithm 238, a computer vision system 240, a route control system 242, and an obstacle avoidance system 244.

The steering system 232 is operable to adjust the heading of the smart vehicle 001; for example, in one embodiment it may be a steering-wheel system.

The throttle 234 is used to control the operating speed of the engine 218 and thereby the speed of the smart vehicle 001.

The braking unit 236 is used to decelerate the smart vehicle 001. The braking unit 236 may use friction to slow the wheels 221; in other embodiments, it may convert the kinetic energy of the wheels 221 into electric current. The braking unit 236 may also take other forms to slow the rotation of the wheels 221 and thereby control the speed of the smart vehicle 001.

The computer vision system 240 is operable to process and analyze the images captured by the camera 230 in order to recognize objects and/or features in the surroundings of the smart vehicle 001, which may include traffic signals, road boundaries, and obstacles. The computer vision system 240 may use object-recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 240 may be used to map the environment, track objects, estimate object speeds, and so on.

For example, the computer vision system 240 may, based on the camera parameters of the multiple on-board cameras on the smart vehicle, transform the images they capture into the world coordinate system and obtain the smart vehicle's corresponding driving images in that coordinate system.

For example, in an embodiment of this application the computer vision system 240 may also obtain the filling pixels of the blind-zone image according to different strategies depending on the distance between an obstacle and the smart vehicle 001; for instance, the blind-zone image of the vehicle's blind zone corresponding to the current frame is obtained from the pose relationship between the current frame and one or more historical frames together with the blind-zone position information of the current frame. This blind-zone image acquisition method effectively avoids occlusion problems caused by frame latency, as well as the image misalignment and brightness inconsistency caused by multi-image stitching, improving the display of the under-vehicle blind zone. For the specific implementation of this method, refer to the related descriptions in the method embodiments below; it is not elaborated here.

The route control system 242 is used to determine the driving route of the smart vehicle 001. In some embodiments, the route control system 242 may combine data from the sensor-fusion algorithm 238, the GPS 222, and one or more predetermined maps to determine the route for the smart vehicle 001.

The obstacle avoidance system 244 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the smart vehicle 001.

Of course, in one instance the control system 206 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The smart vehicle 001 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 208. The peripheral devices 208 may include a wireless communication system 246, an on-board computer 248, a microphone 250, and/or a speaker 252.

In some embodiments, the peripheral devices 208 provide a means for a user of the smart vehicle 001 to interact with the user interface 216. For example, the on-board computer 248 may provide information to the user of the smart vehicle 001, and the user interface 216 may also operate the on-board computer 248 to receive user input; the on-board computer 248 may be operated through a touchscreen. In other cases, the peripheral devices 208 may provide a means for the smart vehicle 001 to communicate with other devices located in the vehicle.

The microphone 250 may receive audio (for example, voice commands or other audio input) from a user of the smart vehicle 001. The microphone 250 may also capture the noise of various devices operating inside the smart vehicle 001.

The speaker 252 may output the various required acoustic signals to the smart vehicle 001. For example, the speaker 252 may be an electro-acoustic transducer that converts electrical signals into acoustic signals.

The wireless communication system 246 may communicate wirelessly with one or more devices, directly or via a communication network. For example, the wireless communication system 246 may use 3G cellular communication, such as code division multiple access (CDMA), evolution-data optimized (EVDO), or the global system for mobile communications (GSM)/general packet radio service (GPRS); or cellular communication such as long term evolution (LTE); or 5G cellular communication. The wireless communication system 246 may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, it may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or other wireless protocols such as various vehicle communication systems; for example, the wireless communication system 246 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations. For example, in an embodiment of this application, the noise signal captured by the microphone 250 may be sent through the wireless communication system to the processor 213.

The power supply 210 may provide power to the various components of the smart vehicle 001. In one embodiment, the power supply 210 may be a rechargeable lithium-ion or lead-acid battery; one or more battery packs of such batteries may be configured as the power supply to provide power to the vehicle's components. In some embodiments, the power supply 210 and the energy source 219 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the smart vehicle 001 are controlled by the computer system 212. The computer system 212 may include at least one processor 213, which executes instructions 215 stored in a non-transitory computer-readable medium such as the memory 214. The computer system 212 may also be multiple computing devices that control individual components or subsystems of the smart vehicle 001 in a distributed manner.

The processor 213 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although Fig. 1 functionally illustrates the processor, the memory, and the other elements of the computer in the same block, a person of ordinary skill in the art will understand that the processor or memory may in fact comprise multiple processors or memories that may or may not be housed in the same physical enclosure. For example, the memory may be a hard drive or another storage medium located in an enclosure different from that of the computer. Accordingly, a reference to the processor or the computer is to be understood as including a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering and deceleration components, may each have their own processor that performs only the computations related to that component's specific function.

In the various aspects described here, the processor 213 may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described here are executed on a processor arranged inside the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.

In an embodiment of this application, the processor 213 is configured to: determine the pose relationship between the current frame of driving image and one or more historical frames; determine the target distance between an obstacle and the target vehicle; obtain, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone from the multiple historical frames, where, when the target distance is less than or equal to the preset distance threshold, the filling pixel is the most recently captured pixel corresponding to the target point among the historical frames, and, when the target distance is greater than the preset threshold and the (x+1)th historical frame contains pixels of the same target points as the xth historical frame, the filling pixels of those shared target points are taken from whichever of the two frames contains the greater number of the target points; and output the blind-zone image. For the specific implementation of this filling-pixel strategy, and for the specific computation by which the processor 213 determines the pose relationship between the current frame and one or more historical frames, refer to the related descriptions in the subsequent system and method embodiments; they are not elaborated here.

In some embodiments, the memory 214 may contain instructions 215 (for example, program logic) that can be executed by the processor 213 to perform the various functions of the smart vehicle 001, including those described above. The memory 214 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 202, the sensor system 204, the control system 206, and the peripheral devices 208.

Besides the instructions 215, in this embodiment the memory 214 may also store data, for example the multiple frames of driving images captured by the vehicle's on-board cameras, the camera parameters of each on-board camera, the blind-zone position information and blind-zone shape information of the smart vehicle, and other such vehicle data. Such information may be used by the smart vehicle 001 and/or the computer system 212 during the blind-zone image acquisition operation of the smart vehicle 001.

The user interface 216 is used to provide information to or receive information from a user of the smart vehicle 001. Optionally, the user interface 216 may include one or more input/output devices in the set of peripheral devices 208, such as the wireless communication system 246, the on-board computer 248, the microphone 250, and the speaker 252.

The computer system 212 may control the functions of the smart vehicle 001 based on input received from the various subsystems (for example, the travel system 202, the sensor system 204, and the control system 206) and from the user interface 216. For example, the computer system 212 may use input from the control system 206 to control the steering unit 232 so as to avoid an obstacle detected by the sensor system 204 and the obstacle avoidance system 244. In some embodiments, the computer system 212 is operable to provide control over many aspects of the smart vehicle 001 and its subsystems.

Optionally, one or more of these components may be installed separately from or associated with the smart vehicle 001. For example, the memory 214 may exist partly or completely separately from the smart vehicle 001. The components described above may be communicatively coupled together in a wired and/or wireless manner.

Optionally, the above components are only an example; in practical applications, components in the modules above may be added or removed according to actual needs, and Fig. 1 should not be understood as limiting the embodiments of this application.
A self-driving vehicle travelling on a road, such as the smart vehicle 001 above, can identify the distance between itself and obstacles in its direction of travel; that distance determines which strategy is selected for acquiring the current blind-zone image.

Optionally, the smart vehicle 001 or a computing device associated with it (such as the computer system 212, the computer vision system 240, or the memory 214 of Fig. 1) may predict the behavior of a recognized object based on its characteristics and the state of the surrounding environment (for example, static or dynamic objects in a parking lot). Optionally, each recognized object depends on the behavior of the others, so all recognized objects may also be considered together to predict the behavior of a single recognized object. The smart vehicle 001 can adjust its speed based on the predicted behavior of the recognized object. In other words, the self-driving vehicle can determine what steady state it will need to adjust to (for example, accelerating, decelerating, or stopping) based on the predicted behavior of the object. Other factors may also be considered in this process to determine the speed of the smart vehicle 001, such as its lateral position on the road it is travelling, the curvature of the road, and the proximity of static and dynamic objects.

Besides providing instructions to adjust the speed of the self-driving vehicle, the computing device may also provide instructions to modify the steering angle of the smart vehicle 001 so that the self-driving vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (for example, cars in adjacent lanes on the road).

The smart vehicle 001 may be any of various vehicles with on-board cameras, such as a car, truck, motorcycle, bus, recreational vehicle, amusement-park vehicle, construction equipment, tram, golf cart, train, or cart; the embodiments of this application impose no particular limitation.

It can be understood that the functional diagram of the smart vehicle in Fig. 1 is only one exemplary implementation of the embodiments of this application; the smart vehicle in the embodiments of this application includes but is not limited to the structure above.
Refer to Fig. 2, which is a schematic structural diagram of a computing apparatus in a smart vehicle according to an embodiment of this application, applied to Fig. 1 above and corresponding to the computer system 212 shown in Fig. 1. It may include a processor 203 coupled to a system bus 205. The processor 203 may be one or more processors, each of which may include one or more processor cores, corresponding to the processor 213 shown in Fig. 1. A memory 235, which may store related data information, is coupled to the system bus 205 and corresponds to the memory 214 shown in Fig. 1. A video adapter 207 may drive a display 209, which is coupled to the system bus 205. The system bus 205 is coupled to an input/output (I/O) bus 213 through a bus bridge 201. An I/O interface 215 is coupled to the I/O bus and communicates with a variety of I/O devices, for example an input device 217 (such as a keyboard, mouse, or touchscreen), a media tray 221 (for example a compact disc read-only memory (CD-ROM) or a multimedia interface), a transceiver 223 (which can send and/or receive radio communication signals), a camera 255 (which can capture static and dynamic digital video images), and an external universal serial bus (USB) interface 225. Optionally, the interface connected to the I/O interface 215 may be a USB interface.

The processor 203 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the two. Optionally, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC). Optionally, the processor 203 may be a neural-network processor or a combination of a neural-network processor and the conventional processors above. For example, the processor 203 may determine the pose relationship between the current frame of driving image and one or more historical frames; determine the target distance between an obstacle and the target vehicle; obtain, according to the pose relationship and the blind-zone position information of the blind zone corresponding to the current frame, a filling pixel for each of multiple target points in the blind zone from the historical frames, where, when the target distance is less than or equal to the preset threshold, the filling pixel is the most recently captured pixel corresponding to the target point among the historical frames, and, when the target distance is greater than the preset threshold and the (x+1)th historical frame contains pixels of the same target points as the xth historical frame, the filling pixels of those shared points are taken from whichever of the two frames contains the greater number of the target points; and output the blind-zone image.

The computer system 212 may communicate with a software deploying server 249 through a network interface 229. The network interface 229 is a hardware network interface, for example a network card. The network 227 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN). Optionally, the network 227 may also be a wireless network, such as a WiFi network or a cellular network.

The transceiver 223 (which can send and/or receive radio communication signals) may use, without limitation, second-generation (2G), third-generation (3G), fourth-generation (4G), or fifth-generation (5G) mobile communication networks and other wireless communication schemes, or technologies such as dedicated short range communications (DSRC) or long term evolution-vehicle (LTE-V). Its main function is to receive information data sent by external devices and to send the information data from the vehicle's travel on the target road section back to external devices for storage and analysis.

A hard-disk drive interface 231 is coupled to the system bus 205 and connected to a hard-disk drive 233. A system memory 235 is coupled to the system bus 205. Data running in the system memory 235 may include the operating system 237 and the application programs 243 of the computer system 212.

The memory 235 is coupled to the system bus 205.

The operating system includes a shell 239 and a kernel 241. The shell 239 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system; it manages the interaction between the user and the operating system: waiting for user input, interpreting that input for the operating system, and handling the operating system's varied output.

The kernel 241 consists of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating-system kernel typically runs processes and provides inter-process communication, CPU time-slice management, interrupts, memory management, I/O management, and so on.

The application programs 243 include programs related to controlling blind-zone image acquisition, for example a program managing the acquisition of driving images by the on-board cameras, a program computing the pose relationship between multiple frames of driving images, and a program filtering out some or all of the driving images to obtain the blind-zone image of the vehicle's blind zone. The application programs 243 also exist on the system of the software deploying server 249. In one embodiment, when the programs 247 related to blind-zone image acquisition need to be executed, the computer system 212 may download the application programs 243 from the software deploying server 249. For example, the application programs 243 select multiple frames of driving images to fill the blind-zone image of the current frame, avoiding the incomplete filling that results from using a single driving image. Moreover, when selecting filling pixels, the choice depends on the target distance between the obstacle and the smart vehicle: if the target distance is less than or equal to the preset distance threshold, the most recently captured pixels are chosen as filling pixels, which effectively avoids occlusion caused by frame latency, and since the chosen filling image is the most recently captured frame, the resulting blind-zone image is also the clearest. If the target distance is greater than the preset threshold, the pixels from the historical frame covering more of the target points are chosen as filling pixels, which effectively avoids the image misalignment and brightness inconsistency caused by multi-image stitching and improves the display of the under-vehicle blind zone. Selecting different filling pixels in the two cases greatly raises the probability of obtaining a complete, clear, and accurate blind-zone image and safeguards the driving safety of the smart vehicle. Furthermore, filling the blind zone in this way lets the driver observe the vehicle's position from multiple angles, prevents tire wear and chassis scraping, helps the driver observe the vehicle's surroundings, tires, and underbody, assists parking, and minimizes vehicle-damage accidents, improving the driving experience and safety.

A sensor 253 is associated with the computer system 212 and is used to detect the environment around the computer system 212. For example, the sensor 253 can detect animals, cars, obstacles, crosswalks, and so on, and can further detect the environment around such objects, for instance the environment around an animal: other animals appearing nearby, weather conditions, the brightness of the surroundings, and the like. Optionally, if the computer system 212 is located on a vehicle with the blind-zone image acquisition system, the sensor may be a camera, an infrared sensor, a chemical detector, and so on.

It can be understood that the structure of the blind-zone image acquisition apparatus in Fig. 2 is only one exemplary implementation of the embodiments of this application; the structure of the blind-zone image acquisition apparatus applied to a smart vehicle in the embodiments includes but is not limited to the above.
In addition, with reference to the blind-zone image acquisition method provided in this application, refer to Fig. 3, which is a schematic architecture diagram of a blind-zone image acquisition system according to an embodiment of this application. As shown in Fig. 3, the architecture includes a data loading module (corresponding to the sensor system 204 of Fig. 1), an image processing module, and a dynamic frame-selection module, and may further include a stitching and optimization module and a display module. The image processing module and the dynamic frame-selection module both correspond to the computer vision system shown in Fig. 1.

The data loading module, corresponding to the sensor system 204 of Fig. 1, may be used to obtain the distance information between obstacles and the intelligent terminal, to obtain the current frame of driving image and one or more historical frames, and to obtain the terminal's speed information, among other data. For example, it may be responsible for acquiring the simultaneous multi-channel fisheye camera image data of the surround-view system on the terminal, obtaining speed data from the vehicle bus, and, where other sensors are present, acquiring data from other sensors that provide terminal state or speed information, such as an integrated positioning system or an added binocular camera. Understandably, when the terminal apparatus is a vehicle, the role of the components in this sensor module corresponds to the description of the sensor system 204 in the smart-vehicle architecture of Fig. 1, which is not repeated here.

The image processing module corresponds to the computer vision system of Fig. 1 or the computer system 212 of Fig. 1. It may be used to determine the pose relationship between the current frame and one or more historical frames. For example, it computes over multiple front-view or rear-view fisheye images (chosen according to the terminal's direction of travel) or an added binocular camera, detecting and matching features across different images to compute the relative pose relationship between different frames.

The dynamic frame-selection module corresponds to the computer vision system of Fig. 1 or the computer system 212 of Fig. 1. It may be used to filter, from one or more historical frames, the driving images that can fill the blind-zone image; the specific filtering method is described in the method embodiments below and not elaborated here. For example, given multiple frames with known relative poses and the current blind-zone position, it first arranges the frames in time order, then obtains, through fast edge checks, the dynamic stitching range of each ordered frame with respect to the current blind zone, thereby selecting the target frames with which to stitch the blind-zone image.
The stitching and optimization module is used to stitch, optimize, and output the blind-zone image corresponding to the intelligent terminal's current frame according to the filtered target frames. For example, it computes the stitching range of each frame for the current moment, completes the generation and stitching of the blind-zone image, and normalizes the brightness across the multiple frames, generating a complete blind-zone image of fairly uniform brightness.
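The brightness normalization mentioned for the stitching and optimization module might, in its simplest form, shift each stitched patch toward a common mean grey level. The toy sketch below works on flat lists of grey values; real code would operate on image arrays, and the patent does not specify the exact normalization formula, so this is only an assumed illustration.

```python
def normalize_brightness(patches):
    """patches: list of lists of grey values, one list per source frame.
    Shifts every patch so its mean matches the global mean, giving the
    stitched blind-zone image a uniform brightness."""
    all_px = [v for patch in patches for v in patch]
    target = sum(all_px) / len(all_px)
    out = []
    for patch in patches:
        mean = sum(patch) / len(patch)
        out.append([v + (target - mean) for v in patch])
    return out
```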
The display module may be responsible for displaying the image of the blind zone at the bottom of the intelligent terminal.

Taking a smart vehicle as an example, based on the smart-vehicle architecture of Fig. 1, the structure of the blind-zone image acquisition apparatus of Fig. 2, and the system architecture of Fig. 3, and with reference to the blind-zone image acquisition method provided in this application, the technical problems raised in this application are analyzed and solved in detail below.
Refer to Fig. 4, which is a schematic flowchart of a blind-zone image acquisition method according to an embodiment of this application. The method may be applied in the smart vehicle of Fig. 1 above, where the smart vehicle 001 may be used to support and perform steps S301 to S306 of the method flow shown in Fig. 4, described below with reference to Fig. 4. The method may include the following steps S301 to S306.

Step S301: obtain the current frame of driving image and multiple frames of historical driving images.

Specifically, the blind-zone image acquisition apparatus obtains the current frame of driving image and multiple historical frames, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surrounding environment in the direction of travel of the target terminal (for example, a vehicle). Understandably, the multiple frames of historical driving images may also be called historical-frame driving images. The frames may be obtained through cameras on the terminal. For example, when the target terminal is a smart vehicle moving forward, the captured image of the environment ahead is the target vehicle's driving image; when the target vehicle is reversing, the captured image of the environment behind is its driving image. As another example, a bird's-eye view obtained while the target vehicle is moving may serve as its driving image.

It should be noted that an environment image is an image containing the surrounding driving environment, including the road surface and nearby obstacles, while the terminal is moving. It should also be noted that the target vehicle, smart vehicle, and similar terms mentioned in the embodiments of this application correspond to the smart vehicle shown in Fig. 1.

Optionally, the current frame and the multiple historical frames are obtained through a monocular camera, or through a multi-lens camera. For example, the driving images in the direction of travel of the target vehicle may be obtained through the smart vehicle's dash camera, or through its binocular camera. Being able to capture driving images with either a monocular or a multi-lens camera greatly relaxes the hardware requirements of the blind-zone image acquisition method, lowers the barrier to adoption, and improves driving safety. It should be noted that the camera may be installed facing the direction of travel of the target terminal, as driving requirements dictate.
Step S302: determine the pose relationship between the current frame of driving image and one or more frames of historical driving images.

Specifically, the blind-zone image acquisition apparatus determines the pose relationship between the current frame and one or more historical frames, where the current frame is the driving image captured at the current time, the historical frames are driving images captured before the current time, and a driving image is an image of the surroundings in the direction of travel of the target terminal. It should be noted that this pose relationship means the pose relationship between the current frame and each of the historical frames, or equivalently between the target terminal at the time the current frame was captured and the target terminal at the time each historical frame was captured. From this relationship, the position of the blind zone corresponding to the current frame can be located within the earlier historical frames. The pose relationship includes rotation information (for example, a rotation angle) and translation information (for example, a translation distance).

Optionally, image feature detection is performed on the current frame and the historical frames to obtain the image feature points between them, the feature points being points co-visible in the current frame and the historical frames; the pose relationship between the current frame and the historical frames is then determined from the feature points and the size value. It should be noted that the size value indicates the magnitude of a unit pixel length of the driving image during depth estimation. A feature point may be a co-visible point, that is, a pixel corresponding to a target object present in both the current frame and a historical frame, and generally has scale invariance and rotation invariance. For example, roadside signs, traffic lights, trees, or road barriers are typically chosen as target objects, and their corresponding pixels are selected as the image's feature points. Note also that in sharp- or wide-turn driving scenarios, to determine the pose relationship between the current frame A and a historical frame B captured long before the current frame, a first pose relationship between the current frame A and a historical frame C may be determined, then a second pose relationship between the historical frames C and B, and finally the pose relationship between A and B from the first and second pose relationships, where the historical frame C was captured between the current time and the time at which frame B was captured.
请参考附图5，图5是本申请实施例提供的一种确定多帧行驶图像之间位姿关系的流程示意图。如图5所示：第一步：图像特征检测；该图像特征检测是对行驶图像中的特征检测，识别图像中包括的路面特征，环境特征，障碍物特征等等，以方便选取上述特征的特征点。第二步：图像特征点匹配；图像特征点匹配是将不同行驶图像间相同的特征点匹配对应起来，以便确定行驶图像间的位姿关系。第三步：深度估计；深度估计是对物体到车载摄像头之间的距离进行估计，以便确定拍摄后行驶图像上相机坐标系下每单位像素点对应世界坐标系下的尺寸大小。第四步：像素点和世界点匹配位姿估计；即将每个行驶图像中像素点与世界坐标系中的点匹配起来，以确定两个不同拍摄时间拍摄的相同的世界点（世界坐标系中的点）之间位姿关系，进而确定其对应像素点之间的位姿关系。第五步：重投影误差优化；对比多帧行驶图像之间的位姿关系，优化结果，减小误差，最后输出位姿关系。这种通过多帧行驶图像之间的特征点确定多帧行驶图像之间位姿关系的方式，可以提高位姿关系确定的准确率，从而提高盲区图像的获取效率。
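上述第二步的图像特征点匹配，可以用如下纯Python代码示意（仅为一个极简的互最近邻匹配草图，假设每个特征点已具有浮点描述子；函数名与数据均为示例性假设，实际系统通常使用ORB、SIFT等特征及其匹配器，并非本申请限定的实现）：

```python
# 互最近邻特征点匹配示意：描述子为浮点向量，距离采用欧氏距离

def l2(a, b):
    # 两个描述子之间的欧氏距离
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_features(desc_cur, desc_hist):
    """互最近邻匹配：返回 (当前帧特征索引, 历史帧特征索引) 列表。"""
    matches = []
    for i, d in enumerate(desc_cur):
        j = min(range(len(desc_hist)), key=lambda k: l2(d, desc_hist[k]))
        # 反向校验：历史帧特征 j 的最近邻也必须是 i，剔除单向误匹配
        i_back = min(range(len(desc_cur)), key=lambda k: l2(desc_hist[j], desc_cur[k]))
        if i_back == i:
            matches.append((i, j))
    return matches

cur = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]      # 当前帧特征描述子（示例数据）
hist = [[1.1, 0.9], [0.1, -0.1], [4.9, 5.2]]    # 历史帧特征描述子（示例数据）
print(match_features(cur, hist))
```

匹配得到的对应关系即可送入后续的深度估计与位姿估计步骤，再经第五步的重投影误差优化减小误差。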
例如：在确定位姿关系之前，还可以利用事先标定好的相机外参及相机模组厂商提供的相机内参，对终端上的环视系统行驶图像进行畸变矫正，获取畸变矫正后的图像，并对中心区域进行剪裁，获取图像质量较优的部分。其中，上述相机内参包括上述摄像头所拍摄的图像的坐标系与摄像头坐标系之间的关系，上述相机外参包括上述摄像头坐标系与上述世界坐标系之间的关系。在畸变矫正及剪裁后的图像序列（图像序列包括当前帧行驶图像与一帧或多帧历史行驶图像）上进行图像特征检测，针对检测到的图像特征，在图像序列上进行匹配，通过匹配得到一系列行驶图像上同一特征点的对应关系。利用三角化和多点投影关系算法，获得图片序列间相对位姿关系，并利用相机投影模型进行优化。其中，当目标终端为车辆，获取行驶图像的车载摄像头为单目车载摄像头时，位姿关系中位姿的尺度由汽车CAN总线提供的速度信号进行初始化。这种通过多帧行驶图像确定位姿关系的方式会更加准确，提高了确定行驶图像中盲区位置的准确度。
可选的，若通过单目摄像头，获取所述当前帧行驶图像与所述多帧历史行驶图像；则确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前，所述方法还包括：获取所述目标终端的速度信息；基于所述速度信息，获取相邻帧行驶图像之间的距离信息；基于所述距离信息，确定所述单目摄像头的深度估计的尺寸值，所述尺寸值用于指示在深度估计时单位长度的大小；所述确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系，包括：基于所述尺寸值，确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系。在通过单目摄像头获取多帧行驶图像时，为了确定相机的深度估计尺寸值，还需要获取目标终端的速度信号，如，可以通过传感器系统或者车辆的控制器局域网络（controller area network，CAN）获取目标车辆的速度信号，其中，速度信号只用来初始化深度估计尺寸值，因此，只需要开始时刻的数据，而不需要每时刻的高精度速度数据。这种方式不会过分依赖速度信息的获取，在堵车、速度缓慢前行等行驶场景下，依旧可以实现盲区图像的获取，最大程度地保证了终端在速度失效的应用场景下，依旧可以保持盲区图像的正常获取，以保持车辆的安全行驶，降低行车的安全隐患。
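作为示意，利用起始时刻的速度信号初始化单目深度估计尺寸值（尺度）的过程可以简化为如下Python草图（函数名与数值均为示例性假设，并非本申请限定的实现）：

```python
# 示意：尺度 = 相邻帧之间的真实位移 / 视觉里程计给出的无尺度位移
# 只需开始时刻一次速度信号，不依赖逐帧高精度速度

def init_scale(speed_mps, frame_dt_s, unscaled_translation_norm):
    """speed_mps: 起始时刻CAN速度(米/秒)；frame_dt_s: 帧间隔(秒)；
    unscaled_translation_norm: 无尺度位姿估计给出的相邻帧平移模长。"""
    real_dist = speed_mps * frame_dt_s          # 相邻帧之间的真实距离
    return real_dist / unscaled_translation_norm

# 例如：车速 10 m/s、帧间隔 0.1 s、无尺度平移模长 0.5
scale = init_scale(10.0, 0.1, 0.5)
print(scale)  # 2.0
```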
步骤S303:确定障碍物与目标终端之间的目标距离。
具体地，盲区图像获取装置确定障碍物与所述目标终端之间的目标距离。可以理解的是，障碍物可以是目标终端前进方向上的障碍物。以智能车辆为例，请参考附图6，图6是本申请实施例提供的一种车辆在行驶时的场景示意图。如图6所示：目标车辆A和目标车辆B行驶在马路上，其中，目标车辆A行驶在目标车辆B的前方，左右两边的道路上均有树木。对于目标车辆A，其前进方向上并没有障碍物（静态或动态），即可以理解为障碍物与目标车辆之间的目标距离为最大值，且大于预设阈值。对于目标车辆B，由于目标车辆A在目标车辆B的前方行驶，则障碍物与目标车辆B之间的目标距离为目标车辆A和目标车辆B之间的距离，目标车辆A为目标车辆B前进方向上的障碍物（动态）。
需要说明的是,步骤S303与步骤S301和步骤S302的执行顺序本申请实施例不做具体的限定。例如:本申请实施例还可以先确定障碍物与目标终端之间的目标距离,再确定多帧行驶图像之间的位姿关系。
步骤S304:在目标距离小于或等于预设距离阈值时,根据位姿关系和当前帧行驶图像对应盲区区域的盲区位置信息,从多帧历史行驶图像中获取盲区区域内多个目标点中每个目标点的填充像素点。
具体地，盲区图像获取装置在确定障碍物与目标终端之间的目标距离小于或等于预设距离阈值时，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点。其中，所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点。盲区区域包括多个目标点，针对盲区区域内所述多个目标点中的每个目标点，从所述多帧历史行驶图像中选择拍摄时间最近的像素点作为所述目标点的所述填充像素点。需要说明的是，请参考附图7，图7是本申请实施例提供的一种目标点和填充像素点示意图。如图7所示，盲区区域包括多个目标点，每个目标点可以对应行驶图像中的某一个像素点，获取到行驶图像中目标点对应的像素点作为该目标点的填充像素点后，输出所有目标点对应的填充像素点可以获得盲区图像。
还需要说明的是,盲区图像获取装置在确定在距目标终端的预设范围内存在障碍物时,从所述多帧历史行驶图像中选择所述目标点对应的拍摄时间最近的像素点为盲区区域的填充像素点,以获取盲区区域的盲区图像。其中,针对所述多个目标点中的每个目标点,从所述多帧历史行驶图像中选择拍摄时间最近的像素点作为所述目标点的所述填充像素点。即,可以理解为当多帧历史行驶图像均包括相同像素点的像素值时,选择拍摄时间距所述当前时间最近的一帧历史行驶图像提供所述像素点的像素值。这种选择拍摄时间最近的像素点对盲区区域内的目标点进行填充,可以有效的避免帧数时延导致的障碍物遮挡等问题,而且获得最终的盲区区域图像的画质比较清晰,提高了终端底部盲区的显示效果。另外,预设距离阈值的大小可以基于目标终端盲区大小确定,在障碍物与目标终端的距离超过盲区的大小时,则障碍物对盲区的遮挡概率较小;在障碍物与目标终端的距离小于盲区的大小时,则障碍物对盲区的遮挡概率较大,因此,在实际驾驶过程中,预设距离阈值的大小可以基于目标终端盲区大小确定。
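上述"选择拍摄时间最近的像素点填充"的策略可以用如下Python代码示意（数据结构为示例性假设：将每帧历史行驶图像抽象为"目标点到像素值"的映射，帧列表按拍摄时间由近到远排序，并非本申请限定的实现）：

```python
# 示意：目标距离小于或等于阈值时，针对盲区内每个目标点，
# 从按拍摄时间由近到远排序的历史帧中选取最近可提供该点的像素

def fill_blind_zone(target_points, frames):
    """frames: 按拍摄时间由近到远排序的列表，每帧为 {目标点: 像素值} 字典。
    返回 {目标点: (帧序号, 像素值)}；无法填充的目标点不在结果中。"""
    filled = {}
    for p in target_points:
        for idx, frame in enumerate(frames):
            if p in frame:          # 最先命中的即拍摄时间最近的一帧
                filled[p] = (idx, frame[p])
                break
    return filled

frames = [
    {(0, 0): 10},                 # 第1帧（最近）：只覆盖A区
    {(0, 0): 11, (0, 1): 12},     # 第2帧：覆盖A+B区
    {(0, 1): 13, (1, 1): 14},     # 第3帧：覆盖B+C区
]
print(fill_blind_zone([(0, 0), (0, 1), (1, 1)], frames))
# {(0, 0): (0, 10), (0, 1): (1, 12), (1, 1): (2, 14)}
```

可见重叠区域始终由更近的一帧提供像素，更早的帧只补充此前未覆盖的剩余目标点。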
可选的，所述多帧历史行驶图像为m帧历史行驶图像；所述在所述目标距离小于或等于预设距离阈值时，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中的每个目标点的填充像素点，包括：在所述目标距离小于或等于预设距离阈值时，将所述m帧历史行驶图像按照拍摄时间排序，m为大于1的整数；根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从第x帧历史行驶图像中获取目标像素点集合，作为所述多个目标点对应的填充像素点集合，其中，所述目标像素点集合包括所述多个目标点对应的填充像素点，所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的，且拍摄时间距当前时间最近的一帧历史行驶图像，x=1、2、3…m。可以理解的是：目标像素点集合包括第x帧历史行驶图像中与目标点对应的填充像素点。将所述多帧历史行驶图像按照拍摄时间排序；根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从排序后的所述多帧历史行驶图像中确定包含与目标点对应像素点的、拍摄时间最近的一帧行驶图像；获取该行驶图像中的像素点集合，填充至所述盲区区域。需要说明的是，本申请实施例是将填充像素点的像素值填充至目标点处以获得盲区区域的图像。
可选的,所述方法还包括:在所述目标像素点集合为部分目标点对应的填充像素点集合时,依次从第x+1帧历史行驶图像中获取目标像素点集合,作为剩余部分目标点对应的填充像素点集合,直至所述盲区区域中所述多个目标点填充完毕。若将上述行驶图像中的目标像素点集合无法完全填充当前帧行驶图像对应的盲区区域,即,所述盲区区域中存在未填充的区域,则需要获取所述盲区区域中当前未填充的区域,并将所述未填充的区域对应的位置信息更新为盲区位置信息;根据所述位姿关系和所述更新后的盲区位置信息,从所述第x+1帧历史行驶图像中确定目标像素点集合,填充至所述盲区区域中未填充的区域;依次遍历所述第x+1帧历史行驶图像后的所述多帧历史行驶图像,获取目标像素点集合,直至所述盲区区域填充完毕,获得所述盲区图像。
例如：针对当前帧盲区，历史帧为按时间顺序排序的，历史帧第x帧和第x+1帧均能提供部分盲区区域，其中，有重叠区域及各自单独覆盖区域。针对盲区开始以左右边缘端点计算当前历史帧可以提供的区域（目标像素点集合）。判断第x帧可以提供的最大盲区范围后，更新盲区区域范围并继续下一帧图像寻找可以提供的区域（目标像素点集合），直至盲区全部填充。这种选择拍摄时间最近的图像对盲区区域进行填充，可以有效的避免帧数时延导致的障碍物遮挡等问题，而且获得最终的盲区区域图像的画质比较清晰，提高了车底盲区的显示效果。
又例如：请参考附图8，图8是本申请实施例提供的一种盲区填充示意图。如图8所示：第一帧历史行驶图像、第二帧历史行驶图像和第三帧历史行驶图像，以及第四帧历史行驶图像，上述多帧历史行驶图像均可以提供部分或全部盲区图像，即，可以提供部分或全部盲区区域内目标点对应的填充像素点，如：分别为A、B、C、D四个部分。其中，第一帧至第四帧历史行驶图像按照拍摄时间由晚到早排序，即，拍摄第一帧历史行驶图像的时间要晚于第二帧历史行驶图像。在多帧历史行驶图像均存在有与同一个目标点对应的像素点时，按照拍摄时间，选取距当前时间最近的一帧历史行驶图像提供该目标点对应的像素点为填充像素点，获取该填充像素点的像素值。如图8所示：在拼接过程中，按照拍摄时间，盲区区域的图像由第一帧历史行驶图像提供的A区域内的填充像素点、第二帧历史行驶图像提供的B区域除去与A区域重合区域的剩余部分区域内的填充像素点和第三帧历史行驶图像提供的C区域除去与B区域重合区域的剩余部分区域内的填充像素点填充完成。同时，由于前三帧历史行驶图像可以提供全部盲区区域的图像，且拍摄时间距当前时间更近，因此，第四帧历史行驶图像即使可以提供全部盲区区域的像素点，也需要舍弃。
步骤S305:在目标距离大于预设距离阈值时,根据位姿关系和当前帧行驶图像对应盲区区域的盲区位置信息,从多帧历史行驶图像中获取盲区区域内多个目标点中每个目标点的填充像素点。
具体地,盲区图像获取装置在确定障碍物与目标终端之间的目标距离大于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域的填充像素点;其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。即,在第x+1帧历史行驶图像中对应所述目标点的数量大于第x帧历史行驶图像中对应所述目标点的数量,且第x+1帧历史行驶图像中包括的与所述第x帧历史行驶图像中相同目标点对应的像素点时,针对所述相同目标点,从所述多帧历史行驶图像中选择第x+1帧历史行驶图像相同目标点对应的像素点集合作为所述相同目标点对应的所述填充像素点集合。另外,针对所述第x+1帧历史行驶图像与所述第x帧历史行驶图像不相同的目标点,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中获取不相同的目标点对应的填充像素点填充至盲区区域。
例如：针对当前帧盲区，历史帧为按时间顺序排序的，历史帧x和x+1均能提供部分盲区区域，其中有重叠区域（第二像素点集合对应的目标点区域）及各自单独覆盖区域（第一像素点集合对应的目标点区域）。针对盲区开始以左右边缘端点计算当前历史帧可以提供的区域。当后序历史帧覆盖范围包括前序历史帧区域部分及全部范围时，修改前序历史帧区域范围为后续历史帧不包括的范围。最终，实现全部区域的终端底部盲区填充。当后序帧提供的范围足够大时，该方案可能从后序某一帧中填充全部终端底部盲区。其中，所述第一像素点集合为拍摄时间最近的与所述目标点对应的像素点，所述第二像素点集合包括所述第x+1帧历史行驶图像中包括的与所述第x帧历史行驶图像中相同目标点对应的像素点。
需要说明的是,盲区图像获取装置在确定在距目标终端的预设范围内存在障碍物时,从所述多帧历史行驶图像的每相邻两帧历史行驶图像中选择所述目标点对应的像素点多的一帧行驶图像提供相同目标点的填充像素点,以获取盲区区域的盲区图像。其中,当多帧历史行驶图像包括相同目标点的填充像素点时,选择多帧历史行驶图像中包含填充像素点数量多的一帧行驶图像提供所述像素点。这种选择较大面积的图像对盲区区域进行填充,可以尽可能的减少盲区图像的拼接次数和拼接数量,可以有效的避免了多图拼接引起的图像错位、亮度不一致的问题,提高了车底盲区的展示效果。
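目标距离大于阈值时"对相同目标点选择覆盖目标点数量多的一帧"的策略可以示意如下（Python草图，数据结构为示例性假设，帧同样抽象为"目标点到像素值"的映射，并非本申请限定的实现）：

```python
# 示意：相邻两帧对相同目标点择"覆盖目标点数量多"的一帧提供像素，
# 以减少拼接次数、避免图像错位与亮度不一致

def merge_two_frames(frame_x, frame_x1):
    """frame_x / frame_x1: {目标点: 像素值}。返回合并后的 {目标点: 像素值}。"""
    shared = set(frame_x) & set(frame_x1)
    # 覆盖目标点多的一帧为重叠区域提供像素
    winner = frame_x1 if len(frame_x1) > len(frame_x) else frame_x
    merged = {}
    for p in set(frame_x) | set(frame_x1):
        if p in shared:
            merged[p] = winner[p]
        else:
            # 各自单独覆盖的区域仍由各自提供
            merged[p] = frame_x.get(p, frame_x1.get(p))
    return merged

fx = {(0, 0): 1, (0, 1): 2}                     # 第x帧：覆盖2个目标点
fx1 = {(0, 1): 20, (1, 0): 30, (1, 1): 40}      # 第x+1帧：覆盖3个目标点，面积更大
print(merge_two_frames(fx, fx1))
```

在该示例假设下，重叠目标点 (0, 1) 由覆盖更大的第x+1帧提供，第x帧仅补充其单独覆盖的 (0, 0)。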
例如:请参考附图9,图9是本申请实施例提供的另一种盲区填充示意图。如图9所示:第一帧历史行驶图像、第二帧历史行驶图像和第三帧历史行驶图像,以及第四帧历史行驶图像,上述多帧历史行驶图像均可以提供部分或全部盲区图像,即,可以提供部分或全部盲区区域内目标点对应的填充像素点,如:分别为A、B、C、D四个部分。其中,第一帧至第四帧历史行驶图像按照拍摄时间由晚到早排序,即,拍摄第一帧历史行驶图像的时间要晚于第二帧历史行驶图像。在多帧历史行驶图像均可提供同一个目标点的填充像素点时,按照所能够提供盲区的面积,选取提供盲区的面积更大的一帧历史行驶图像提供该目标点的填充像素点。
如图9所示：在拼接过程中，按照拍摄时间，开始从第一帧历史行驶图像遍历，确认第一帧历史行驶图像中A区域对应的像素点为盲区区域的目标点对应的填充像素点，同时获取A区域的位置信息和面积信息。此时，盲区区域的图像由第一帧历史行驶图像提供的A区域内的填充像素点填充。
在确定第一帧历史行驶图像只是部分盲区区域后,开始遍历第一帧历史行驶图像后的第二帧历史行驶图像;确认第二帧历史行驶图像中B区域对应的像素点为盲区区域的目标点对应的填充像素点,同时获取B区域的位置信息和面积信息,对比A区域与B区域的位置和面积,确定B区域所能够提供盲区区域的面积更大,则进一步判断A区域与B区域是否可以提供相同像素点的像素值(即,A区域与B区域是否存在重叠区域,或者第二帧历史行驶图像是否包含第二像素点集合),发现B区域可以提供与A区域相同像素点的像素值,即B区域与A区域之间存在的重叠区域为A区域。因此,此时,盲区区域的图像由第二帧历史行驶图像提供的第一像素点集合(B区域单独所提供的填充像素点)和第二像素点集合(A区域内的填充像素点)。即,填充B区域单独所提供的盲区区域、舍弃A区域所提供的盲区区域,将B区域所提供的盲区区域图像填充至当前帧行驶图像对应的盲区区域中。
确定第一帧历史行驶图像和第二帧历史行驶图像当前提供的盲区区域只是部分盲区区域后,开始遍历第二帧历史行驶图像后的第三帧历史行驶图像;确认第三帧历史行驶图像中C区域图像对应的像素点为盲区区域的部分目标点对应的填充像素点,同时获取C区域的位置信息和面积信息,对比C区域与B区域的位置和面积,确定第三帧历史行驶图像中C区域所能够提供盲区区域的面积更大,发现C区域可以提供与B区域相同像素点的像素值,且同时C区域与B区域分别也存在单独的盲区区域。因此,此时,盲区区域的图像由第二帧历史行驶图像提供的第一像素点集合(B区域相较于C区域单独所提供的填充像素点)和第三帧历史行驶图像提供的第一像素点集合(C区域相较于B区域单独所提供的填充像素点)与第二像素点集合(C区域可以提供与B区域相同像素点)。即,对比C区域与B区域后,保留B区域所单独提供的部分盲区区域、舍弃B区域中与C区域重合的重叠区域,将与C区域拼接。将B区域除去与C区域重合的部分区域所提供的盲区区域图像与C区域所提供的盲区区域图像填充至当前帧行驶图像对应的盲区区域中。
确定第二帧历史行驶图像和第三帧历史行驶图像所提供的盲区区域图像可以完全覆盖全部盲区区域，则盲区区域的图像由第二帧历史行驶图像提供的B区域除去与C区域重合区域的剩余部分区域和第三帧历史行驶图像提供的C区域拼接完成。另外，由于前三帧历史行驶图像中的盲区图像可以完全填充全部盲区区域，所以第四帧历史行驶图像即使可以提供全部盲区的图像，也需要舍弃。
还需要说明的是,第x帧历史行驶图像中对应所述目标点的数量相当于第x帧历史行驶图像中像素点与盲区区域内目标点的对应数量;也可以理解为,第x帧历史行驶图像中所对应盲区区域的面积大小。同理,可以确定第x+1帧历史行驶图像中对应所述目标点的数量。另外,第x+1帧历史行驶图像的第一像素点集合,相当于拍摄时间距离当前时间最近的、与目标点对应的像素点集合,相当于第一次获取到的像素点集合;第x+1帧历史行驶图像的第二像素点集合,相当于与第x帧历史行驶图像中相同目标点对应的像素点集合,相当于与第x帧历史行驶图像相同的像素点集合,也相当于,第x+1帧历史行驶图像中与第x帧历史行驶图像对应的相同的盲区区域。
例如：将所述多帧历史行驶图像按照拍摄时间排序；根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从排序后的所述多帧历史行驶图像中确定包含所述目标像素点的行驶图像；确定该历史行驶图像中所述目标像素点集合对应的第一盲区区域；若该第一盲区区域为部分盲区区域，则遍历该历史行驶图像排序后的历史行驶图像，确定排序后的历史行驶图像包含的第二盲区区域；对比所述第一盲区区域和所述第二盲区区域的位置和覆盖面积大小；若所述第一盲区区域与所述第二盲区区域之间存在重叠区域，且所述第一盲区区域小于所述第二盲区区域，则所述重叠区域由所述第二盲区区域对应的像素点填充；若所述第一盲区区域与所述第二盲区区域之间存在重叠区域，且所述第一盲区区域大于或等于所述第二盲区区域，则所述重叠区域由第一盲区区域对应的像素点填充，直至填充完毕。另外，若所述第一盲区区域与所述第二盲区区域之间不存在重叠区域，则将所述第一盲区区域与所述第二盲区区域依次填充至所述盲区区域。
可选的,在所述第x+1帧历史行驶图像中对应所述目标点的数量小于或等于所述第x帧历史行驶图像中对应所述目标点的数量,且第x+1帧历史行驶图像中包括的与所述第x帧历史行驶图像中相同目标点对应的像素点时,针对所述相同目标点,从所述多帧历史行驶图像中选择第x帧历史行驶图像相同目标点对应的像素点集合作为所述相同目标点对应的所述填充像素点集合。在后序历史行驶图像中,若没有单独的盲区填充区域时,即,第x+1帧历史行驶图像中对应所述目标点的数量小于或等于所述第x帧历史行驶图像中对应所述目标点的数量,则以第x帧历史行驶图像中的第一像素点集合作为所述目标点的所述填充像素点。这样既可以保证最终获得的盲区图像的清晰度,又可以减少拼接次数。
可选的，目标点分为第一类目标点和第二类目标点，所述目标像素点集合为所述多帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合，其中，第一类目标点和第二类目标点分别位于所述盲区区域不同的边界上，且在所述盲区区域中，所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。这种只计算如左右端边缘两个位置是否在历史帧行驶图像像素范围之内，以条状填充盲区位置的方式，大大减少了在实际应用中的计算量，提高了盲区图像获取效率，减少了延时时间。需要说明的是，本申请实施例中所涉及的盲区图像获取方法可以先确定第一类目标点和第二类目标点分别对应的填充像素点，以减少在实际应用中的计算量。
可以理解的是，盲区的形状大小与目标终端的形状和大小有关。在行驶过程中，以目标终端的前进方向为正方向，以过盲区的中线为轴，第一类目标点和第二类目标点可以成左右对称分布，分别对应所述盲区边界的左边界和右边界。例如：请参考附图10，图10是本申请实施例提供的多种第一类目标点和第二类目标点的分布示意图。例如：如图10中(1)所示：盲区区域的形状为矩形，第一类目标点和第二类目标点可以分别对应矩形的左右两边上的点。其中，在确认行驶图像的目标像素点集合时，是以所述第一类目标点和对应的所述第二类目标点为端点的线段上的所有像素点集合，若仅有左端点（第一类目标点），没有与之对应的右端点（第二类目标点）的情况下（如图10中(1)所示），则可以认为该行驶图像不包括目标像素点集合，也不包括目标像素点。因此，确定该行驶图像不包括未填充盲区的目标像素点集合。又例如：如图10中(2)所示：盲区区域的形状为三角形，过该三角形的中点，以终端的前进方向为正方向，第一类目标点和第二类目标点分别对应三角形区域的左右两边的边界。其中，在确认行驶图像的目标像素点集合时，是以所述第一类目标点和对应的所述第二类目标点为端点的线段上的所有像素点集合，因此，如图10中(2)所示，该行驶图像对应的三角形盲区区域包括目标像素点集合和非目标像素点集合。其中，由于非目标像素点集合对应的仅有左端点（第一类目标点），没有与之对应的右端点（第二类目标点），因此可以认为该非目标像素点集合所对应的像素点不满足需求。上述只获取第一类目标点和对应第二类目标点同时存在的像素点集合，使得最后拼接填充的盲区图像是由条状区域组合成的，降低了盲区在多帧行驶图像中对应像素点集合是不规则形状下的拼接难度，同时在第一类目标点和对应第二类目标点同时存在下，获取与第一类目标点和对应第二类目标点所在相同线段上的像素点，大大减少了在实际应用中的计算量，提高了盲区图像获取效率，减少了延时时间。
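上述只校验左右端点的条状填充判断，可以用如下Python代码示意（坐标、图像尺寸均为示例性假设，并非本申请限定的实现）：

```python
# 示意：只校验第一类(左)与第二类(右)端点是否同时落在历史帧像素范围内，
# 成立则整条线段按条状填充，避免逐点计算

def strip_fillable(left_uv, right_uv, width, height):
    """左右端点 (u, v) 均在图像范围内时，该条带可由此帧填充。"""
    def inside(uv):
        u, v = uv
        return 0 <= u < width and 0 <= v < height
    return inside(left_uv) and inside(right_uv)

def collect_strips(rows, width, height):
    """rows: [(左端点uv, 右端点uv), ...]，返回可填充的条带行号列表。"""
    return [i for i, (l, r) in enumerate(rows) if strip_fillable(l, r, width, height)]

rows = [((5, 10), (60, 10)),    # 左右端点均在图内 → 可填充
        ((-3, 20), (60, 20)),   # 仅右端点在图内 → 不可填充
        ((5, 90), (60, 95))]    # v 超出图像高度 → 不可填充
print(collect_strips(rows, 64, 48))  # [0]
```

与图10中(1)的情形一致：只有左端点而没有对应右端点的条带被整体判为不可填充。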
需要说明的是，由于终端形状的不同，盲区区域的形状也不相同，因此本申请对盲区的形状并不作具体的限定。例如：请参考附图11，图11是本申请实施例提供的多种盲区区域示意图。如图11所示，当盲区区域的形状为圆形、椭圆形或不规则形状时，以目标终端的前进方向为正方向，尽可能地将该盲区区域平分为周长相等的左右两部分，一部分的边界为第一类目标点对应的边界，另一部分的边界为第二类目标点对应的边界。其中，该边界的划分，本申请实施例并不作具体的限定。
步骤S306:基于多个目标点对应的填充像素点输出盲区图像。
具体地,盲区图像获取装置基于多个目标点对应的填充像素点输出盲区图像。例如:盲区图像获取装置可以分别输出当前帧行驶图像和盲区图像,还可以将盲区图像填充至当前帧行驶图像的对应位置,输出一张显示盲区区域的行驶图像。
实施本申请实施例，选择多帧行驶图像填充当前帧行驶图像的盲区区域的图像，避免了使用单张行驶图像盲区区域图像填充不完全的现象。同时在选择填充像素点时，根据障碍物与智能车辆之间的目标距离；若目标距离小于或等于预设距离阈值，选择拍摄时间最近的像素点为填充像素点，有效避免了帧数时延导致的障碍物遮挡等问题，同时由于选择的填充图像为拍摄时间最近的一帧行驶图像，其最终获得的盲区区域的清晰度也最好。若目标距离大于预设距离阈值，选择对应目标点多的历史帧行驶图像中的像素点为填充像素点，有效的避免了多图拼接引起的图像错位、亮度不一致的问题，提高了车底盲区的显示效果。两种不同情况下的填充策略，大大提高了获得完整、清晰、准确的盲区图像的概率，保障了智能终端的行驶安全。而且，采用上述方法对盲区进行填充，可以让驾驶员多方位观察汽车所在位置，防止轮胎磨损、底盘剐蹭受损，也便于用户更好地观察车周、轮胎及车底信息，辅助泊车，最大限度避免终端损失事故发生，提升驾驶体验和驾驶安全性。
可以理解的是,本申请提供的盲区图像获取方法还可以由电子装置,盲区图像获取装置等执行。电子装置是指能够被抽象为计算机系统,支持处理图像功能的电子装置,也可称为图像处理装置。盲区图像获取装置可以是该电子装置的整机,也可以是该电子装置中的部分器件,例如:支持图像处理功能、支持盲区图像获取功能相关的芯片,如系统芯片或图像芯片。其中,系统芯片也称为片上系统,或称为SoC芯片。具体地,盲区图像获取装置可以是诸如智能车辆中车载电脑这样的相关装置,也可以是能够被设置在智能终端的计算机系统或图像处理系统中的系统芯片或图像获取芯片。
另外,本申请实施例只是示例性的以智能车辆中的盲区图像获取装置为例说明该盲区图像获取方法,本申请实施例对终端装置的种类,在此不做具体的限定。例如:终端装置还可以为探测车、探索机器人等。
基于图1提供的智能车辆架构，图2提供的盲区图像获取装置的结构，以及图3提供的盲区图像获取系统架构，结合本申请中提供的盲区图像获取方法，以智能车辆为例，请参见图12，图12是本申请实施例提供的一种目标车辆在一种应用场景下的场景示意图，可以对应参考上述图4所示的方法实施例的相关描述。
应用场景：如图12所示，当目标车辆A行驶在道路上时，会经常出现堵车的状况，在堵车时，由于车辆与车辆之间的距离小于预设距离，而且速度较低，经常会出现急刹车的情况。如果按照现有的盲区图像获取方式，很容易会出现盲区缺失或者盲区填充错误的情况。因此，根据本申请实施例的盲区图像获取方法，该目标车辆A此时可以选择多帧历史行驶图像中目标点对应的拍摄时间最近的像素点填充盲区区域，获得较为清晰的盲区图像。请参考附图13，图13是本申请实施例提供的一种应用场景下盲区图像获取方法流程示意图。如图13所示：该盲区图像获取过程可以实施如下步骤：
确定与前方障碍物之间的距离,根据该距离选择拍摄时间最近的像素点填充盲区图像。
第一步:输入多帧行驶图像及其与当前帧行驶图像间位姿关系。
第二步:确定障碍物与目标车辆之间的目标距离。
第三步：从当前帧开始倒序遍历图像，利用图像间位姿关系，以当前底盘前端为前边缘，计算底盘左右侧边缘是否存在于图像中。
第四步:找到最先出现底盘区域的帧,并遍历底盘左右边缘计算该帧可以提供的最长底盘尺寸。
第五步：当任意边缘不在该帧时，记录该帧所存在的底盘区域，判断是否超过底盘区域。以当前底盘位置为前边缘按倒序继续遍历下一帧。
第六步:输出每张图像对应的底盘区域。
针对第三步:
按时序范围,图像编号记为第r帧图像(frameId=r,r=0、1、2、…),例如:当前帧为frameId=0,历史行驶图像为frameId=1,2,…。根据标定关系,在当前帧中,车底盲区范围为(x,y,w,h)。设其他帧行驶图像frameId=r与当前帧行驶图像frameId=0的相对位置关系为Mr[4*4],其中,
$$M_r=\begin{bmatrix}Rr_{3\times 3}&Tr_{3\times 1}\\0_{1\times 3}&1\end{bmatrix}$$
盲区位置在其他帧图像上的像素位置计算公式如下:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}\left(Rr_{3\times 3}\left(R_{3\times 3}\begin{bmatrix}\pm x_{blind}\\y_{blind}\\z_{ground}\end{bmatrix}+T_{3\times 1}\right)+Tr_{3\times 1}\right)$$
其中，f_x、f_y、u_0、v_0为相机内参，其中，相机内参指的是相机坐标系到像素坐标系的映射关系，可以由相机厂商得到；R_{3×3}、T_{3×1}为相机外参，其中，相机外参指的是相机坐标系到世界坐标系的映射关系，由事先标定获得；Rr_{3×3}、Tr_{3×1}为第r帧历史行驶图像与当前帧行驶图像间的位姿关系；±x_blind、y_blind为事先由标定及车身尺寸得到的行驶图像中左右盲区边缘位置和上盲区边缘位置，z_ground为地面高度，默认为零。
u、v表示行驶图像中与盲区区域内目标点对应的像素点的位置，图像高度（image height）、图像宽度（image width）表示行驶图像的尺寸，行驶图像的单位像素长度为每像素米（meter per pixel）。当+u、v和-u、v均在行驶图像中，即u、v满足如下条件时：
v≥0&&v<ImageHeight
u≥0&&u<ImageWidth
认为此时该第frameId=r帧历史行驶图像可以提供至少(+u,v)到(-u,v)像素米长的盲区范围。继续按上述公式与u,v间隔MeterPerPixel计算,记录frameId=r帧历史行驶图像能提供的最大长度,得到frameId=r帧历史行驶图像对应frameId=0的盲区范围。当第frameId=r帧历史行驶图像提供的盲区范围小于全部盲区时,按上述方式再次计算frameId=r+1帧历史行驶图像,直至盲区范围全部得到填充。
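第三步中盲区边缘点向第frameId=r帧像素位置的投影及范围判断，可以用如下纯Python代码示意（矩阵运算以简化函数实现，外参、内参与相对位姿的取值均为示例性假设，并非本申请限定的实现）：

```python
# 示意：将当前帧盲区边缘点经外参(R, T)与第r帧相对位姿(Rr, Tr)投影到
# 第r帧像素坐标 (u, v)，并按 u>=0&&u<ImageWidth、v>=0&&v<ImageHeight 判断是否在图内

def mat_vec(M, v):
    # 3x3矩阵乘3维向量
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def project_to_frame(p_world, R, T, Rr, Tr, fx, fy, u0, v0):
    """p_world: 盲区边缘点 (±x_blind, y_blind, z_ground)。"""
    p_cam = vec_add(mat_vec(R, p_world), T)   # 世界坐标 → 当前相机坐标
    p_r = vec_add(mat_vec(Rr, p_cam), Tr)     # 当前帧 → 第r帧相机坐标
    X, Y, Z = p_r
    u = fx * X / Z + u0                       # 内参投影到像素坐标
    v = fy * Y / Z + v0
    return u, v

def in_image(u, v, width, height):
    # 对应文中条件：v>=0&&v<ImageHeight 且 u>=0&&u<ImageWidth
    return 0 <= v < height and 0 <= u < width

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# 假设外参为恒等、相对位姿仅沿光轴平移5个单位（示例数值）
u, v = project_to_frame([1.0, -1.0, 0.0], I3, [0, 0, 0], I3, [0, 0, 5.0],
                        fx=400, fy=400, u0=320, v0=240)
print(u, v, in_image(u, v, 640, 480))
```

对左右盲区边缘点（±x_blind）分别调用上述投影，再按每像素米间隔递增，即可得到第r帧能提供的最大盲区长度。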
针对第四步:可分为两种情况,其中:
情况一:一帧历史行驶图像可以提供全部盲区区域的目标点对应的填充像素点。
请参考附图14,图14是本申请实施例提供的一种一帧历史行驶图像可以提供盲区图像示意图。如图14所示,按遍历顺序计算历史行驶图像上当前图像盲区,前面若干张历史行驶图像不提供盲区,第一张出现盲区图像的历史行驶图像即可提供全部图像盲区。此时按照时间顺序,前若干张历史行驶图像计算的像素位置u,v不满足上述公式。最先出现的第x帧历史行驶图像,不仅像素位置在图像范围之内,而且长度上按每像素米(meter per pixel)的增加,全部满足要求。此时,算法选择全部盲区图像由最先出现的第x帧历史行驶图像提供。该图像为能提供当前盲区范围且距离当前图像最近的图像,图像分辨率最好。而且,由于时间上也是距离当前状态最近的,所以可能被障碍物遮挡的概率也是最小的。
情况二:多帧历史行驶图像提供全部盲区区域的目标点对应的填充像素点。
请参考附图15，图15是本申请实施例提供的一种多帧历史行驶图像可以提供盲区图像示意图。如图15所示，按遍历顺序计算历史行驶图像上当前帧行驶图像盲区，前面若干张历史行驶图像不提供盲区，第一张出现盲区图像的历史行驶图像不存在全部盲区范围，第二帧历史行驶图像不仅存在新增的全部的盲区范围，而且包含部分甚至全部的第一张历史行驶图像存在的盲区范围。此时按照时间顺序，前若干张历史行驶图像计算的像素位置u,v不满足上述公式。最先出现的第x帧历史行驶图像，盲区初始位置计算出的像素位置在图像范围之内，当长度上按每像素米的增加，增加到某长度，图像不再存在盲区。图像前一采样时刻第x+1帧历史行驶图像，在第x帧历史行驶图像不存在的盲区区域的位置计算出的像素位置在图像范围之内，按每像素米增加，可以提供全部盲区范围。同时，对于第x帧历史行驶图像可以提供的盲区范围，第x+1帧历史行驶图像存在一部分甚至全部。此时，对于全部盲区范围，算法选择由第x帧历史行驶图像提供它存在的最大盲区范围，第x+1帧历史行驶图像提供由第x帧历史行驶图像不存在的外边缘至全部盲区范围。此时，盲区由第x帧历史行驶图像和第x+1帧历史行驶图像拼接组成，整体图像分辨率最佳。
上述详细阐述了本申请实施例的方法,下面提供了本申请实施例的相关装置。
请参见图16,图16是本申请实施例提供的一种盲区图像获取装置的结构示意图,该盲区图像获取装置10可以包括第一确定单元101、第二确定单元102、第一获取单元103和输出单元104,还可以包括:其中,各个单元的详细描述如下。
第一确定单元101,用于确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
第二确定单元102,用于确定障碍物与所述目标终端之间的目标距离;
第一获取单元103,用于在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
输出单元104,用于基于所述多个目标点对应的填充像素点输出所述盲区图像。
在一种可能实现的方式中,所述多帧历史行驶图像为m帧历史行驶图像,所述第一获取单元103,具体用于:在所述目标距离小于或等于预设距离阈值时,将所述m帧历史行驶图像按照拍摄时间排序,m为大于1的整数;根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从第x帧历史行驶图像中获取目标像素点集合,作为所述多个目标点对应的填充像素点集合,其中,所述目标像素点集合包括所述多个目标点对应的填充像素点,所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的,且拍摄时间距当前时间最近的一帧历史行驶图像,x=1、2、3…m。
在一种可能实现的方式中,所述第一获取单元103,还具体用于:在所述目标像素点集合为部分目标点对应的填充像素点集合时,依次从第x+1帧历史行驶图像中获取目标像素点集合,作为剩余部分目标点对应的填充像素点集合,直至所述盲区区域中所述多个目标点填充完毕。
在一种可能实现的方式中，所述目标点分为第一类目标点和第二类目标点，所述目标像素点集合为所述第x帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合，其中，所述第一类目标点和所述第二类目标点分别为所述盲区区域不同的边界上的目标点，且在所述盲区区域中，所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。
在一种可能实现的方式中,所述第一获取单元103,还用于:在所述目标距离大于所述预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域的填充像素点;其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
在一种可能实现的方式中,所述装置还包括:第二获取单元105,用于确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前,通过多目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像。
在一种可能实现的方式中,所述装置还包括:第三获取单元106,用于确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系之前,通过单目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像;获取所述目标终端的速度信息;基于所述速度信息,获取相邻帧行驶图像之间的距离信息;基于所述距离信息,确定所述单目摄像头的深度估计的尺寸值,所述尺寸值用于指示在深度估计时单位长度的大小;所述第一确定单元101,具体用于:基于所述尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
在一种可能实现的方式中,所述第一确定单元101,具体用于:对所述当前帧行驶图像与所述多帧历史行驶图像进行图像特征检测,获取所述当前帧行驶图像与所述多帧历史行驶图像之间的图像特征点,所述图像特征点为所述当前帧行驶图像与所述多帧历史行驶图像之间的共视点;基于所述图像特征点和尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
需要说明的是，上述多个单元的划分仅是一种根据功能进行的逻辑划分，不作为对盲区图像获取装置10具体的结构的限定。在具体实现中，其中部分功能模块可能被细分为更多细小的功能模块，部分功能模块也可能组合成一个功能模块，但无论这些功能模块是进行了细分还是组合，装置10在对盲区图像获取的过程中所执行的大致流程是相同的。通常，每个单元都对应有各自的程序代码（或者说程序指令），这些单元各自对应的程序代码在相关硬件装置上运行时，使得该单元执行相应的流程从而实现相应功能。另外，每个单元的功能还可以通过相关的硬件实现。例如：第一确定单元101、第二确定单元102和第一获取单元103等的相关功能可以通过模拟电路或者数字电路实现，其中，数字电路可以为数字信号处理器（Digital Signal Processor，DSP），或者现场可编程门阵列（field programmable gate array，FPGA）；输出单元104的相关功能可以通过带有通信接口或收发功能的图形处理器（graphics processing unit，GPU）或处理器CPU等装置实现。
还需要说明的是,本申请实施例中所描述的盲区图像获取装置10中各功能单元的功能可参见上述图4中所述的盲区图像获取方法实施例中步骤S301-步骤S306的相关描述,例如:第一确定单元101可对应参见上述图4中所述的方法实施例中步骤S301-步骤S302、 第二确定单元102可对应参见上述图4中所述的方法实施例中步骤S303、第一获取单元103可对应参见上述图4中所述的方法实施例中步骤S304-步骤S305,输出单元104可对应参见上述图4中所述的方法实施例中步骤S306,此处不再赘述。
如图17所示,图17是本申请实施例提供的另一种盲区图像获取装置的结构示意图,该装置20包括至少一个处理器201,至少一个存储器202、至少一个通信接口203。此外,该装置还可以包括天线等通用部件,在此不再详述。
处理器201可以是通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制以上方案程序执行的集成电路。
通信接口203,用于与其他装置或通信网络通信,如以太网,无线接入网(RAN),核心网,无线局域网(Wireless Local Area Networks,WLAN)等。
存储器202可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储装置,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储装置,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储装置、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过总线与处理器相连接。存储器也可以和处理器集成在一起。
其中,所述存储器202用于存储执行以上方案的应用程序代码,并由处理器201来控制执行。所述处理器201用于执行所述存储器202中存储的应用程序代码。
存储器202存储的代码可执行以上图4提供的盲区图像获取方法，比如确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系；确定障碍物与所述目标终端之间的目标距离；若所述目标距离小于或等于预设距离阈值，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域的填充像素点；基于所述多个目标点对应的填充像素点输出所述盲区图像。
需要说明的是,本申请实施例中所描述的盲区图像获取装置20中各功能单元的功能可参见上述图4中所述的方法实施例中的步骤S301-步骤S306相关描述,此处不再赘述。
本申请实施例还提供了一种装置,上述装置包括处理器,上述处理器用于:
确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
确定障碍物与所述目标终端之间的目标距离;
在所述目标距离小于或等于预设距离阈值时，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点；其中，所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点；
基于所述多个目标点对应的填充像素点输出所述盲区图像。
在一种可能实现的方式中,所述多帧历史行驶图像为m帧历史行驶图像;所述处理器,具体用于:在所述目标距离小于或等于预设距离阈值时,将所述m帧历史行驶图像按照拍摄时间排序,m为大于1的整数;根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从第x帧历史行驶图像中获取目标像素点集合,作为所述多个目标点对应的填充像素点集合,其中,所述目标像素点集合包括所述多个目标点对应的填充像素点,所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的,且拍摄时间距当前时间最近的一帧历史行驶图像,x=1、2、3…m。
在一种可能实现的方式中,所述处理器还用于:在所述目标像素点集合为部分目标点对应的填充像素点集合时,依次从第x+1帧历史行驶图像中获取目标像素点集合,作为剩余部分目标点对应的填充像素点集合,直至所述盲区区域中所述多个目标点填充完毕。
在一种可能实现的方式中,所述目标点分为第一类目标点和第二类目标点,所述目标像素点集合为所述第x帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合,其中,所述第一类目标点和所述第二类目标点分别为所述盲区区域不同的边界上的目标点,且在所述盲区区域中,所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。
在一种可能实现的方式中,所述处理器还用于:在所述目标距离大于所述预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内所述多个目标点中每个目标点的填充像素点;其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
在一种可能实现的方式中,所述处理器还用于:确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前,通过多目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像。
在一种可能实现的方式中,所述处理器还用于:确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系之前,通过单目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像;获取所述目标终端的速度信息;基于所述速度信息,获取相邻帧行驶图像之间的距离信息;基于所述距离信息,确定所述单目摄像头的深度估计的尺寸值,所述尺寸值用于指示在深度估计时单位长度的大小;所述处理器具体用于:基于所述尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
在一种可能实现的方式中,所述处理器具体用于:对所述当前帧行驶图像与所述多帧历史行驶图像进行图像特征检测,获取所述当前帧行驶图像与所述多帧历史行驶图像之间的图像特征点,所述图像特征点为所述当前帧行驶图像与所述多帧历史行驶图像之间的共视点;基于所述图像特征点和尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
需要说明的是，本申请实施例所提及的装置可以是一个芯片、一个控制装置或者一个处理模块等用于对终端周边的环境图像进行图像处理获得盲区图像，本申请对装置的具体形式不做具体的限定。
还需要说明的是,本申请实施例中所描述的装置中相关的功能可参见上述图4中上述的方法实施例中的步骤S301-步骤S306以及其他实施例的相关描述,此处不再赘述。
本申请实施例还提供了一种电子装置,可应用于上述应用场景,该电子装置中包括处理器和存储器,其中,上述存储器用于存储图像处理程序代码,上述处理器用于调用上述图像处理程序代码来执行:
确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
确定障碍物与所述目标终端之间的目标距离;
在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
基于所述多个目标点对应的填充像素点输出所述盲区图像。
在一种可能实现的方式中,所述多帧历史行驶图像为m帧历史行驶图像,所述处理器具体用于调用所述盲区图像获取程序代码来执行:在所述目标距离小于或等于预设距离阈值时,将所述m帧历史行驶图像按照拍摄时间排序,m为大于1的整数;根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从第x帧历史行驶图像中获取目标像素点集合,作为所述多个目标点对应的填充像素点集合,其中,所述目标像素点集合包括所述多个目标点对应的填充像素点,所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的,且拍摄时间距当前时间最近的一帧历史行驶图像,x=1、2、3…m。
在一种可能实现的方式中，所述处理器还用于调用所述盲区图像获取程序代码来执行：在所述目标像素点集合为部分目标点对应的填充像素点集合时，依次从第x+1帧历史行驶图像中获取目标像素点集合，作为剩余部分目标点对应的填充像素点集合，直至所述盲区区域中所述多个目标点填充完毕。
在一种可能实现的方式中,所述目标点分为第一类目标点和第二类目标点,所述目标像素点集合为所述第x帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合,其中,所述第一类目标点和所述第二类目标点分别为所述盲区区域不同的边界上的目标点,且在所述盲区区域中,所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。
在一种可能实现的方式中，所述处理器还用于调用所述盲区图像获取程序代码来执行：在所述目标距离大于所述预设距离阈值时，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域内所述多个目标点中每个目标点的填充像素点；其中，在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时，所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
在一种可能实现的方式中,所述处理器还用于调用所述盲区图像获取程序代码来执行:确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系,通过多目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像。
在一种可能实现的方式中,所述处理器还用于调用所述盲区图像获取程序代码来执行:确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系之前,通过单目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像;获取所述目标终端的速度信息;基于所述速度信息,获取相邻帧行驶图像之间的距离信息;基于所述距离信息,确定所述单目摄像头的深度估计的尺寸值,所述尺寸值用于指示在深度估计时单位长度的大小;所述处理器具体用于调用所述盲区图像获取程序代码来执行:基于所述尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
在一种可能实现的方式中,所述处理器具体用于调用所述盲区图像获取程序代码来执行:对所述当前帧行驶图像与所述多帧历史行驶图像进行图像特征检测,获取所述当前帧行驶图像与所述多帧历史行驶图像之间的图像特征点,所述图像特征点为所述当前帧行驶图像与所述多帧历史行驶图像之间的共视点;基于所述图像特征点和尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
需要说明的是,本申请实施例所提及的电子装置可以是云端的一个服务器、一个处理装置等,也可以是与智能终端存在通信连接的一个盲区图像获取装置,本申请对此不做具体的限定。
还需要说明的是,本申请实施例中所描述的电子装置中相关的功能可参见上述图4中上述的方法实施例中的步骤S301-步骤S306以及其他实施例的相关描述,此处不再赘述。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可能可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
还需要说明的是,本申请的说明书和权利要求书及所述附图中的术语“第一”和“第二”、等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或装置没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或装置固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的 实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序和/或计算机。通过图示,在计算装置上运行的应用和计算装置都可以是部件。一个或多个部件可驻留在进程和/或执行线程中,部件可位于一个计算机上和/或分布在2个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自与本地系统、分布式系统和/或网络间的另一部件交互的二个部件的数据,例如,通过信号与其它系统交互的互联网)的信号通过本地和/或远程进程来通信。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机装置(可以为个人计算机、服务端或者网络装置等,具体可以是计算机装置中的处理器)执行本申请各个实施例上述方法的全部或部分步骤。其中,而前述的存储介质可包括:U盘、移动硬盘、磁碟、光盘、只读存储器(Read-Only Memory,缩写:ROM)或者随机存取存储器(Random Access Memory,缩写:RAM)等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (24)

  1. 一种盲区图像获取方法,其特征在于,包括:
    确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
    确定障碍物与所述目标终端之间的目标距离;
    在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
    基于所述多个目标点对应的填充像素点输出所述盲区图像。
  2. 根据权利要求1所述方法,其特征在于,所述多帧历史行驶图像为m帧历史行驶图像,所述在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中的每个目标点的填充像素点,包括:
    在所述目标距离小于或等于预设距离阈值时,将所述m帧历史行驶图像按照拍摄时间排序,m为大于1的整数;
    根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从第x帧历史行驶图像中获取目标像素点集合,作为所述多个目标点对应的填充像素点集合,其中,所述目标像素点集合包括所述多个目标点对应的填充像素点,所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的,且拍摄时间距当前时间最近的一帧历史行驶图像,x=1、2、3…m。
  3. 根据权利要求2所述方法,其特征在于,所述方法还包括:
    在所述目标像素点集合为部分目标点对应的填充像素点集合时,依次从第x+1帧历史行驶图像中获取目标像素点集合,作为剩余部分目标点对应的填充像素点集合,直至所述盲区区域中所述多个目标点填充完毕。
  4. 根据权利要求2或3所述方法,其特征在于,所述目标点分为第一类目标点和第二类目标点,所述目标像素点集合为所述第x帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合,其中,所述第一类目标点和所述第二类目标点分别为所述盲区区域不同的边界上的目标点,且在所述盲区区域中,所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。
  5. 根据权利要求2所述方法,其特征在于,所述方法还包括:
    在所述目标距离大于所述预设距离阈值时，根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息，从所述多帧历史行驶图像中获取所述盲区区域内所述多个目标点中每个目标点的填充像素点；
    其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
  6. 根据权利要求1-5所述任意一项方法,其特征在于,所述确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前,还包括:
    通过多目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像。
  7. 根据权利要求1-5所述任意一项方法,其特征在于,所述确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前,还包括:
    通过单目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像;
    获取所述目标终端的速度信息;
    基于所述速度信息,获取相邻帧行驶图像之间的距离信息;
    基于所述距离信息,确定所述单目摄像头的深度估计的尺寸值,所述尺寸值用于指示在深度估计时单位长度的大小;
    所述确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,包括:
    基于所述尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
  8. 根据权利要求1-7所述任意一项方法,其特征在于,所述确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,包括:
    对所述当前帧行驶图像与所述多帧历史行驶图像进行图像特征检测,获取所述当前帧行驶图像与所述多帧历史行驶图像之间的图像特征点,所述图像特征点为所述当前帧行驶图像与所述多帧历史行驶图像之间的共视点;
    基于所述图像特征点和尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
  9. 一种盲区图像获取装置,其特征在于,包括:
    第一确定单元,用于确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
    第二确定单元,用于确定障碍物与所述目标终端之间的目标距离;
    第一获取单元,用于在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
    输出单元,用于基于所述多个目标点对应的填充像素点输出所述盲区图像。
  10. 根据权利要求9所述装置,其特征在于,所述多帧历史行驶图像为m帧历史行驶图像,所述第一获取单元,具体用于:
    在所述目标距离小于或等于预设距离阈值时,将所述m帧历史行驶图像按照拍摄时间排序,m为大于1的整数;
    根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从第x帧历史行驶图像中获取目标像素点集合,作为所述多个目标点对应的填充像素点集合,其中,所述目标像素点集合包括所述多个目标点对应的填充像素点,所述第x帧历史行驶图像为所述m帧历史行驶图像中包括所述目标像素点集合的,且拍摄时间距当前时间最近的一帧历史行驶图像,x=1、2、3…m。
  11. 根据权利要求10所述装置,其特征在于,所述第一获取单元,还具体用于:
    在所述目标像素点集合为部分目标点对应的填充像素点集合时,依次从第x+1帧历史行驶图像中获取目标像素点集合,作为剩余部分目标点对应的填充像素点集合,直至所述盲区区域中所述多个目标点填充完毕。
  12. 根据权利要求10或11所述装置,其特征在于,所述目标点分为第一类目标点和第二类目标点,所述目标像素点集合为所述第x帧历史行驶图像中以所述第一类目标点对应的填充像素点和所述第二类目标点对应的填充像素点为端点的线段上的所有像素点集合,其中,所述第一类目标点和所述第二类目标点分别为所述盲区区域不同的边界上的目标点,且在所述盲区区域中,所述第一类目标点和所述第二类目标点一一对应并呈轴对称分布。
  13. 根据权利要求12所述装置,其特征在于,所述第一获取单元,还用于:
    在所述目标距离大于所述预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域的填充像素点;
    其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
  14. 根据权利要求9-13所述任意一项装置,其特征在于,所述装置还包括:
    第二获取单元,用于确定当前帧行驶图像与一帧或多帧历史行驶图像之间的位姿关系之前,通过多目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像。
  15. 根据权利要求9-13所述任意一项装置,其特征在于,所述装置还包括:
    第三获取单元,用于确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系之前, 通过单目摄像头,获取所述当前帧行驶图像与所述多帧历史行驶图像;
    获取所述目标终端的速度信息;
    基于所述速度信息,获取相邻帧行驶图像之间的距离信息;
    基于所述距离信息,确定所述单目摄像头的深度估计的尺寸值,所述尺寸值用于指示在深度估计时单位长度的大小;
    所述第一确定单元,具体用于:
    基于所述尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
  16. 根据权利要求9-15所述任意一项装置，其特征在于，所述第一确定单元，具体用于：
    对所述当前帧行驶图像与所述多帧历史行驶图像进行图像特征检测,获取所述当前帧行驶图像与所述多帧历史行驶图像之间的图像特征点,所述图像特征点为所述当前帧行驶图像与所述多帧历史行驶图像之间的共视点;
    基于所述图像特征点和尺寸值,确定所述当前帧行驶图像与所述多帧历史行驶图像之间的位姿关系。
  17. 一种装置,其特征在于,所述装置包括处理器,所述处理器用于:
    确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系,其中,所述当前帧行驶图像为当前时间拍摄的行驶图像,所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像,所述行驶图像为目标终端前进方向上的周边环境图像;
    确定障碍物与所述目标终端之间的目标距离;
    在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
    基于所述多个目标点对应的填充像素点输出所述盲区图像。
  18. 根据权利要求17所述装置,其特征在于,所述处理器具体用于:
    在所述目标距离大于所述预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域的填充像素点;
    其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
  19. 一种电子装置,其特征在于,包括处理器和存储器,其中,所述存储器用于存储盲区图像获取程序代码,所述处理器用于调用所述盲区图像获取程序代码来执行:
    确定当前帧行驶图像与多帧历史行驶图像之间的位姿关系，其中，所述当前帧行驶图像为当前时间拍摄的行驶图像，所述多帧历史行驶图像为在所述当前时间之前拍摄的行驶图像，所述行驶图像为目标终端前进方向上的周边环境图像；
    确定障碍物与所述目标终端之间的目标距离;
    在所述目标距离小于或等于预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域内多个目标点中每个目标点的填充像素点;其中,所述填充像素点为所述多帧历史行驶图像中所述目标点对应的拍摄时间最近的像素点;
    基于所述多个目标点对应的填充像素点输出所述盲区图像。
  20. 根据权利要求19所述电子装置,其特征在于,所述处理器用于调用所述盲区图像获取程序代码来具体执行:
    在所述目标距离大于所述预设距离阈值时,根据所述位姿关系和所述当前帧行驶图像对应盲区区域的盲区位置信息,从所述多帧历史行驶图像中获取所述盲区区域的填充像素点;
    其中,在所述第x+1帧历史行驶图像中包括与所述第x帧历史行驶图像中相同目标点的像素点时,所述相同目标点对应的填充像素点为所述第x+1帧历史行驶图像与所述第x帧历史行驶图像中对应所述目标点的数量多的一帧历史行驶图像中所述相同目标点对应的像素点。
  21. 一种智能车辆，其特征在于，包括处理器、存储器以及通信接口，其中，所述存储器用于存储盲区图像获取程序代码，所述处理器用于调用所述盲区图像获取程序代码来执行权利要求1-8任一项所述的方法。
  22. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,该计算机程序被处理器执行时实现上述权利要求1-8任意一项所述的方法。
  23. 一种计算机程序,其特征在于,所述计算机程序包括指令,当所述计算机程序被计算机执行时,使得所述计算机执行如权利要求1-8中任意一项所述的方法。
  24. 一种盲区图像获取系统,其特征在于,包括处理器和存储器,所述处理器用于执行上述权利要求1-8任意一项所述的方法。
PCT/CN2021/083514 2021-03-29 2021-03-29 一种盲区图像获取方法及相关终端装置 WO2022204854A1 (zh)
