WO2019085929A1 - Image processing method and device, and safe driving method - Google Patents

Image processing method and device, and safe driving method

Info

Publication number
WO2019085929A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
vehicle
pedestrian
target
resolution
Prior art date
Application number
PCT/CN2018/112902
Other languages
English (en)
French (fr)
Inventor
何敏政
Original Assignee
比亚迪股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 比亚迪股份有限公司 filed Critical 比亚迪股份有限公司
Publication of WO2019085929A1 publication Critical patent/WO2019085929A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems

Definitions

  • the present disclosure relates to the field of vehicle control technologies, and in particular, to an image processing method and apparatus thereof, and a safe driving method.
  • In the related art, a vehicle's Advanced Driver Assistance System (ADAS) collects environmental data outside the vehicle visually and then performs object recognition on the collected data. Specifically, an image of the vehicle's surroundings is acquired with a visible-light camera, and object recognition is then performed on the captured image.
  • the present disclosure provides an image processing method, an apparatus thereof, and a safe driving method.
  • By performing image fusion on the images captured by a dual camera device to form a target image, the quality of the target image can be ensured; recognition performed on the target image is therefore more accurate, which in turn safeguards driving safety. This addresses a technical problem of the prior art: a visible-light camera can capture high-quality images only in well-lit scenes, while images captured in weak light are blurry and noisy, so subsequent object recognition suffers from high false-recognition and missed-recognition rates, directly affecting driving safety.
  • An embodiment of the first aspect of the present disclosure provides an image processing method, including: acquiring a first image and a second image based on a dual imaging device on a vehicle; performing image fusion on the first image and the second image to form a target image; and identifying an object from the target image, the object being a pedestrian object or a vehicle object.
  • the image processing method of the embodiment of the present disclosure acquires a first image and a second image based on a dual imaging device on a vehicle; performs image fusion on the first image and the second image to form a target image; and identifies an object from the target image.
  • In this embodiment, the images captured by the dual cameras are fused to form a target image, which ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
  • An embodiment of the second aspect of the present disclosure provides a safe driving method, including: acquiring a target image, the target image being obtained by the image processing method of the first aspect; identifying an object from the target image; and generating and executing a safe driving strategy based on the identified object and the current running information of the vehicle.
  • In the safe driving method of the embodiment of the present disclosure, a target image is acquired, an object is identified from it, and a safe driving strategy is then generated and executed according to the recognized object and the vehicle's current running information, effectively safeguarding driving safety.
  • An embodiment of the third aspect of the present disclosure provides an image processing apparatus, including:
  • an image acquisition module configured to acquire a first image and a second image based on a dual camera device on a vehicle;
  • a forming module configured to perform image fusion on the first image and the second image to form a target image; and
  • a first identification module configured to identify an object from the target image.
  • An image processing apparatus acquires a first image and a second image based on a dual imaging device on a vehicle; performs image fusion on the first image and the second image to form a target image; and identifies an object from the target image.
  • In this embodiment, the images captured by the dual cameras are fused to form a target image, which ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
  • FIG. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of a second image processing method according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a calibration template for the dual imaging device in an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a first safe driving method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of a second safe driving method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart of a third safe driving method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of the position of each object in the target image in an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a safe driving system in an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present disclosure.
  • YUV is a color encoding method in which "Y" represents luminance (Luminance or Luma), that is, the grayscale value, while "U" and "V" represent chrominance (Chrominance or Chroma), which describe the color and saturation of the image and specify the color of a pixel.
  • the YUV color space is characterized by its luminance signal Y and chrominance signals U, V being separated. If there is only a Y signal component and no U, V components, the image is a black and white grayscale image.
  • Multi-scale decomposition (MSD) scales an input image to multiple scales, generating reduced images at multiple resolutions, which are then analyzed and processed at each scale.
  • MSD separates the high- and low-frequency detail contained in an image into the scaled images at the different scales, so that information in different frequency bands of the image can be analyzed and processed.
  • FIG. 1 is a schematic flowchart diagram of a first image processing method according to an embodiment of the present disclosure.
  • the image processing method includes the following steps:
  • Step 101 Acquire a first image and a second image based on a dual camera device on the vehicle.
  • In the embodiment of the present disclosure, two imaging devices of different resolutions can be installed side by side on the vehicle; they may be denoted imaging device A and imaging device B. Devices A and B cover the same field of view, so that the images they capture can be processed jointly.
  • the imaging device A may be a visible light imaging device
  • the imaging device B may be an infrared imaging device.
  • The resolution of the infrared imaging device may be lower than that of the visible-light imaging device, and its description of scene detail is comparatively weak; the visible-light imaging device can therefore be a high-definition device.
  • This ensures that when light is sufficient the image captured by the visible-light device describes scene detail clearly, while in weak light the image captured by the infrared device describes scene detail clearly.
  • the first image can be acquired based on one of the cameras of the dual camera on the vehicle, and the second image can be acquired based on the other camera.
  • the first image may be acquired based on the imaging device A
  • the second image may be acquired based on the imaging device B
  • the first image may be acquired based on the imaging device B
  • the second image may be acquired based on the imaging device A.
  • In the embodiments of the present disclosure, as an example, the first image is captured by imaging device A and the second image is captured by imaging device B.
  • Step 102 Perform image fusion on the first image and the second image to form a target image.
  • In the embodiment of the present disclosure, because the resolutions of the two imaging devices may differ, the resolutions of the first image and the second image need to be adjusted to be the same before image fusion is performed on them.
  • the resolution of another image can be adjusted based on the resolution of one of the two images such that the resolutions of the two images are the same.
  • Alternatively, a compromise resolution may be obtained as a target resolution according to the resolution of the first image and the resolution of the second image, and both images are then adjusted to the target resolution.
  • For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and both the first image and the second image are adjusted to 1280*960.
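  • As an illustrative aside (not part of the original disclosure), this resolution adjustment could be sketched in Python with OpenCV as follows; the 1280*960 target and the interpolation choices are assumptions:

```python
import cv2

def match_resolutions(img_a, img_b, target=(1280, 960)):
    """Resize both camera frames to a common target resolution.

    target is (width, height), the dsize argument of cv2.resize.
    INTER_AREA suits shrinking; INTER_LINEAR suits enlarging.
    """
    def resize(img):
        h, w = img.shape[:2]
        shrink = w > target[0] or h > target[1]
        interp = cv2.INTER_AREA if shrink else cv2.INTER_LINEAR
        return cv2.resize(img, target, interpolation=interp)

    return resize(img_a), resize(img_b)
```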
  • It should be noted that although the dual cameras are installed side by side and cover the same field of view, the two images still cannot coincide completely after resolution adjustment because the cameras occupy different positions. Therefore, in the embodiment of the present disclosure, the two images with the same resolution can be registered, and the registered first and second images then fused to obtain the target image.
  • one image may be selected as a reference image, and then another image is geometrically transformed according to the reference image, and the processed image is fused with the reference image, so that the two images completely coincide.
  • Step 103 Identify an object from the target image; wherein the object is a pedestrian object or a vehicle object.
  • In general, the input image of the imaging device is a color image whose color space is YUV.
  • To reduce the amount of computation during image fusion, only the Y component of the color space may be used in the fusion calculation; the UV components do not participate.
  • When identifying objects in the target image, the Y component may be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component amounts to grayscale processing of the image, which reduces the amount of computation and improves the real-time performance of the system.
  • After the Y component is extracted from the target image, that is, after grayscale processing, a grayscale image of the target image is obtained.
  • To increase the contrast of the grayscale image and the variation of its gray tones and make it clearer, histogram equalization can be performed on the grayscale image to obtain an equalized grayscale image.
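  • A minimal sketch of this grayscale-plus-equalization step, assuming a BGR input frame as OpenCV delivers it (illustrative, not the disclosure's implementation):

```python
import cv2

def equalized_gray(target_image_bgr):
    # Taking the Y channel of a YUV conversion is equivalent to the
    # grayscale processing described above.
    y = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
    # Histogram equalization stretches the gray-tone distribution,
    # raising contrast before pedestrian/vehicle recognition.
    return cv2.equalizeHist(y)
```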
  • Because the recognition rules for pedestrians and for vehicles differ, the equalized grayscale image may be split to form at least two equalized grayscale images: pedestrian recognition is performed on one to obtain pedestrian objects and their identification information, while vehicle recognition is performed on the other to obtain vehicle objects and their identification information. The two recognition pipelines run simultaneously to improve the real-time performance of the system.
  • the identification information may include: coordinate information, width information, height information, distance information, and the like.
  • For pedestrian recognition, a Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image to multiple levels; Histogram of Oriented Gradients (HOG) features are then extracted from the scaled image at each level, and classification based on the HOG features identifies the pedestrian objects.
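  • For illustration only, OpenCV's built-in HOG pedestrian detector can stand in for the pyramid-plus-HOG classifier described above (the detector scans its own internal image pyramid, and its SVM weights are OpenCV's stock people detector, not the disclosure's classifier):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(equalized_gray_img):
    # detectMultiScale scales the image by 1.05 per pyramid level,
    # extracts HOG features per window and classifies each window.
    rects, _weights = hog.detectMultiScale(
        equalized_gray_img, winStride=(8, 8), padding=(8, 8), scale=1.05)
    # (x, y, w, h) per detection; centre, width and height correspond to
    # the coordinate, width and height fields of the identification info.
    return [(x + w // 2, y + h // 2, w, h) for (x, y, w, h) in rects]
```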
  • For vehicle recognition, a Laplacian pyramid decomposition algorithm can likewise be used to scale the equalized grayscale image to multiple levels; Haar features are then extracted from the scaled image at each level, and classification based on the Haar features identifies the vehicle objects.
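  • A corresponding vehicle-recognition sketch using a Haar cascade; "cars.xml" is a hypothetical cascade file (OpenCV ships no vehicle cascade), supplied here only to show the API shape:

```python
import cv2

vehicle_cascade = cv2.CascadeClassifier("cars.xml")  # hypothetical cascade file

def detect_vehicles(equalized_gray_img):
    # detectMultiScale builds the image pyramid and evaluates the Haar
    # feature cascade at each level, mirroring the multi-level scaling above.
    rects = vehicle_cascade.detectMultiScale(
        equalized_gray_img, scaleFactor=1.1, minNeighbors=3, minSize=(48, 48))
    return [(x + w // 2, y + h // 2, w, h) for (x, y, w, h) in rects]
```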
  • To improve recognition accuracy, a tracking algorithm for the pedestrian and vehicle objects, such as a Kalman filter, may also be used: the identified pedestrian and vehicle objects are tracked, and misidentified ones are rejected.
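  • A constant-velocity Kalman tracker per object centre is one conventional reading of this step; the noise covariances below are illustrative assumptions:

```python
import cv2
import numpy as np

def make_tracker(x, y):
    """Constant-velocity Kalman filter for one object centre (x, y)."""
    kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array(
        [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array(
        [[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[x], [y], [0.0], [0.0]], np.float32)
    return kf

# Per frame: call kf.predict() on every track, associate detections with
# tracks by nearest predicted centre, then kf.correct(measurement).
# A detection that never sustains a coherent track across frames can be
# rejected as a misidentification.
```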
  • the image processing method of the embodiment of the present disclosure acquires a first image and a second image based on a dual imaging device on a vehicle; performs image fusion on the first image and the second image to form a target image; and identifies an object from the target image.
  • In this embodiment, the images captured by the dual cameras are fused to form a target image, which ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
  • step 102 specifically includes the following sub-steps:
  • Step 201 Adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same.
  • the resolution of another image may be adjusted based on the resolution of one of the two images such that the resolutions of the two images are the same.
  • In the embodiment of the present disclosure, one of the first image and the second image may be selected as the reference image, and the resolution of the other image adjusted according to the resolution of the reference image.
  • For example, when the reference image is the first image, the resolution of the second image may be adjusted so that it matches the resolution of the first image; when the reference image is the second image, the resolution of the first image may be adjusted so that it matches the resolution of the second image.
  • Alternatively, the image with the smaller resolution may be selected from the first image and the second image as the reference image; for example, when the resolution of the first image is lower than that of the second image, the first image may be used as the reference image.
  • The second image can then be scaled down so that the two resolutions match. This reduces the amount of computation and improves the real-time performance of the system.
  • As another possible implementation, a target resolution may be obtained according to the resolutions of the first image and the second image, and both images adjusted to the target resolution.
  • For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and both the first image and the second image are adjusted to 1280*960.
  • Step 202 Register the first image and the second image with the same resolution.
  • In the embodiment of the present disclosure, one of the two images with the same resolution may be selected as the base image, and the other image geometrically transformed according to the base image so that the processed image coincides well with the base image.
  • As a possible implementation, transform coefficients for performing an affine transformation on the other image may be obtained according to the base image, and the other image then affine-transformed according to the transform coefficients to obtain the registered first image and second image.
  • The transform coefficients are obtained by calibrating the dual imaging device in advance.
  • The embodiment of the present disclosure takes the first image as the base image as an example, and the imaging device that captures the first image is imaging device A. The second image may therefore be geometrically transformed according to the first image captured by imaging device A, so that the processed second image coincides well with the first image. That is, transform coefficients for performing an affine transformation on the second image are acquired according to the first image, and the second image is then affine-transformed according to the transform coefficients to obtain the registered first image and second image.
  • the calibration process of the transform coefficients may be as follows:
  • A calibration template can be made as shown in FIG. 3 (the template of FIG. 3 is only an example and can be adapted to actual conditions) and printed on paper. The template is then placed directly in front of the dual imaging device, and the distance between the template and the device is adjusted so that the black rectangular frames at the four corners of the template fall into the four corner regions of the images captured by the dual imaging device. The captured images can then be acquired, and the coordinates of all vertices of the four corner rectangles solved using a corner-detection method.
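  • The disclosure does not name a specific corner detector; as one plausible stand-in, Shi-Tomasi corner detection can recover the rectangle vertices from the printed template (the count of 28 vertices follows the text below, and the thresholds are illustrative):

```python
import cv2

def template_vertices(calib_gray, expected=28):
    # Shi-Tomasi ("good features to track") corner detection; quality and
    # distance thresholds depend on the print, lighting and lens in practice.
    corners = cv2.goodFeaturesToTrack(
        calib_gray, maxCorners=expected, qualityLevel=0.05, minDistance=10)
    return corners.reshape(-1, 2)  # (N, 2) array of (x, y) vertices
```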
  • In the embodiment of the present disclosure, the vertex coordinates of all the black rectangular frames on the image captured by imaging device A, together with the corresponding vertex coordinates on the image captured by imaging device B, may be substituted into the affine transformation matrix equation shown as formula (1) in the description, from which formula (2) is derived.
  • In formula (1), x and y denote the vertex coordinates of a black rectangular frame on the image captured by imaging device A; x' and y' denote the corresponding vertex coordinates on the image captured by imaging device B; and m1, m2, m3, m4, m5 and m6 are the transform coefficients of the affine transformation.
  • In formula (2), k indexes the vertex coordinates of the black rectangular frames (in FIG. 3 there are 28 such vertices); xk and yk denote the vertex coordinates of the k-th black rectangular frame on the image captured by imaging device A, and xk' and yk' denote the corresponding vertex coordinates on the image captured by imaging device B.
  • The transform coefficients m1, m2, m3, m4, m5 and m6 of the affine transformation can then be solved by the least squares method.
  • the second image captured by the imaging device B may be affine transformed according to the transform coefficients to obtain the registered first image and second image.
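  • A least-squares solve of the six coefficients, sketched under the assumption that the second (device-B) image is warped onto the first (device-A) image, so the fitted map goes from B vertices to A vertices; the patent's own formula direction may differ:

```python
import cv2
import numpy as np

def solve_affine(pts_a, pts_b):
    """Fit m1..m6 by least squares from corresponding rectangle vertices.

    pts_a: (k, 2) vertices on the reference (device-A) image.
    pts_b: (k, 2) corresponding vertices on the device-B image.
    Returns a 2x3 matrix mapping B coordinates to A coordinates.
    """
    pts_a = np.asarray(pts_a, np.float64)
    pts_b = np.asarray(pts_b, np.float64)
    A = np.hstack([pts_b, np.ones((len(pts_b), 1))])  # rows: [x'_k, y'_k, 1]
    coef, *_ = np.linalg.lstsq(A, pts_a, rcond=None)  # solves both columns
    return coef.T  # [[m1, m2, m3], [m4, m5, m6]]

def register_second_image(second_image, coef, size_wh):
    # Warp the second image into the first image's coordinate frame.
    return cv2.warpAffine(second_image, coef, size_wh)
```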
  • Step 203 Fuse the registered first image and second image to obtain a target image.
  • In the embodiment of the present disclosure, when the registered first image and second image are fused, the fusion coefficients of the two images are first calculated.
  • For example, the MSD method may be used to calculate the fusion coefficients of the registered first and second images, and the target image is then obtained from the fusion coefficients.
  • Multi-scale decomposition may be performed on the registered first image and second image to obtain two sets of multi-scale decomposition coefficients, as shown in formula (3) of the description.
  • The two sets of multi-scale decomposition coefficients can then be fused according to a preset fusion rule to obtain the fusion coefficients, as shown in formula (4).
  • A multi-scale inverse transform finally reconstructs the target image from the fusion coefficients, as shown in formula (5).
  • In formula (5), image_r denotes the fused target image.
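  • A compact Laplacian-pyramid fusion illustrates formulas (3) to (5); the max-absolute-coefficient rule for detail bands and averaging of the residual are an assumed fusion rule θ, and four levels is an arbitrary choice:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    g, pyr = img.astype(np.float32), []
    for _ in range(levels):
        down = cv2.pyrDown(g)
        up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
        pyr.append(g - up)          # band-pass detail at this scale
        g = down
    pyr.append(g)                   # low-frequency residual
    return pyr

def fuse(img1, img2, levels=4):
    p1, p2 = laplacian_pyramid(img1, levels), laplacian_pyramid(img2, levels)
    # Assumed rule θ: keep the stronger detail coefficient per pixel.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(p1, p2)]
    fused[-1] = (p1[-1] + p2[-1]) / 2   # average the residual band
    # Multi-scale inverse transform: collapse the pyramid into image_r.
    image_r = fused[-1]
    for band in reversed(fused[:-1]):
        image_r = cv2.pyrUp(image_r, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(image_r, 0, 255).astype(np.uint8)
```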
  • In the image processing method of the embodiment of the present disclosure, the resolution of the first image and/or the second image is adjusted so that the two resolutions are the same, the first and second images with the same resolution are registered, and the registered first and second images are fused to obtain the target image. The two images can thus be made to coincide well, which improves the accuracy of image recognition.
  • FIG. 4 is a schematic flow chart of a first safe driving method according to an embodiment of the present disclosure.
  • the safe driving method may include the following steps:
  • Step 301 Acquire a target image.
  • In the embodiment of the present disclosure, after the first image and the second image have been fused in step 102 to form the target image, the target image may be acquired.
  • Step 302 Identify an object from the target image.
  • In general, the input image of the imaging device is a color image whose color space is YUV.
  • To reduce the amount of computation during image fusion, only the Y component of the color space is used in the fusion calculation; the UV components do not participate.
  • When identifying objects in the target image, the Y component may be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component amounts to grayscale processing of the image, which reduces the amount of computation and improves the real-time performance of the system.
  • After the Y component is extracted from the target image, that is, after grayscale processing, a grayscale image of the target image is obtained.
  • To increase the contrast of the grayscale image and the variation of its gray tones and make it clearer, histogram equalization can be performed on the grayscale image to obtain an equalized grayscale image.
  • Because the recognition rules for pedestrians and for vehicles differ, the equalized grayscale image may be split to form at least two equalized grayscale images: pedestrian recognition is performed on one to obtain pedestrian objects and their identification information, while vehicle recognition is performed on the other to obtain vehicle objects and their identification information. The two recognition pipelines run simultaneously to improve the real-time performance of the system.
  • the identification information may include: coordinate information, width information, height information, distance information, and the like.
  • For pedestrian recognition, a Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image to multiple levels; Histogram of Oriented Gradients (HOG) features are then extracted from the scaled image at each level, and classification based on the HOG features identifies the pedestrian objects.
  • For vehicle recognition, a Laplacian pyramid decomposition algorithm can likewise be used to scale the equalized grayscale image to multiple levels; Haar features are then extracted from the scaled image at each level, and classification based on the Haar features identifies the vehicle objects.
  • To improve recognition accuracy, a tracking algorithm for the pedestrian and vehicle objects, such as a Kalman filter, may also be used: the identified pedestrian and vehicle objects are tracked, and misidentified ones are rejected.
  • Step 303 Generate and execute a safe driving strategy according to the identified object and the current running information of the vehicle.
  • the current running information of the vehicle may include: the current traveling speed of the vehicle (vehicle speed), the current traveling position of the vehicle, the accelerator pedal state, and/or the brake pedal state, and the like.
  • the current operating information of the vehicle can be collected via the CAN bus of the vehicle.
  • It can be understood that the risk level affecting safe driving differs with the distance between the object and the vehicle and with the vehicle's current running information; the vehicle's safe driving strategy can therefore be generated and executed according to the object and the vehicle's current running information.
  • For example, when the object is far from the vehicle and the current speed is low, the risk level is low, and the driver may be prompted by a voice reminder, a warning light, or the like to slow down.
  • When the object is close to the vehicle and the current speed is high, the risk level is high, and the driver can be alerted by steering-wheel vibration, automatic braking, and the like, to protect the safety of the vehicle and its passengers.
  • In the safe driving method of the embodiment of the present disclosure, a target image is acquired, an object is identified from it, and a safe driving strategy is then generated and executed according to the recognized object and the vehicle's current running information, effectively safeguarding driving safety.
  • step 303 specifically includes the following sub-steps:
  • Step 401 Determine, from all the objects, the closest object as the first object posing a risk to the vehicle.
  • The identification information of the pedestrian objects and the identification information of the vehicle objects may be screened to obtain, as candidate objects, the objects located in the lane in which the vehicle is travelling; the first object is then identified from all the candidate objects according to the distance between each candidate object and the vehicle.
  • Step 402 Generate and execute a safe driving strategy of the vehicle according to the first object and the running information.
  • It can be understood that the risk level affecting safe driving differs with the distance of the first object from the vehicle and with the vehicle's running information. For example, when the first object is close to the vehicle and the vehicle speed is high, the first object poses a high risk to safe driving; when the first object is far from the vehicle and the travelling speed is low, it poses a low risk.
  • Therefore, a first distance between the first object and the vehicle may be acquired; the running time required for the vehicle to reach the first object is then obtained from the vehicle speed in the running information and the first distance; the risk level between the first object and the vehicle is determined according to this running time; and a safe driving strategy is generated according to the risk level and executed.
  • For example, when the first object is far from the vehicle and the speed is low, the running time required to reach the first object is long and the first object poses a low risk to safe driving; the driver can then be prompted by a voice reminder, a warning light, or the like to slow down.
  • When the first object is close to the vehicle and the speed is high, the running time required to reach the first object is short and the first object poses a high risk to safe driving; the driver can then be alerted by steering-wheel vibration, automatic braking, and the like, to ensure the safety of the vehicle and its passengers.
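  • The running-time comparison reduces to a time-to-reach computation; the thresholds below are illustrative, not values from the disclosure:

```python
def risk_level(distance_m, speed_mps, high_ttc=1.5, low_ttc=3.0):
    """Grade the risk posed by the first object.

    distance_m: first distance between vehicle and first object (metres).
    speed_mps:  current vehicle speed from the CAN bus (metres/second).
    """
    if speed_mps <= 0:
        return "none"
    time_to_reach = distance_m / speed_mps
    if time_to_reach < high_ttc:
        return "high"   # e.g. steering-wheel vibration, automatic braking
    if time_to_reach < low_ttc:
        return "low"    # e.g. voice reminder or warning light
    return "none"
```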
  • In the safe driving method of the embodiment of the present disclosure, the object closest to the vehicle is determined from all the identified objects as the first object posing a risk to the vehicle, and the vehicle's safe driving strategy is generated and executed according to the first object and the running information. The first object posing a risk can thus be determined, safeguarding driving safety while reducing the processing load of the system and improving its real-time performance.
  • step 401 specifically includes the following sub-steps:
  • Step 501 Perform screening according to the identification information of the pedestrian object and the identification information of the vehicle object, and obtain a pedestrian object and a vehicle object satisfying the preset condition as candidates.
  • Because the objects that threaten driving safety are those located in the vehicle's own lane, the preset condition may be, for example, that the object is located in the lane in which the vehicle is currently travelling.
  • edge pixels of each pedestrian object and each vehicle object may be acquired according to coordinate information in the identification information.
  • the coordinate information is the coordinates of the center point of the object, and the connected domain identification can be performed based on the image information of the center point, so that the boundary of each connected domain can be identified.
  • the edge pixel point may be determined from the boundary, and then it is determined whether the edge pixel point has at least one pixel point in the first area, and the first area is an area formed by the lane in which the vehicle is located in the target image.
  • When at least one of the edge pixels lies in the first area, the object is located in the first area, that is, in the lane in which the vehicle is travelling; therefore, a pedestrian object or vehicle object having at least one edge pixel in the first area is taken as a candidate object.
  • an identification frame may be marked for each pedestrian object and each vehicle object according to the coordinate information in the identification information.
  • the coordinate information is the coordinates of the center point of the object, and the identification frame centered on the coordinate point may be formed according to a preset size, and the identification frame may be a rectangle.
  • For example, referring to FIG. 7, identification frames 1, 2, 3, 4 and 5 correspond to different objects, and each frame has four boundary points. It can then be determined whether each identification frame has at least one designated boundary point in the first area; the designated boundary points may be the lower-left and lower-right boundary points of the frame.
  • An identification frame having at least one designated boundary point in the first area may be taken as a target identification frame, and the object corresponding to the target identification frame is then taken as a candidate object.
  • As an example, referring to FIG. 7, the lower-right boundary point of identification frame 2 lies in the first area, both the lower-left and lower-right boundary points of identification frame 3 lie in the first area, and the lower-left boundary point of identification frame 4 lies in the first area; identification frames 2, 3 and 4 can therefore be taken as target identification frames.
  • Step 502 Identify the first object from all the candidate objects according to the distance between each candidate object and the vehicle.
  • The distance between each candidate object and the vehicle may be obtained from the distance information in the identification information, and the closest candidate is then taken as the first object posing a risk to the vehicle.
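  • Candidate screening and nearest-object selection might be sketched as follows; the object fields and the lane polygon ("first area") are assumed inputs derived from the identification information:

```python
import cv2
import numpy as np

def first_object(objects, lane_polygon):
    """objects: dicts with 'cx', 'cy', 'w', 'h', 'distance' fields.
    lane_polygon: (N, 2) image points outlining the vehicle's own lane."""
    poly = np.asarray(lane_polygon, np.float32).reshape(-1, 1, 2)
    candidates = []
    for obj in objects:
        # Designated boundary points: lower-left and lower-right corners
        # of the identification frame centred on (cx, cy).
        y_bot = obj["cy"] + obj["h"] / 2
        xs = (obj["cx"] - obj["w"] / 2, obj["cx"] + obj["w"] / 2)
        if any(cv2.pointPolygonTest(poly, (float(x), float(y_bot)), False) >= 0
               for x in xs):
            candidates.append(obj)
    # The nearest candidate is the first object posing a risk.
    return min(candidates, key=lambda o: o["distance"], default=None)
```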
  • In the safe driving method of the embodiment of the present disclosure, the identification information of the pedestrian objects and vehicle objects is screened to obtain, as candidates, the pedestrian and vehicle objects satisfying the preset condition, and the first object is identified from all the candidates according to the distance between each candidate and the vehicle. The first object posing a risk to the vehicle can thus be determined, safeguarding driving safety while reducing the processing load of the system and improving its real-time performance.
  • FIG. 8 is a schematic structural diagram of a safe driving system according to an embodiment of the present disclosure.
  • FIG. 8 includes an imaging device A2011, an imaging device B2012, an image processing chip 202, and an actuator 203.
  • the image processing chip 202 includes an image fusion unit 2021, an image recognition unit 2022, and a system decision unit 2023.
  • the camera device A2011 and the camera device B2012 are both connected to the image processing chip 202, and the image processing chip 202 can be a SOC chip.
  • the SOC chip can integrate multiple central processing units (CPUs).
  • The clock frequencies of the integrated CPUs can be divided into low, medium and high levels: a low-level clock may be around 200 MHz, a medium-level clock 500 to 700 MHz, and a high-level clock 1 GHz or more.
  • Each CPU can be responsible for different image processing tasks.
  • different CPUs can share data such as image data and intermediate results through external DDR memory.
  • After the objects are recognized, the system decision unit 2023 may generate a safe driving strategy from the recognition result and control the actuator 203 according to that strategy; the actuator 203 may issue alarm reminders in the form of sound, light and the like, and perform operations such as steering-wheel vibration or automatic braking.
  • the present disclosure also proposes an image processing apparatus.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • As shown in FIG. 9, the image processing apparatus 900 includes an image acquisition module 910, a forming module 920, and a first identification module 930, wherein:
  • the image acquisition module 910 is configured to acquire the first image and the second image based on the dual camera device on the vehicle.
  • the forming module 920 is configured to perform image fusion on the first image and the second image to form a target image.
  • the first identification module 930 is configured to identify an object from the target image.
  • In a possible implementation of the embodiment of the present disclosure, referring to FIG. 10, on the basis of the embodiment shown in FIG. 9, the image processing apparatus 900 may be further detailed as follows.
  • In the embodiment of the present disclosure, the forming module 920 includes:
  • the adjustment sub-module 921 is configured to adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same.
  • The adjustment sub-module 921 is specifically configured to select one of the first image and the second image as a reference image and adjust the resolution of the other image according to the resolution of the reference image; or to obtain a target resolution according to the resolution of the first image and the resolution of the second image and adjust the resolutions of both the first image and the second image to the target resolution.
  • the registration sub-module 922 is configured to register the first image and the second image with the same resolution.
  • The registration sub-module 922 is specifically configured to select one of the first image and the second image having the same resolution as a base image; obtain, according to the base image, the transform coefficients for performing an affine transformation on the other image, the transform coefficients being obtained by calibrating the dual imaging device in advance; and affine-transform the other image according to the transform coefficients to obtain the registered first image and second image.
  • The fusion sub-module 923 is configured to fuse the registered first image and second image to obtain the target image. As a possible implementation, the fusion sub-module 923 is specifically configured to perform multi-scale decomposition on the registered first image and second image respectively to obtain two sets of multi-scale decomposition coefficients; fuse the two sets of coefficients according to a preset fusion rule to obtain fusion coefficients; and reconstruct the target image by a multi-scale inverse transform according to the fusion coefficients.
  • the first identification module 930 includes:
  • the first processing sub-module 931 is configured to perform gray processing on the target image to obtain a grayscale image of the target image.
  • The second processing sub-module 932 is configured to perform histogram equalization on the grayscale image to obtain an equalized grayscale image.
  • the splitting sub-module 933 is configured to split the equalized grayscale image to form at least two equalized grayscale images.
  • The pedestrian recognition sub-module 934 is configured to perform pedestrian recognition on one equalized grayscale image and obtain pedestrian objects and their identification information.
  • The pedestrian recognition sub-module 934 is specifically configured to perform multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm, perform HOG feature extraction on the scaled image at each level, and perform classification based on the HOG features to identify the pedestrian objects.
  • The vehicle identification sub-module 935 is configured to perform vehicle recognition on the other equalized grayscale image and obtain vehicle objects and their identification information.
  • The vehicle identification sub-module 935 is specifically configured to perform multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm, perform Haar feature extraction on the scaled image at each level, and perform classification based on the Haar features to identify the vehicle objects.
  • the tracking culling sub-module 936 is configured to track the identified pedestrian object and the vehicle object, and reject the misidentified pedestrian object and the vehicle object.
  • An image processing apparatus acquires a first image and a second image based on a dual imaging device on a vehicle; performs image fusion on the first image and the second image to form a target image; and identifies an object from the target image.
  • In this embodiment, the images captured by the dual cameras are fused to form a target image, which ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • A feature defined with "first" or "second" may therefore explicitly or implicitly include at least one such feature.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • Any process or method description in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process.
  • The scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the present disclosure can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art can be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present disclosure have been shown and described above, it should be understood that the foregoing embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may make variations, modifications, substitutions and alterations to the above embodiments within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides an image processing method and device, and a safe driving method. The image processing method includes: acquiring a first image and a second image based on a dual imaging device on a vehicle; performing image fusion on the first image and the second image to form a target image; and identifying an object from the target image, the object being a pedestrian object or a vehicle object. By fusing the images captured by the dual cameras into a target image, the method ensures the quality of the target image; recognition performed on the target image is therefore more accurate, which in turn safeguards driving safety.

Description

Image processing method and device, and safe driving method
Cross-reference to related applications
The present disclosure claims priority to Chinese Patent Application No. 201711050555.7, entitled "Image processing method and device, safe driving method and device", filed by 比亚迪股份有限公司 on October 31, 2017.
Technical field
The present disclosure relates to the field of vehicle control technologies, and in particular to an image processing method and device, and a safe driving method.
Background
As vehicle ownership continues to grow, the incidence of traffic accidents grows with it. To effectively protect the lives and property of drivers and of the passengers in vehicles, vehicle manufacturers are committed to developing more reliable safety assistance systems.
In the related art, a vehicle's Advanced Driver Assistance System (ADAS) collects environmental data outside the vehicle visually and then performs object recognition on the collected data. Specifically, an image of the vehicle's surroundings is acquired with a visible-light camera, and object recognition is then performed on the captured image.
Summary
The present disclosure provides an image processing method and device, and a safe driving method, in which the images captured by a dual camera device are fused to form a target image. This ensures the quality of the target image, so that recognition performed on it is more accurate, which in turn safeguards driving safety. It addresses a technical problem of the prior art: a visible-light camera can capture high-quality images only in well-lit scenes, while images captured in weak light are blurry and noisy, so subsequent object recognition suffers from high false-recognition and missed-recognition rates, directly affecting driving safety.
An embodiment of a first aspect of the present disclosure provides an image processing method, including:
acquiring a first image and a second image based on a dual imaging device on a vehicle;
performing image fusion on the first image and the second image to form a target image; and
identifying an object from the target image, the object being a pedestrian object or a vehicle object.
In the image processing method of the embodiment of the present disclosure, a first image and a second image are acquired based on a dual imaging device on a vehicle, image fusion is performed on the first image and the second image to form a target image, and an object is identified from the target image. In this embodiment, fusing the images captured by the dual cameras into a target image ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
An embodiment of a second aspect of the present disclosure provides a safe driving method, including:
acquiring a target image, the target image being obtained by the image processing method according to the embodiment of the first aspect of the present disclosure;
identifying an object from the target image; and
generating and executing a safe driving strategy according to the identified object and the current running information of the vehicle.
In the safe driving method of the embodiment of the present disclosure, a target image is acquired, an object is identified from the target image, and a safe driving strategy is then generated and executed according to the identified object and the vehicle's current running information, effectively safeguarding driving safety.
An embodiment of a third aspect of the present disclosure provides an image processing apparatus, including:
an image acquisition module configured to acquire a first image and a second image based on a dual imaging device on a vehicle;
a forming module configured to perform image fusion on the first image and the second image to form a target image; and
a first identification module configured to identify an object from the target image.
In the image processing apparatus of the embodiment of the present disclosure, a first image and a second image are acquired based on a dual imaging device on a vehicle, image fusion is performed on them to form a target image, and an object is identified from the target image. Fusing the images captured by the dual cameras into a target image ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
Additional aspects and advantages of the present disclosure will be set forth in part in the following description, and in part will become apparent from the following description or be learned by practice of the present disclosure.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present disclosure, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a second image processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a calibration template for the dual imaging device in an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a first safe driving method according to an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of a second safe driving method according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a third safe driving method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the position of each object in the target image in an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a safe driving system in an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in detail below; examples of the embodiments are shown in the drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present disclosure, and are not to be construed as limiting it.
The image processing method and device and the safe driving method of the embodiments of the present disclosure are described below with reference to the drawings. Before the embodiments are described in detail, commonly used technical terms are introduced for ease of understanding:
YUV is a color encoding method in which "Y" represents luminance (Luminance or Luma), that is, the grayscale value, while "U" and "V" represent chrominance (Chrominance or Chroma), which describe the color and saturation of the image and specify the color of a pixel. The YUV color space is characterized by separation of its luminance signal Y from its chrominance signals U and V; if only the Y component is present without the U and V components, the image is a black-and-white grayscale image.
Multi-scale decomposition (MSD) scales an input image to multiple scales, generating reduced images at multiple resolutions, which are then analyzed and processed at each scale. MSD separates the high- and low-frequency detail contained in an image into the scaled images at the different scales, so that information in different frequency bands of the image can be analyzed and processed.
FIG. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present disclosure.
As shown in FIG. 1, the image processing method includes the following steps.
Step 101: Acquire a first image and a second image based on a dual imaging device on a vehicle.
In the embodiment of the present disclosure, two imaging devices of different resolutions can be installed side by side on the vehicle; they may be denoted imaging device A and imaging device B. Devices A and B cover the same field of view, so that the images they capture can be processed jointly. For example, imaging device A may be a visible-light imaging device and imaging device B an infrared imaging device. The resolution of the infrared device may be lower than that of the visible-light device, and its description of scene detail is comparatively weak; the visible-light device can therefore be a high-definition device. This ensures that when light is sufficient the image captured by the visible-light device describes scene detail clearly, while in weak light the image captured by the infrared device describes scene detail clearly.
The first image can therefore be acquired from one of the two imaging devices on the vehicle and the second image from the other. For example, the first image may be acquired from imaging device A and the second image from imaging device B, or the first image may be acquired from imaging device B and the second image from imaging device A.
In the embodiments of the present disclosure, as an example, the first image is captured by imaging device A and the second image by imaging device B.
Step 102: Perform image fusion on the first image and the second image to form a target image.
In the embodiment of the present disclosure, because the resolutions of the two imaging devices may differ, the resolutions of the first image and the second image need to be adjusted to be the same before image fusion.
For example, the resolution of one of the two images can be used as the basis for adjusting the resolution of the other so that the two resolutions are the same. Alternatively, a compromise resolution may be obtained as a target resolution according to the resolutions of the first image and the second image, and both images adjusted to the target resolution. For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and both the first image and the second image are adjusted to 1280*960.
It should be noted that although the two imaging devices are installed side by side and cover the same field of view, the two images still cannot coincide completely after resolution adjustment because the devices occupy different positions. Therefore, in the embodiment of the present disclosure, the two images with the same resolution can be registered, and the registered first and second images then fused to obtain the target image.
As a possible implementation, one image may be selected as the base image, the other image geometrically transformed according to the base image, and the processed image fused with the base image, so that the two images coincide well.
Step 103: Identify an object from the target image, the object being a pedestrian object or a vehicle object.
In general, the input image of the imaging device is a color image whose color space is YUV. To reduce the amount of computation during image fusion, only the Y component of the color space may be used in the fusion calculation; the UV components do not participate.
In the embodiment of the present disclosure, when identifying objects in the target image, the Y component may be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component amounts to grayscale processing of the image, which reduces the amount of computation and improves the real-time performance of the system.
After the Y component is extracted from the target image, that is, after grayscale processing, a grayscale image of the target image is obtained. To increase the contrast of the grayscale image and the variation of its gray tones and make it clearer, histogram equalization can be performed on the grayscale image to obtain an equalized grayscale image.
In the embodiment of the present disclosure, because the recognition rules for pedestrians and for vehicles differ, the equalized grayscale image may be split into at least two equalized grayscale images after it is obtained: pedestrian recognition is performed on one to obtain pedestrian objects and their identification information, while vehicle recognition is performed on the other to obtain vehicle objects and their identification information. The two recognition pipelines run simultaneously to improve the real-time performance of the system.
In the embodiment of the present disclosure, the identification information may include coordinate information, width information, height information, distance information, and the like.
As a possible implementation, for pedestrian recognition a Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image to multiple levels; Histogram of Oriented Gradients (HOG) features are then extracted from the scaled image at each level, and classification based on the HOG features identifies the pedestrian objects. For vehicle recognition, a Laplacian pyramid decomposition algorithm can likewise be used to scale the equalized grayscale image to multiple levels; Haar features are then extracted from the scaled image at each level, and classification based on the Haar features identifies the vehicle objects.
It should be noted that, to improve the accuracy of pedestrian and vehicle recognition, after the pedestrian and vehicle objects in the target image have been identified, a tracking algorithm such as a Kalman filter may be applied to the pedestrian and vehicle objects so that misidentified objects can be rejected.
In the image processing method of the embodiment of the present disclosure, a first image and a second image are acquired based on a dual imaging device on a vehicle, image fusion is performed on them to form a target image, and an object is identified from the target image. Fusing the images captured by the dual cameras into a target image ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
As a possible implementation of the embodiment of the present disclosure, referring to FIG. 2, on the basis of the embodiment shown in FIG. 1, step 102 includes the following sub-steps.
Step 201: Adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same.
As a possible implementation of the embodiment of the present disclosure, the resolution of one of the two images can be used as the basis for adjusting the resolution of the other so that the two resolutions are the same. One of the first image and the second image may be selected as the reference image, and the resolution of the other image adjusted according to the resolution of the reference image. For example, when the reference image is the first image, the resolution of the second image may be adjusted to match that of the first image; when the reference image is the second image, the resolution of the first image may be adjusted to match that of the second image.
As another example, the image with the smaller resolution may be selected from the first image and the second image as the reference image. For instance, when the resolution of the first image is lower than that of the second image, the first image may be used as the reference image and the second image scaled down so that the two resolutions match. This reduces the amount of computation and improves the real-time performance of the system.
As another possible implementation of the embodiment of the present disclosure, a target resolution may be obtained according to the resolutions of the first image and the second image, and both images adjusted to the target resolution. For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and both images are adjusted to 1280*960.
Step 202: Register the first image and the second image having the same resolution.
In the embodiment of the present disclosure, one of the two images with the same resolution may be selected as the base image, and the other image geometrically transformed according to the base image so that the processed image coincides well with the base image.
As a possible implementation, transform coefficients for performing an affine transformation on the other image may be obtained according to the base image, and the other image then affine-transformed according to the transform coefficients to obtain the registered first and second images; the transform coefficients are obtained by calibrating the dual imaging device in advance.
The embodiment of the present disclosure takes the first image as the base image as an example, and the imaging device that captures the first image is imaging device A. The second image may therefore be geometrically transformed according to the first image captured by imaging device A so that the processed second image coincides well with the first image. That is, transform coefficients for performing an affine transformation on the second image are obtained according to the first image, and the second image is then affine-transformed according to the coefficients to obtain the registered first and second images.
In the embodiment of the present disclosure, the calibration of the transform coefficients may proceed as follows:
A calibration template can be made as shown in FIG. 3 (the template of FIG. 3 is only an example and can be adapted to actual conditions) and printed on paper. The template is then placed directly in front of the dual imaging device, and the distance between the template and the device is adjusted so that the black rectangular frames at the four corners of the template fall into the four corner regions of the images captured by the dual imaging device. The captured images can then be acquired, and the coordinates of all vertices of the four corner rectangles solved using a corner-detection method.
In the embodiment of the present disclosure, the vertex coordinates of all the black rectangular frames on the image captured by imaging device A, together with the corresponding vertex coordinates of the black rectangular frames on the image captured by imaging device B, may be substituted into the affine transformation matrix equation shown as formula (1), from which formula (2) is derived:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

In formula (1), x and y denote the vertex coordinates of a black rectangular frame on the image captured by imaging device A; x' and y' denote the corresponding vertex coordinates on the image captured by imaging device B; and $m_1$, $m_2$, $m_3$, $m_4$, $m_5$ and $m_6$ are the transform coefficients of the affine transformation.

$$\min_{m_1,\dots,m_6}\ \sum_{k=1}^{28}\Big[\big(m_1 x_k + m_2 y_k + m_3 - x'_k\big)^2 + \big(m_4 x_k + m_5 y_k + m_6 - y'_k\big)^2\Big] \tag{2}$$

In formula (2), k indexes the vertex coordinates of the black rectangular frames (in FIG. 3 the number of such vertices is 28); $x_k$ and $y_k$ denote the vertex coordinates of the k-th black rectangular frame on the image captured by imaging device A, and $x'_k$ and $y'_k$ denote the corresponding vertex coordinates on the image captured by imaging device B.
Finally, the transform coefficients $m_1$, $m_2$, $m_3$, $m_4$, $m_5$ and $m_6$ of the affine transformation can be solved by the least squares method.
After the transform coefficients of the affine transformation are obtained, the second image captured by imaging device B may be affine-transformed according to the coefficients to obtain the registered first image and second image.
Step 203: Fuse the registered first image and second image to obtain the target image.
In the embodiment of the present disclosure, when the registered first image and second image are fused, the fusion coefficients of the two images are first calculated. For example, the MSD method may be used to calculate the fusion coefficients of the registered first and second images, and the target image is then obtained from the fusion coefficients.
In the embodiment of the present disclosure, multi-scale decomposition may be performed on the registered first image and second image to obtain two sets of multi-scale decomposition coefficients:

$$\big\{C_i^{(1)}\big\}_{i=1}^{n} = \mathrm{MSD}(\mathrm{image}_1), \qquad \big\{C_i^{(2)}\big\}_{i=1}^{n} = \mathrm{MSD}(\mathrm{image}_2) \tag{3}$$

In formula (3), i = 1, 2, ..., n, where n denotes the number of levels of the multi-scale decomposition, $C_i^{(1)}$ denotes the multi-scale decomposition coefficients of the first image, and $C_i^{(2)}$ denotes the multi-scale decomposition coefficients of the second image.
After the two sets of multi-scale decomposition coefficients are obtained, they can be fused according to a preset fusion rule to obtain the fusion coefficients:

$$C_i^{(F)} = \theta\big(C_i^{(1)}, C_i^{(2)}\big), \qquad i = 1, 2, \dots, n \tag{4}$$

In formula (4), $C_i^{(F)}$ denotes the fusion coefficients and θ denotes the preset fusion rule.
After the fusion coefficients $C_i^{(F)}$ are obtained, the target image can be reconstructed from them by a multi-scale inverse transform, as shown in the following formula:

$$\mathrm{image}_r = \mathrm{MSD}^{-1}\Big(\big\{C_i^{(F)}\big\}_{i=1}^{n}\Big) \tag{5}$$

In formula (5), $\mathrm{image}_r$ denotes the fused target image.
In the image processing method of the embodiment of the present disclosure, the resolution of the first image and/or the second image is adjusted so that the two resolutions are the same, the first and second images with the same resolution are registered, and the registered first and second images are fused to obtain the target image. The two images can thus be made to coincide well, which improves the accuracy of image recognition.
FIG. 4 is a schematic flowchart of a first safe driving method according to an embodiment of the present disclosure.
As shown in FIG. 4, the safe driving method may include the following steps.
Step 301: Acquire a target image.
In the embodiment of the present disclosure, after the first image and the second image have been fused in step 102 to form the target image, the target image may be acquired.
Step 302: Identify an object from the target image.
In general, the input image of the imaging device is a color image whose color space is YUV. To reduce the amount of computation during image fusion, only the Y component of the color space is used in the fusion calculation; the UV components do not participate.
In the embodiment of the present disclosure, when identifying objects in the target image, the Y component may be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component amounts to grayscale processing, which reduces the amount of computation and improves the real-time performance of the system.
After the Y component is extracted, that is, after grayscale processing, a grayscale image of the target image is obtained. To increase the contrast of the grayscale image and the variation of its gray tones and make it clearer, histogram equalization can be performed on it to obtain an equalized grayscale image.
In the embodiment of the present disclosure, because the recognition rules for pedestrians and for vehicles differ, the equalized grayscale image may be split into at least two equalized grayscale images: pedestrian recognition is performed on one to obtain pedestrian objects and their identification information, while vehicle recognition is performed on the other to obtain vehicle objects and their identification information. The two recognition pipelines run simultaneously to improve the real-time performance of the system.
In the embodiment of the present disclosure, the identification information may include coordinate information, width information, height information, distance information, and the like.
As a possible implementation, for pedestrian recognition a Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image to multiple levels; HOG features are then extracted from the scaled image at each level, and classification based on the HOG features identifies the pedestrian objects. For vehicle recognition, a Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image to multiple levels; Haar features are then extracted from the scaled image at each level, and classification based on the Haar features identifies the vehicle objects.
It should be noted that, to improve the accuracy of recognition, after the pedestrian and vehicle objects in the target image have been identified, a tracking algorithm such as a Kalman filter may be applied to them so that misidentified pedestrian and vehicle objects can be rejected.
Step 303: Generate and execute a safe driving strategy according to the identified object and the current running information of the vehicle.
In the embodiment of the present disclosure, the current running information of the vehicle may include the vehicle's current travelling speed (vehicle speed), the vehicle's current position, the accelerator pedal state and/or the brake pedal state, and the like. The current running information can be collected via the vehicle's CAN bus.
It can be understood that the risk level affecting safe driving differs with the distance between the object and the vehicle and with the vehicle's current running information. For example, when the object is close to the vehicle and the current speed is high, the risk to safe driving is high; when the object is far from the vehicle and the current speed is low, the risk is low. Therefore, in the embodiment of the present disclosure, the vehicle's safe driving strategy can be generated and executed according to the object and the vehicle's current running information.
For example, when the object is far from the vehicle and the current speed is low, the risk level is low, and the driver may be prompted by a voice reminder, a warning light, or the like to slow down. When the object is close to the vehicle and the current speed is high, the risk level is high, and the driver can be alerted by steering-wheel vibration, automatic braking, and the like, to protect the vehicle and its passengers.
In the safe driving method of the embodiment of the present disclosure, a target image is acquired, an object is identified from it, and a safe driving strategy is then generated and executed according to the identified object and the vehicle's current running information, effectively safeguarding driving safety.
As a possible implementation of the embodiment of the present disclosure, referring to FIG. 5, on the basis of the embodiment shown in FIG. 4, step 303 includes the following sub-steps.
Step 401: Determine, from all the objects, the closest object as the first object posing a risk to the vehicle.
It can be understood that, while the vehicle is travelling, the objects that threaten driving safety are those located in the vehicle's own lane; objects in other lanes pose no threat to the vehicle's driving safety. Therefore, in the embodiment of the present disclosure, the identification information of the pedestrian objects and the vehicle objects can be screened to obtain, as candidate objects, the objects located in the vehicle's lane, and the first object is then identified from all the candidates according to the distance between each candidate and the vehicle.
Step 402: Generate and execute a safe driving strategy of the vehicle according to the first object and the running information.
It can be understood that the risk level affecting safe driving differs with the distance of the first object from the vehicle and with the vehicle's running information. For example, when the first object is close to the vehicle and the speed is high, its risk level for safe driving is high; when the first object is far from the vehicle and the speed is low, its risk level is low.
Therefore, in the embodiment of the present disclosure, a first distance between the first object and the vehicle may be obtained; the running time required for the vehicle to reach the first object is then obtained from the vehicle speed in the running information and the first distance; the risk level between the first object and the vehicle is determined from the running time; and a safe driving strategy is generated according to the risk level and executed.
For example, when the first object is far from the vehicle and the speed is low, the running time required to reach the first object is long and the risk level is low; the driver can be prompted by a voice reminder, a warning light, or the like to slow down. When the first object is close and the speed is high, the running time is short and the risk level is high; the driver can then be alerted by steering-wheel vibration, automatic braking, and the like, to ensure the safety of the vehicle and its passengers.
In the safe driving method of the embodiment of the present disclosure, the object closest to the vehicle is determined from all the identified objects as the first object posing a risk to the vehicle, and the vehicle's safe driving strategy is generated and executed according to the first object and the running information. The first object posing a risk can thus be determined, safeguarding driving safety while reducing the processing load of the system and improving its real-time performance.
As a possible implementation of the embodiment of the present disclosure, referring to FIG. 6, on the basis of the embodiment shown in FIG. 5, step 401 includes the following sub-steps.
Step 501: Screen according to the identification information of the pedestrian objects and the identification information of the vehicle objects, and obtain, as candidate objects, the pedestrian objects and vehicle objects satisfying a preset condition.
Because the objects that threaten driving safety are those located in the vehicle's own lane, in the embodiment of the present disclosure the preset condition may be, for example, that the object is located in the lane in which the vehicle is currently travelling.
As a possible implementation of the embodiment of the present disclosure, the edge pixels of each pedestrian object and each vehicle object may be obtained from the coordinate information in the identification information. The coordinate information gives the center point of the object; connected-domain identification can be performed based on the image information at the center point, so that the boundary of each connected domain can be identified. In the embodiment of the present disclosure, edge pixels can be determined from the boundary, and it is then determined whether at least one of the edge pixels lies in a first area, the first area being the area formed in the target image by the lane in which the vehicle is travelling. When at least one edge pixel lies in the first area, the object is located in the first area, that is, in the vehicle's lane; therefore, a pedestrian object or vehicle object having at least one edge pixel in the first area is taken as a candidate object.
As another possible implementation of the embodiment of the present disclosure, an identification frame may be marked for each pedestrian object and each vehicle object according to the coordinate information in the identification information. The coordinate information gives the center point of the object, and an identification frame centered on that point can be formed at a preset size; the frame may be rectangular. For example, referring to FIG. 7, identification frames 1, 2, 3, 4 and 5 correspond to different objects, and each frame has four boundary points. It can then be determined whether each identification frame has at least one designated boundary point in the first area; the designated boundary points may be the lower-left and lower-right boundary points of the frame. When at least one designated boundary point lies in the first area, the object is located in the first area; an identification frame having at least one designated boundary point in the first area can then be taken as a target identification frame, and the object corresponding to the target identification frame taken as a candidate object.
As an example, referring to FIG. 7, the lower-right boundary point of identification frame 2 lies in the first area, both the lower-left and lower-right boundary points of identification frame 3 lie in the first area, and the lower-left boundary point of identification frame 4 lies in the first area; identification frames 2, 3 and 4 can therefore be taken as target identification frames.
Step 502: Identify the first object from all the candidate objects according to the distance between each candidate object and the vehicle.
In the embodiment of the present disclosure, the distance between each candidate object and the vehicle can be obtained from the distance information in the identification information, and the closest candidate is then taken as the first object posing a risk to the vehicle.
In the safe driving method of the embodiment of the present disclosure, the identification information of the pedestrian objects and vehicle objects is screened to obtain, as candidates, the pedestrian and vehicle objects satisfying the preset condition, and the first object is identified from all the candidates according to the distance between each candidate and the vehicle. The first object posing a risk to the vehicle can thus be determined, safeguarding driving safety while reducing the processing load of the system and improving its real-time performance.
As an example, referring to FIG. 8, FIG. 8 is a schematic structural diagram of a safe driving system in an embodiment of the present disclosure. FIG. 8 includes an imaging device A 2011, an imaging device B 2012, an image processing chip 202, and an actuator 203. The image processing chip 202 includes an image fusion unit 2021, an image recognition unit 2022 and a system decision unit 2023.
In the prior art, multiple chips perform the different stages of image data processing; each chip requires its own dedicated DDR memory, and memory cannot be shared between chips, making it difficult to transfer image data, intermediate results and other data.
In the embodiment of the present disclosure, by contrast, imaging device A 2011 and imaging device B 2012 are both connected to the image processing chip 202, which may be an SOC chip. Multiple central processing units (CPUs) can be integrated on the SOC chip. The clock frequencies of the integrated CPUs can be divided into low, medium and high levels: a low-level clock may be around 200 MHz, a medium-level clock 500 to 700 MHz, and a high-level clock 1 GHz or more. Each CPU can be responsible for a different image processing task, and different CPUs can share image data, intermediate results and other data through external DDR memory.
After the image recognition unit 2022 has identified the objects, the system decision unit 2023 can generate a safe driving strategy from the recognition result and control the actuator 203 according to that strategy; the actuator 203 can issue alarm reminders in the form of sound, light and the like, and perform operations such as steering-wheel vibration or automatic braking.
To implement the above embodiments, the present disclosure further provides an image processing apparatus.
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in FIG. 9, the image processing apparatus 900 includes an image acquisition module 910, a forming module 920 and a first identification module 930, wherein:
the image acquisition module 910 is configured to acquire a first image and a second image based on a dual imaging device on a vehicle;
the forming module 920 is configured to perform image fusion on the first image and the second image to form a target image; and
the first identification module 930 is configured to identify an object from the target image.
In a possible implementation of the embodiment of the present disclosure, referring to FIG. 10, on the basis of the embodiment shown in FIG. 9, the image processing apparatus 900 may be further detailed as follows.
In the embodiment of the present disclosure, the forming module 920 includes:
an adjustment sub-module 921 configured to adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same. The adjustment sub-module 921 is specifically configured to select one of the first image and the second image as a reference image and adjust the resolution of the other image according to the resolution of the reference image; or to obtain a target resolution according to the resolutions of the first image and the second image and adjust the resolutions of both images to the target resolution;
a registration sub-module 922 configured to register the first image and the second image having the same resolution. The registration sub-module 922 is specifically configured to select one of the first image and the second image having the same resolution as a base image; obtain, according to the base image, transform coefficients for performing an affine transformation on the other image, the transform coefficients being obtained by calibrating the dual imaging device in advance; and affine-transform the other image according to the transform coefficients to obtain the registered first image and second image; and
a fusion sub-module 923 configured to fuse the registered first image and second image to obtain the target image. As a possible implementation, the fusion sub-module 923 is specifically configured to perform multi-scale decomposition on the registered first image and second image respectively to obtain two sets of multi-scale decomposition coefficients; fuse the two sets of coefficients according to a preset fusion rule to obtain fusion coefficients; and reconstruct the target image by a multi-scale inverse transform according to the fusion coefficients.
In the embodiment of the present disclosure, the first identification module 930 includes:
a first processing sub-module 931 configured to perform grayscale processing on the target image to obtain a grayscale image of the target image;
a second processing sub-module 932 configured to perform histogram equalization on the grayscale image to obtain an equalized grayscale image;
a splitting sub-module 933 configured to split the equalized grayscale image to form at least two equalized grayscale images;
a pedestrian recognition sub-module 934 configured to perform pedestrian recognition on one equalized grayscale image and obtain pedestrian objects and their identification information. The pedestrian recognition sub-module 934 is specifically configured to perform multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm, perform HOG feature extraction on the scaled image at each level, and perform classification based on the HOG features to identify the pedestrian objects;
a vehicle recognition sub-module 935 configured to perform vehicle recognition on the other equalized grayscale image and obtain vehicle objects and their identification information. The vehicle recognition sub-module 935 is specifically configured to perform multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm, perform Haar feature extraction on the scaled image at each level, and perform classification based on the Haar features to identify the vehicle objects; and
a tracking culling sub-module 936 configured to track the identified pedestrian objects and vehicle objects and reject misidentified pedestrian and vehicle objects.
It should be noted that the foregoing explanation of the image processing method embodiments also applies to the image processing apparatus 900 of this embodiment and is not repeated here.
In the image processing apparatus of the embodiment of the present disclosure, a first image and a second image are acquired based on a dual imaging device on a vehicle, image fusion is performed on them to form a target image, and an object is identified from the target image. Fusing the images captured by the dual cameras into a target image ensures the quality of the target image; recognition performed on it is therefore more accurate, which in turn safeguards driving safety.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples, and those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided they do not contradict one another.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. A feature defined with "first" or "second" may therefore explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, for example two or three, unless specifically and expressly defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present disclosure can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art can be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present disclosure have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may make variations, modifications, substitutions and alterations to the above embodiments within the scope of the present disclosure.

Claims (17)

  1. An image processing method, comprising:
    acquiring a first image and a second image based on a dual camera device on a vehicle;
    performing image fusion on the first image and the second image to form a target image; and
    recognizing an object from the target image, wherein the object is a pedestrian object or a vehicle object.
  2. The image processing method according to claim 1, wherein the performing image fusion on the first image and the second image to form the target image comprises:
    adjusting the resolution of the first image and/or the second image so that the two images have the same resolution;
    registering the first image and the second image having the same resolution; and
    fusing the registered first image and second image to obtain the target image.
  3. The image processing method according to claim 2, wherein the adjusting the resolution of the first image and/or the second image so that the two images have the same resolution comprises:
    selecting one of the first image and the second image as a reference image; and
    adjusting the resolution of the other image according to the resolution of the reference image; or
    obtaining a target resolution according to the resolution of the first image and the resolution of the second image; and
    adjusting the resolutions of both the first image and the second image to the target resolution.
  4. The image processing method according to claim 2 or 3, wherein the registering the first image and the second image having the same resolution comprises:
    selecting one of the first image and the second image having the same resolution as a base image;
    obtaining, according to the base image, transformation coefficients for performing an affine transformation on the other image, wherein the transformation coefficients are obtained by calibrating the dual camera device in advance; and
    performing the affine transformation on the other image according to the transformation coefficients to obtain the registered first image and second image.
  5. The image processing method according to any one of claims 2 to 4, wherein the fusing the registered first image and second image to obtain the target image comprises:
    performing multi-scale decomposition on the registered first image and the registered second image respectively to obtain two sets of multi-scale decomposition coefficients;
    fusing the two sets of multi-scale decomposition coefficients according to a preset fusion rule to obtain fusion coefficients; and
    reconstructing the target image by performing an inverse multi-scale transform according to the fusion coefficients.
  6. The image processing method according to any one of claims 1 to 5, wherein the recognizing an object from the target image comprises:
    performing grayscale processing on the target image to obtain a grayscale image of the target image;
    performing histogram equalization on the grayscale image to obtain an equalized grayscale image;
    splitting the equalized grayscale image to form at least two channels of the equalized grayscale image;
    performing pedestrian recognition on one channel of the equalized grayscale image to obtain pedestrian objects and recognition information of the pedestrian objects; and
    performing vehicle recognition on another channel of the equalized grayscale image to obtain vehicle objects and recognition information of the vehicle objects.
  7. The image processing method according to claim 6, wherein the performing pedestrian recognition on one channel of the equalized grayscale image to obtain the pedestrian objects comprises:
    performing multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm;
    extracting HOG features from the scaled image at each level; and
    performing classification and recognition based on the HOG features to identify the pedestrian objects from among the objects.
  8. The image processing method according to claim 6 or 7, wherein the performing vehicle recognition on another channel of the equalized grayscale image to obtain the vehicle objects comprises:
    performing multi-level scaling on the equalized grayscale image using a Laplacian pyramid decomposition algorithm;
    extracting Haar features from the scaled image at each level; and
    performing classification and recognition based on the Haar features to identify the vehicle objects from among the objects.
  9. The image processing method according to claim 7 or 8, further comprising, after identifying the pedestrian objects or the vehicle objects from among the objects:
    tracking the recognized pedestrian objects and vehicle objects, and eliminating falsely recognized pedestrian objects and vehicle objects.
  10. A safe driving method, comprising:
    acquiring a target image, wherein the target image is obtained according to the image processing method of any one of claims 1 to 9;
    recognizing an object from the target image; and
    generating and executing a safe driving strategy according to the recognized object and current running information of the vehicle.
  11. The safe driving method according to claim 10, wherein the recognizing an object from the target image comprises:
    performing grayscale processing on the target image to obtain a grayscale image of the target image;
    performing histogram equalization on the grayscale image to obtain an equalized grayscale image;
    splitting the equalized grayscale image to form at least two channels of the equalized grayscale image;
    performing pedestrian recognition on one channel of the equalized grayscale image to obtain pedestrian objects and recognition information of the pedestrian objects; and
    performing vehicle recognition on another channel of the equalized grayscale image to obtain vehicle objects and recognition information of the vehicle objects.
  12. The safe driving method according to claim 10 or 11, wherein the generating and executing a safe driving strategy according to the recognized object and the current running information of the vehicle comprises:
    determining, from among all recognized objects, the object nearest to the vehicle as a first object posing a risk to the vehicle; and
    generating and executing the safe driving strategy of the vehicle according to the first object and the running information.
  13. The safe driving method according to claim 12, wherein the determining, from among all recognized objects, the object nearest to the vehicle as the first object posing a risk to the vehicle comprises:
    screening according to the recognition information of the pedestrian objects and the recognition information of the vehicle objects to obtain, as candidate objects, the pedestrian objects and the vehicle objects satisfying a preset condition; and
    identifying the first object from all of the candidate objects according to the distance between each candidate object and the vehicle.
  14. The safe driving method according to claim 13, wherein the screening according to the recognition information of the pedestrian objects and the recognition information of the vehicle objects to obtain, as candidate objects, the pedestrian objects and the vehicle objects satisfying the preset condition comprises:
    obtaining edge pixels of each pedestrian object and each vehicle object according to coordinate information in the recognition information;
    judging whether at least one of the edge pixels lies within a first region, the first region being the region of the target image formed by the lane in which the vehicle is located; and
    taking, as a candidate object, each pedestrian object or vehicle object having at least one edge pixel within the first region.
  15. The safe driving method according to claim 13, wherein the screening according to the recognition information of the pedestrian objects and the recognition information of the vehicle objects to obtain, as candidate objects, the pedestrian objects and the vehicle objects satisfying the preset condition comprises:
    marking a recognition frame for each pedestrian object and each vehicle object according to coordinate information in the recognition information;
    judging whether at least one designated boundary point of each recognition frame lies within a first region, the first region being the region of the target image formed by the lane in which the vehicle is located;
    taking each recognition frame having at least one designated boundary point within the first region as a target recognition frame; and
    taking the object corresponding to the target recognition frame as a candidate object.
  16. The safe driving method according to any one of claims 12 to 15, wherein the generating and executing the safe driving strategy of the vehicle according to the first object and the running information comprises:
    obtaining a first distance between the first object and the vehicle;
    obtaining, according to the vehicle speed in the running information and the first distance, the running time required for the vehicle to reach the first object;
    determining a risk level between the first object and the vehicle according to the running time; and
    generating and executing the safe driving strategy according to the risk level.
  17. An image processing apparatus, comprising:
    an image acquisition module, configured to acquire a first image and a second image based on a dual camera device on a vehicle;
    a formation module, configured to perform image fusion on the first image and the second image to form a target image; and
    a first recognition module, configured to recognize an object from the target image.
PCT/CN2018/112902 2017-10-31 2018-10-31 Image processing method and apparatus thereof, and safe driving method WO2019085929A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711050555.7A CN109727188A (zh) 2017-10-31 2017-10-31 Image processing method and apparatus thereof, and safe driving method and apparatus thereof
CN201711050555.7 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019085929A1 true WO2019085929A1 (zh) 2019-05-09

Family

ID=66293630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112902 WO2019085929A1 (zh) 2017-10-31 2018-10-31 Image processing method and apparatus thereof, and safe driving method

Country Status (2)

Country Link
CN (1) CN109727188A (zh)
WO (1) WO2019085929A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115777A (zh) * 2020-08-10 2020-12-22 杭州优行科技有限公司 Method, device and equipment for detecting and recognizing traffic sign categories
CN114911062A (zh) * 2021-02-07 2022-08-16 浙江舜宇智能光学技术有限公司 Optical system with dual imaging optical paths and optical device with dual imaging optical paths
CN115345777A (zh) * 2021-05-13 2022-11-15 南京大学 Method, apparatus and computer-readable medium for imaging

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006017233A1 (en) * 2004-07-12 2006-02-16 Lehigh University Image fusion methods and apparatus
US7835594B2 (en) * 2006-12-01 2010-11-16 Harris Corporation Structured smoothing for superresolution of multispectral imagery based on registered panchromatic image
CN101699470A (zh) * 2009-10-30 2010-04-28 华南理工大学 Extraction method for smile recognition in face images
CN102005037B (zh) * 2010-11-12 2012-06-06 湖南大学 Multimodal image fusion method combining multi-scale bilateral filtering and directional filtering
CN102685516A (zh) * 2011-03-07 2012-09-19 李慧盈 Active safety driving assistance method based on stereo vision
CN102693427A (zh) * 2011-03-22 2012-09-26 日电(中国)有限公司 Method and device for forming and using a detector for detecting images
CN102663366A (zh) * 2012-04-13 2012-09-12 中国科学院深圳先进技术研究院 Pedestrian target recognition method and system
CN103516997B (zh) * 2012-06-18 2018-01-23 中兴通讯股份有限公司 Real-time fusion method and device for multi-source video image information
CN203134149U (zh) * 2012-12-11 2013-08-14 武汉高德红外股份有限公司 Vehicle driving assistance system based on fusion image processing of different-band imaging
US20150371109A1 (en) * 2013-01-17 2015-12-24 Sensen Networks Pty Ltd Automated vehicle recognition
FR3012784B1 (fr) * 2013-11-04 2016-12-30 Renault Sa Device for detecting the lateral position of a pedestrian relative to the trajectory of the vehicle
CN105138983B (zh) * 2015-08-21 2019-06-28 燕山大学 Pedestrian detection method based on weighted part model and selective search segmentation
CN105303159A (zh) * 2015-09-17 2016-02-03 中国科学院合肥物质科学研究院 Far-infrared pedestrian detection method based on saliency features
CN105809649B (zh) * 2016-03-03 2019-03-26 西安电子科技大学 Fusion method for SAR images and visible light images based on variational multi-scale decomposition
CN106096604A (zh) * 2016-06-02 2016-11-09 西安电子科技大学昆山创新研究院 Multi-band fusion detection method based on an unmanned platform
CN106599773B (zh) * 2016-10-31 2019-12-24 清华大学 Deep learning image recognition method, system and terminal device for intelligent driving
CN106650615B (zh) * 2016-11-07 2018-03-27 深圳云天励飞技术有限公司 Image processing method and terminal
CN107253485B (zh) * 2017-05-16 2019-07-23 北京交通大学 Foreign object intrusion detection method and device
CN107194905A (zh) * 2017-05-22 2017-09-22 阜阳师范学院 Image processing method and system based on the non-subsampled Contourlet transform
CN107146247A (zh) * 2017-05-31 2017-09-08 西安科技大学 Automobile driving assistance system and method based on binocular cameras
CN107230199A (zh) * 2017-06-23 2017-10-03 歌尔科技有限公司 Image processing method and device, and augmented reality device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542843A (zh) * 2010-12-07 2012-07-04 比亚迪股份有限公司 Early warning method and device for preventing vehicle collisions
CN103714556A (zh) * 2014-01-06 2014-04-09 中国科学院自动化研究所 Moving target tracking method based on a pyramid appearance model
CN103886566A (zh) * 2014-03-18 2014-06-25 河海大学常州校区 Urban traffic dispatching system and method based on image fusion in severe weather
CN104835130A (zh) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118097581A (zh) * 2024-04-28 2024-05-28 山东领军智能交通科技有限公司 Road edge recognition control method and device

Also Published As

Publication number Publication date
CN109727188A (zh) 2019-05-07

Similar Documents

Publication Publication Date Title
Wu et al. Lane-mark extraction for automobiles under complex conditions
WO2019085929A1 (zh) Image processing method and apparatus thereof, and safe driving method
WO2019196130A1 (zh) Classifier training method and device for vehicle-mounted thermal imaging pedestrian detection
US8798314B2 (en) Detection of vehicles in images of a night time scene
US10776946B2 (en) Image processing device, object recognizing device, device control system, moving object, image processing method, and computer-readable medium
KR101848019B1 (ko) Method and apparatus for detecting vehicle license plates through vehicle region detection
JP6819996B2 (ja) Traffic signal recognition method and traffic signal recognition device
WO2019085930A1 (zh) Control method and device for dual camera device in a vehicle
JP2012038318A (ja) Target detection method and device
WO2020154990A1 (zh) Target object motion state detection method, device, and storage medium
US10984264B2 (en) Detection and validation of objects from sequential images of a camera
CN111027535A (zh) License plate recognition method and related device
US20230021116A1 (en) Lateral image processing apparatus and method of mirrorless car
JP4826355B2 (ja) Vehicle surroundings display device
US20120128211A1 (en) Distance calculation device for vehicle
JP7363504B2 (ja) Object detection method, detection device, and electronic apparatus
JP2010041322A (ja) Moving object identification device, image processing device, computer program, and optical axis direction determination method
Barua et al. An Efficient Method of Lane Detection and Tracking for Highway Safety
US11323633B2 (en) Automated creation of a freeform mask for automotive cameras
US20170286793A1 (en) Method and apparatus for performing registration plate detection with aid of edge-based sliding concentric windows
KR101370011B1 (ko) Driving-type automatic enforcement system and enforcement method using image stabilization and restoration
CN107992789B (zh) Method and device for recognizing traffic lights, and vehicle
JP2020126304A (ja) Vehicle-exterior object detection device
CN110321828B (zh) Forward vehicle detection method based on a binocular camera and under-vehicle shadow
Bhope et al. Use of image processing in lane departure warning system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 18873690
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 18873690
Country of ref document: EP
Kind code of ref document: A1