CN107886036B - Vehicle control method and device and vehicle - Google Patents

Vehicle control method and device and vehicle

Info

Publication number
CN107886036B
CN107886036B (application CN201610874368.XA)
Authority
CN
China
Prior art keywords
image
vehicle
target vehicle
lane
identification
Prior art date
Legal status
Active
Application number
CN201610874368.XA
Other languages
Chinese (zh)
Other versions
CN107886036A (en)
Inventor
贺刚
黄忠伟
姜波
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date
Filing date
Publication date
Application filed by BYD Co Ltd filed Critical BYD Co Ltd
Priority to CN201610874368.XA
Publication of CN107886036A
Application granted
Publication of CN107886036B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vehicle control method, a vehicle control device, and a vehicle, which can reduce the manufacturing cost of an adaptive cruise system. The method comprises the following steps: acquiring a first image and a second image, wherein the first image is a color image or a brightness image and the second image is a depth image; identifying a target vehicle in the second image to acquire distance information of the target vehicle; acquiring an azimuth angle of the target vehicle according to the first image or the second image; determining the relative speed of the target vehicle according to its azimuth angle and the intermediate-frequency signal acquired by a constant-carrier-frequency radar; and controlling the motion parameters of the host vehicle according to the distance information and the relative speed.

Description

Vehicle control method and device and vehicle
Technical Field
The invention relates to the technical field of vehicles, in particular to a vehicle control method and device and a vehicle.
Background
With the continuous development of science and technology, travel has become increasingly convenient, and cars, electric vehicles, and the like have become indispensable in daily life. Some existing vehicles already provide an adaptive cruise function.
At present, a millimeter-wave radar, a laser radar, or a stereo camera may be used as the ranging sensor in a vehicle's adaptive cruise system. With such a sensor installed, the vehicle can sense multiple target vehicles ahead of it simultaneously and thereby adaptively adjust the motion parameters of the cruise system.
However, the ranging algorithm for a stereo camera is complex, which increases the power consumption of the computing chip, while pairing a single ordinary camera with a millimeter-wave radar or a laser radar requires considerable installation space in the vehicle and is costly.
Disclosure of Invention
The invention aims to provide a vehicle control method, a vehicle control device and a vehicle, which can reduce the manufacturing cost of an adaptive cruise system.
According to a first aspect of an embodiment of the present invention, there is provided a vehicle control method including:
acquiring a first image and a second image, wherein the first image is a color image or a brightness image, and the second image is a depth image;
identifying a target vehicle in the second image to acquire distance information of the target vehicle;
acquiring an azimuth angle of the target vehicle according to the first image or the second image;
determining the relative speed of the target vehicle according to the azimuth angle of the target vehicle and the intermediate frequency signal acquired by the constant carrier frequency radar;
and controlling the motion parameters of the host vehicle according to the distance information and the relative speed.
Optionally, the method further includes:
identifying a highway lane line according to the first image;
mapping the highway lane lines to the second image according to the mapping relation between the first image and the second image so as to determine at least one vehicle identification range in the second image, wherein one vehicle identification range is created for every two adjacent highway lane lines;
identifying a target vehicle in the second image, comprising:
the target vehicle is identified in the at least one vehicle identification range.
Optionally, the method further includes:
obtaining a slope of an initial straight line mapped to each highway lane line in the second image;
marking the vehicle identification range created by the highway lane lines corresponding to the two initial straight lines with the maximum slope as the own lane, and marking the remaining vehicle identification ranges as non-own lanes;
identifying a target vehicle in the at least one vehicle identification range, comprising:
the lane change recognition method includes recognizing a target vehicle of a local lane in a vehicle recognition range marked as the local lane, recognizing a target vehicle of a non-local lane in a vehicle recognition range marked as the non-local lane, and recognizing a target vehicle of a lane change in a vehicle recognition range in which two adjacent vehicle recognition ranges are combined.
Optionally, the method further includes:
determining a target vehicle region in the second image by identifying the target vehicle;
mapping the target vehicle region into the first image according to the mapping relation between the first image and the second image so as to generate a vehicle lamp identification region in the first image;
identifying a turn signal of the target vehicle in the vehicle light identification area;
controlling the motion parameters of the host vehicle according to the distance information and the relative speed, comprising:
and controlling the motion parameters of the host vehicle according to the distance information, the relative speed and the identified steering lamp of the target vehicle.
Optionally, acquiring the azimuth angle of the target vehicle according to the first image or the second image includes:
acquiring an azimuth angle of the target vehicle according to the position of the target vehicle area in the second image; or, alternatively,
and acquiring the azimuth angle of the target vehicle according to the position of the vehicle lamp identification area in the first image.
Optionally, the method further includes:
and automatically calibrating the constant carrier frequency radar according to the identified azimuth angle of the target vehicle.
According to a second aspect of an embodiment of the present invention, there is provided a vehicle control apparatus including:
the image acquisition module is used for acquiring a first image and a second image, wherein the first image is a color image or a brightness image, and the second image is a depth image;
the first identification module is used for identifying a target vehicle in the second image so as to acquire distance information of the target vehicle;
the first acquisition module is used for acquiring the azimuth angle of the target vehicle according to the first image or the second image;
the first determining module is used for determining the relative speed of the target vehicle according to the azimuth angle of the target vehicle and the intermediate frequency signal acquired by the constant carrier frequency radar;
and the control module is used for controlling the motion parameters of the host vehicle according to the distance information and the relative speed.
Optionally, the apparatus further comprises:
the second identification module is used for identifying a highway lane line according to the first image;
the first mapping module is used for mapping the highway lane lines to the second image according to the mapping relation between the first image and the second image so as to determine at least one vehicle identification range in the second image, wherein one vehicle identification range is established between every two adjacent highway lane lines;
the first identification module is configured to:
the target vehicle is identified in the at least one vehicle identification range.
Optionally, the apparatus further comprises:
a second obtaining module for obtaining a slope of an initial straight line mapped to each highway lane line in the second image;
the creating module is used for marking the vehicle identification range created by the highway lane lines corresponding to the two initial straight lines with the maximum slope as the own lane, and for marking the remaining vehicle identification ranges as non-own lanes;
the first identification module is configured to:
the lane change recognition method includes recognizing a target vehicle of a local lane in a vehicle recognition range marked as the local lane, recognizing a target vehicle of a non-local lane in a vehicle recognition range marked as the non-local lane, and recognizing a target vehicle of a lane change in a vehicle recognition range in which two adjacent vehicle recognition ranges are combined.
Optionally, the apparatus further comprises:
a second determination module for determining a target vehicle region in the second image by identifying the target vehicle;
the second mapping module is used for mapping the target vehicle area to the first image according to the mapping relation between the first image and the second image so as to generate a vehicle lamp identification area in the first image;
a third identification module for identifying a turn signal of the target vehicle in the vehicle light identification area;
the control module is used for:
and controlling the motion parameters of the host vehicle according to the distance information, the relative speed and the identified steering lamp of the target vehicle.
Optionally, the first obtaining module is configured to:
acquiring an azimuth angle of the target vehicle according to the position of the target vehicle area in the second image; or, alternatively,
and acquiring the azimuth angle of the target vehicle according to the position of the vehicle lamp identification area in the first image.
Optionally, the apparatus further comprises:
and the calibration module is used for automatically calibrating the constant carrier frequency radar according to the identified azimuth angle of the target vehicle.
According to a third aspect of embodiments of the present invention, there is provided a vehicle including:
the image acquisition device is used for acquiring a first image and a second image, wherein the first image is a color image or a brightness image, and the second image is a depth image; and a vehicle control apparatus according to the second aspect.
In the embodiments of the invention, the azimuth angle of the target vehicle can be acquired by image recognition, the relative speed of the target vehicle is acquired through the constant-carrier-frequency radar, and the distance information of the target vehicle is acquired from the depth image. Combining the constant-carrier-frequency radar with an ordinary camera thus allows target vehicles near the host vehicle to be sensed more accurately, enabling better adaptive cruising. Meanwhile, because the transmitter of the constant-carrier-frequency radar operates at a nearly constant electromagnetic wave frequency, the bandwidth it occupies is very small compared with a ranging frequency-modulated radar, so fewer components are needed and the cost of the adaptive cruise system is reduced.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart illustrating a vehicle control method according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating another vehicle control method according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating another vehicle control method according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating another vehicle control method according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a target vehicle zone and a vehicle light identification zone in accordance with an exemplary embodiment.
FIG. 6 is a schematic diagram of a time-differential depth image shown in accordance with an exemplary embodiment.
FIG. 7 is a block diagram illustrating a vehicle control apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram of a vehicle shown in accordance with an exemplary embodiment.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flowchart illustrating a vehicle control method according to an exemplary embodiment. The method may be applied to a subject vehicle and, as shown in fig. 1, includes the following steps.
Step S11: acquiring a first image and a second image, wherein the first image is a color image or a brightness image and the second image is a depth image.
Step S12: identifying the target vehicle in the second image to acquire the distance information of the target vehicle.
Step S13: acquiring the azimuth angle of the target vehicle according to the first image or the second image.
Step S14: determining the relative speed of the target vehicle according to the azimuth angle of the target vehicle and the intermediate-frequency signal acquired by the constant-carrier-frequency radar.
Step S15: controlling the motion parameters of the host vehicle according to the distance information and the relative speed.
The first image may be a color image or a luminance image, the second image may be a depth image, and the first image and the second image may be acquired by the same image capturing device provided on the subject vehicle. For example, a first image is acquired by an image sensor of the image acquisition apparatus, and a second image is acquired by a TOF (Time of flight) sensor of the image acquisition apparatus.
In the embodiment of the present invention, the color or luminance image pixels and the depth image pixels may be interleaved at a certain ratio; the embodiment does not limit what the ratio is. For example, both the image sensor and the TOF sensor may be fabricated in a Complementary Metal Oxide Semiconductor (CMOS) process, with luminance pixels and TOF pixels laid out at a fixed ratio on the same substrate. At an 8:1 ratio, 8 luminance pixels and 1 TOF pixel form one large interleaved pixel, where the photosensitive area of the TOF pixel may equal that of the 8 luminance pixels, which may be arranged in an array of 2 rows and 4 columns. On a substrate with a 1-inch optical target surface, an array of 360 rows and 480 columns of such effective interleaved pixels yields an array of 720 rows and 1920 columns of effective luminance pixels and an array of 360 rows and 480 columns of effective TOF pixels, so that a single image acquisition device comprising the image sensor and the TOF sensor can simultaneously acquire a color or luminance image and a depth image.
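As a quick sanity check on the interleaving arithmetic above, the effective resolutions follow directly from the 8:1 ratio. The sketch below is illustrative only and not part of the patent; the function name is ours.

```python
def interleaved_resolution(macro_rows, macro_cols,
                           lum_rows_per_macro=2, lum_cols_per_macro=4):
    """Derive the effective luminance and TOF (depth) resolutions from an
    array of interleaved macro-pixels, each combining 2x4 luminance pixels
    with one TOF pixel (the 8:1 ratio described above)."""
    lum = (macro_rows * lum_rows_per_macro, macro_cols * lum_cols_per_macro)
    tof = (macro_rows, macro_cols)  # one TOF pixel per macro-pixel
    return lum, tof

# The 360 x 480 interleaved array from the text yields 720 x 1920
# luminance pixels and 360 x 480 TOF pixels.
```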
Optionally, referring to fig. 2, fig. 2 is a flowchart of another vehicle control method. After the first image and the second image are obtained, the method may further include step S16 of identifying a road lane line according to the first image; step S17 maps the highway lane lines to the second image according to the mapping relationship between the first image and the second image to determine at least one vehicle identification range in the second image, wherein each two adjacent highway lane lines may create one vehicle identification range. In this case, step S12 may be to identify the target vehicle in at least one vehicle identification range to acquire the distance information of the target vehicle.
The first image is a color or luminance image, and the position of a highway lane line is identified using only the brightness difference between the lane line and the road surface, so the luminance information of the first image suffices. When the first image is a luminance image, the highway lane lines may be identified directly from its luminance information; when the first image is a color image, it may first be converted into a luminance image and the lane lines identified from that.
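The patent does not specify how the color image is converted to a luminance image; one common convention (an assumption here, not the patent's method) is the ITU-R BT.601 weighted sum, sketched with a hypothetical `to_luminance` helper:

```python
import numpy as np

def to_luminance(rgb):
    # ITU-R BT.601 luma weights -- one common convention, assumed here;
    # the patent only requires that brightness information be extracted.
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```

A pure-white pixel maps to full luminance and a pure-black pixel to zero, preserving the lane-line/road-surface brightness contrast the method relies on.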
Every two adjacent road lane lines create one vehicle identification range, so a vehicle identification range corresponds to an actual lane, and identifying the target vehicle within that range means identifying the target vehicle on that lane. The range in which the target vehicle is identified can thus be confined to the lane, ensuring that the identified target is a vehicle traveling on the lane, avoiding interference from non-vehicle objects in the image, and improving the accuracy of target vehicle identification.
Optionally, because highway lane lines include both solid lane lines and dashed lane lines, identifying the highway lane lines in the first image may involve obtaining, from the first image, all edge pixel positions of each solid lane line and all edge pixel positions of each dashed lane line. In this way both solid and dashed lane lines are identified completely, improving the accuracy of identifying the target vehicle.
Alternatively, to acquire all edge pixel positions of each solid lane line, a binary image corresponding to the first image may be created, and all edge pixel positions of each solid lane line then detected in the binary image.
The embodiment of the present invention is not limited to how to create the binary image corresponding to the first image, and several possible ways are illustrated below.
For example, exploiting the brightness difference between the highway lane lines and the road surface, suitable brightness thresholds can be found, e.g. by a histogram-statistics bimodal algorithm, and a binary image highlighting the lane lines created from the thresholds and the luminance image.
Alternatively, the luminance image may be divided into a plurality of luminance sub-images; a histogram-statistics bimodal algorithm is run on each sub-image to find its brightness threshold, each threshold and its corresponding sub-image yield a binary sub-image highlighting the lane lines, and the binary sub-images are assembled into a complete binary image highlighting the lane lines. This copes with brightness variation across the road surface or the lane lines.
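A minimal sketch of the per-tile thresholding idea follows. This is our own illustration, not the patent's implementation: Otsu's method stands in for the "histogram statistics-bimodal algorithm", horizontal strips stand in for the sub-image division, and all names are ours.

```python
import numpy as np

def bimodal_threshold(gray):
    """Pick a threshold separating the two histogram peaks.
    Otsu's method is used as a stand-in for the patent's
    'histogram statistics-bimodal algorithm'."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    total_mean = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    w0 = cum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        cum0 += t * hist[t]
        m0 = cum0 / w0                                  # mean below threshold
        m1 = (total_mean * total - cum0) / (total - w0)  # mean above threshold
        var = w0 * (total - w0) * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_tiled(gray, tiles=4):
    """Threshold each horizontal strip independently so that brightness
    changes along the road are tolerated (per-sub-image thresholding)."""
    out = np.zeros_like(gray, dtype=np.uint8)
    for strip in np.array_split(np.arange(gray.shape[0]), tiles):
        t = bimodal_threshold(gray[strip])
        out[strip] = (gray[strip] > t).astype(np.uint8)
    return out
```

Bright lane-line pixels end up as 1 and the darker road surface as 0, provided each strip contains both classes.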
After the binary image corresponding to the first image is created, all edge pixel positions of each solid lane line may be detected in the binary image, and the detection manner is not limited in the embodiment of the present invention.
For example, the curvature radius of a highway lane line cannot be too small, and because of the camera projection principle the near portion of a lane line occupies more imaging pixels than the far portion; hence, even for a curved solid lane line, the pixels arranged in a straight line account for the majority of its imaging pixels in the luminance image. A line detection algorithm such as the Hough transform can therefore detect, in the binary image highlighting the lane lines, all edge pixel positions of a straight road's solid lane line, or the majority (initial straight-line) edge pixel positions of a curve's solid lane line.
Line detection may also pick up straight edge pixels of isolation strips and utility poles in the binary image. The slope range of lane lines in the binary image can, for example, be set according to the aspect ratio of the image sensor, the focal length of the camera lens, the road-width range of the road design specifications, and the installation position of the image sensor on the subject vehicle, so that straight lines other than lane lines are filtered out by the slope range.
Since the edge pixel positions of a curved solid lane line vary continuously, the connected pixel positions at the two ends of the detected initial straight line are searched for and merged into the initial straight-line edge pixel set; this search-and-merge is repeated until all edge pixel positions of the curved solid lane line are uniquely determined.
All edge pixel positions of the solid road lane line can be detected in the above manner.
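The slope-range filtering step can be sketched as follows. This is illustrative only: the numeric bounds are our placeholders, whereas in practice they would be derived from the sensor aspect ratio, lens focal length, road-width specifications, and mounting position as described above.

```python
def filter_by_slope(lines, min_abs_slope=0.4, max_abs_slope=5.0):
    """Discard detected straight lines whose slope (in image row/column
    coordinates) falls outside a plausible lane-line range, e.g.
    near-horizontal edges or the near-vertical edges of poles.
    Each line is (col1, row1, col2, row2); the bounds are illustrative."""
    kept = []
    for (c1, r1, c2, r2) in lines:
        if c1 == c2:  # vertical line: pole or isolation-strip edge
            continue
        slope = (r2 - r1) / (c2 - c1)
        if min_abs_slope <= abs(slope) <= max_abs_slope:
            kept.append((c1, r1, c2, r2))
    return kept
```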
Optionally, the first dashed lane line may be any dashed lane line included in the highway lane lines. To obtain its edge pixel positions, a first solid lane line may be identified from the first image, and all edge pixel positions of the first solid lane line projected onto the initial straight-line edge pixel positions of the first dashed lane line, thereby obtaining all edge pixel positions of the first dashed lane line. The first solid lane line may be any solid lane line included in the highway lane lines.
In this embodiment, all edge pixel positions of the first solid lane line can be projected onto the initial straight-line edge pixel positions of the first dashed lane line using prior knowledge of the solid lane line, the fact that real lane lines are mutually parallel, and the projection parameters of the image sensor and camera lens. This connects the initial straight-line edge pixel positions of the first dashed lane line with the edge pixel positions of the other, shorter segments belonging to the same dashed line, thereby acquiring all edge pixel positions of the dashed lane line.
Optionally, with the first dashed lane line being any dashed lane line included in the highway lane lines, its edge pixel positions may instead be obtained by superimposing the binary images corresponding to a plurality of consecutively acquired first images, so that the first dashed lane line accumulates into a solid lane line, and then obtaining all edge pixel positions of the superimposed solid line.
In this embodiment, no prior knowledge of straight road versus curve is needed. While the vehicle cruises on a straight road, or on a curve at a constant steering angle, the lateral offset of a dashed lane line over a short continuous time span is almost negligible while its longitudinal offset is large; therefore, in consecutive binary images highlighting the lane lines at different times, the dashed lane line superimposes into a solid lane line, and all of its edge pixel positions can then be obtained with the solid-line identification method.
Because the longitudinal offset of the dashed lane line depends on the speed of the subject vehicle, when identifying the first dashed lane line, the minimum number of consecutive binary images at different times needed to superimpose it into one solid lane line can be determined dynamically from the vehicle speed acquired from the wheel speed sensor, thereby acquiring all edge pixel positions of the first dashed lane line.
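The superposition of consecutive binary frames, plus a speed-dependent estimate of how many frames are needed, can be sketched as below. The frame-count formula is our illustrative assumption (travel one dash-plus-gap period), not a formula given in the patent.

```python
import math
import numpy as np

def superimpose_dashed(binary_frames):
    """OR consecutive binary lane images so that a dashed line, whose
    segments shift longitudinally between frames, fills in to a solid
    line, as in the superposition approach described above."""
    acc = np.zeros_like(binary_frames[0])
    for frame in binary_frames:
        acc |= frame
    return acc

def frames_needed(dash_period_m, speed_mps, frame_dt):
    """Illustrative estimate (not from the patent): number of frames for
    the vehicle to travel one dash-plus-gap period, so successive dash
    segments cover the gaps when superimposed; at least two frames."""
    return max(2, math.ceil(dash_period_m / (speed_mps * frame_dt)))
```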
Optionally, referring to fig. 3, fig. 3 is a flowchart of another vehicle control method according to an embodiment of the present invention, and may further include step S18: acquiring the slope of an initial straight line mapped to each highway lane line in the second image; step S19: and marking the vehicle identification range created by the road lane lines corresponding to the two initial straight lines with the maximum slope as the own lane, and marking the rest vehicle identification ranges as non-own lanes. Then step S12 may be to identify the target vehicle of the own lane in the vehicle recognition range marked as the own lane, identify the target vehicle of the non-own lane in the vehicle recognition range marked as the non-own lane, and identify the target vehicle of the lane change in the vehicle recognition range combined by two adjacent vehicle recognition ranges to acquire the distance information of the target vehicle.
Due to the interleaved mapping relationship between the first image and the second image, the row-column coordinates of each pixel of the first image determine, through proportional scaling, the row-column coordinates of at least one pixel in the second image. Each edge pixel position of a highway lane line acquired from the first image therefore determines at least one pixel position in the second image, yielding proportionally scaled highway lane lines in the second image. In the second image, one vehicle identification range is created for every two adjacent highway lane lines.
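With the 2x4-luminance-per-TOF interleave described earlier, the proportional coordinate mapping reduces to integer division. A minimal sketch (the function name is ours):

```python
def lum_to_depth_coord(row, col, rows_per_tof=2, cols_per_tof=4):
    """Map a luminance-image pixel coordinate to the coordinate of the
    TOF (depth) pixel sharing its macro-pixel, assuming the 2x4
    luminance / 1 TOF interleave described earlier."""
    return row // rows_per_tof, col // cols_per_tof
```

Lane-line edge pixels found in the 720x1920 luminance image thus land on well-defined pixels of the 360x480 depth image, and the reverse mapping (one depth pixel to a 2x4 block of luminance pixels) is used when projecting the target vehicle region back into the first image.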
From the proportionally scaled highway lane lines acquired in the second image, the slope of each lane line's initial straight line is obtained by comparing the numbers of rows and columns the initial straight-line portion occupies. The vehicle identification range created by the two lane lines whose initial straight lines have the maximum slope is marked as the own lane, and the other vehicle identification ranges are marked as non-own lanes.
After the lanes are marked, the target vehicle of the own lane can be identified in the vehicle identification range marked as the own lane, the target vehicle of a non-own lane in a range marked as a non-own lane, and a lane-changing target vehicle in the combined range of two adjacent vehicle identification ranges.
The embodiment of the present invention is not limited to the manner of identifying the target vehicle, and several possible manners are described below.
The first mode is as follows:
since the distance and position of the target vehicle relative to the TOF sensor always changes over time, the distance and position of the road surface, the isolation strip relative to the TOF sensor is approximately unchanged over time. Therefore, a time differential depth image can be created by using the two depth images acquired at different time points, so as to identify the position of the target vehicle in the second image, or the distance between the target vehicle and the body vehicle, and the like.
The second mode is as follows:
In the second image, i.e. the depth image, the depth sub-image formed by light reflected from the back face of a single target vehicle to the TOF sensor contains consistent distance information, so the distance information of the target vehicle can be acquired once the position of the depth sub-image formed by the target vehicle in the depth image is identified.
Because the depth sub-image formed by light reflected from the back face of a single target vehicle to the TOF sensor contains consistent distance information, while the depth sub-image formed by light reflected from the road surface contains continuously varying distance information, an abrupt difference necessarily appears at the junction of the two, and this junction forms the target boundary of the target vehicle in the depth image.
For example, boundary detection methods commonly used in image processing, such as the Canny, Sobel, or Laplace operators, may be employed to detect the target boundary of the target vehicle.
Further, the vehicle recognition range is determined by all pixel positions of the lane line, and therefore, detecting the target boundary of the target vehicle within the vehicle recognition range will reduce boundary interference caused by road facilities such as isolation zones, light poles, guard posts and the like.
In practice there may be multiple target vehicles, so the target boundaries detected in each vehicle identification range can be projected onto the image's row coordinate axis and searched one-dimensionally. This determines the number of rows and the row-coordinate range occupied by the longitudinal target boundaries of all target vehicles in the range, and the number of columns and the row-coordinate positions occupied by the transverse target boundaries. Here a longitudinal target boundary is one spanning many pixel rows but few columns, and a transverse target boundary is one spanning few rows but many columns. From the columns and row-coordinate positions occupied by all transverse boundaries in the range, the column-coordinate positions of all longitudinal boundaries (i.e. the start and end column coordinates of the corresponding transverse boundaries) are searched for, and the boundaries of different target vehicles are distinguished by the principle that a single target's boundary contains consistent distance information, thereby determining the positions and distance information of all target vehicles in the vehicle identification range.
Therefore, the position of the depth sub-image formed by the target vehicle in the depth image can be uniquely determined by detecting and acquiring the target boundary of the target vehicle, so that the distance information of the target vehicle can be uniquely determined.
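The projection-and-search step above can be sketched in a few lines. This is an illustrative NumPy sketch, not the patent's implementation: a binary edge mask (as produced by, e.g., a Canny detector over the vehicle identification range) stands in for the detected target boundaries, and the 50% coverage thresholds are assumptions chosen for demonstration.

```python
import numpy as np

def split_boundaries(edge_mask):
    """Separate longitudinal (tall, narrow) from transverse (short, wide)
    boundary candidates by projecting a binary edge mask onto each axis,
    mirroring the one-dimensional search described above."""
    rows, cols = edge_mask.shape
    col_hist = edge_mask.sum(axis=0)  # edge rows spanned by each column
    row_hist = edge_mask.sum(axis=1)  # edge columns spanned by each row
    # A column mostly covered by edge pixels hosts a longitudinal boundary;
    # a row mostly covered by edge pixels hosts a transverse boundary.
    longitudinal_cols = np.where(col_hist > 0.5 * rows)[0]
    transverse_rows = np.where(row_hist > 0.5 * cols)[0]
    return longitudinal_cols, transverse_rows

# A 10x20 mask with two longitudinal boundaries (columns 3 and 15) and one
# transverse boundary (row 8), roughly the outline of a single vehicle:
mask = np.zeros((10, 20), dtype=int)
mask[:, 3] = 1
mask[:, 15] = 1
mask[8, :] = 1
lon, tra = split_boundaries(mask)
```

Distinguishing multiple vehicles would then group these candidate boundaries by consistent distance information, as the text describes.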
Of course, the target vehicle may be identified in other manners, which is not limited in the embodiment of the present invention as long as the target vehicle can be identified.
Optionally, referring to fig. 4, fig. 4 is a flowchart of another vehicle control method according to an embodiment of the present invention, and the method may further include step S20: determining a target vehicle region in the second image by identifying the target vehicle; step S21: mapping the target vehicle region into the first image according to the mapping relation between the first image and the second image, so as to generate a vehicle lamp identification area in the first image; and step S22: identifying a turn signal of the target vehicle in the vehicle lamp identification area. Of course, fig. 4 illustrates only one execution sequence; the steps may also be executed in another order, for example, steps S20 to S22 may be executed after step S14, and the embodiment of the present invention does not limit the execution order of steps S20 to S22. In this case, step S15 may be to control the motion parameters of the host vehicle based on the distance information, the relative speed, and the identified turn signal of the target vehicle.
After the target vehicle is identified, a target vehicle region may be determined in the second image. The target vehicle region, that is, the area where the target vehicle is located in the second image, may be a closed area enclosed by the boundary of the identified target vehicle, an extended closed area enclosed by that boundary, a closed area enclosed by a line connecting several pixel positions of the target vehicle, and so on. The embodiment of the present invention does not limit the form of the target vehicle region, as long as it is a region containing the target vehicle.
Based on the mapping relationship between the first image and the second image, the row and column coordinates of each pixel of the target vehicle region in the second image are scaled to determine the corresponding row and column coordinates in the first image. Referring to fig. 5, after the target vehicle region in the second image is mapped into the first image, the vehicle lamp identification area may be generated at the corresponding position in the first image; since the image of the lamps of the target vehicle is included in the target vehicle region, the turn signal of the target vehicle may be recognized in the lamp identification area generated in the first image.
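A minimal sketch of such a region mapping is shown below. It assumes, for illustration only, that the two images cover the same field of view and differ solely in resolution, so the mapping reduces to proportional scaling of row and column coordinates; a real system would use the calibrated mapping relationship between the two cameras.

```python
def map_region(region, src_shape, dst_shape):
    """Scale a (top, left, bottom, right) pixel region from the second
    (depth) image into the first (color or brightness) image.
    src_shape / dst_shape are (rows, cols) of the two images."""
    sy = dst_shape[0] / src_shape[0]  # row scale factor
    sx = dst_shape[1] / src_shape[1]  # column scale factor
    top, left, bottom, right = region
    return (round(top * sy), round(left * sx),
            round(bottom * sy), round(right * sx))
```

For example, a region in a 100x200 depth image maps into a 200x400 color image by doubling each coordinate.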
The embodiment of the present invention does not limit the manner of identifying the turn signal of the target vehicle in the lamp identification area. For example, the lamp identification areas in a plurality of continuously acquired first images may be subjected to time-differential processing to create a time-differential sub-image corresponding to the target vehicle, and the turn signal of the target vehicle may then be identified from that time-differential sub-image.
For example, the rear turn signal may be identified based on the color, blinking frequency, or blinking sequence of the rear lights in the car light identification area.
In the initial stage of a lane change, the longitudinal and transverse displacements of the target vehicle are small, which means that the size of its lamp identification area changes little; only the brightness of the image at the flashing tail turn signal changes significantly. Therefore, a time-differential sub-image of the target vehicle is created by continuously acquiring a plurality of first images (color or brightness images) at different times and applying time-differential processing to the lamp identification area of the target vehicle in them. The time-differential sub-image highlights the continuously flashing tail lamp sub-images of the target vehicle. The time-differential sub-image can then be projected onto the column coordinate axis, and a one-dimensional search performed to obtain the start and end column coordinate positions of the tail lamp sub-image; these are projected back onto the time-differential sub-image to search for the start and end row coordinate positions. Finally, the start and end row and column coordinate positions of the tail lamp sub-image are projected onto the plurality of color or brightness images at different times to confirm the color, flashing frequency, or flashing sequence of the tail lamp, thereby determining the row and column coordinate positions of the flashing tail lamp sub-image.
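The time-differential step can be sketched as accumulated frame-to-frame differences over the lamp identification area, projected onto the column axis. This is an assumed, simplified rendering of the processing described above; the `min_energy` threshold is illustrative.

```python
import numpy as np

def blink_columns(frames, min_energy=100.0):
    """Accumulate absolute differences between successive luminance frames
    of the lamp identification area (time-differential processing), then
    project onto the column axis; columns whose accumulated energy exceeds
    `min_energy` are candidates for a flashing tail lamp."""
    diff = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames, frames[1:]):
        diff += np.abs(curr.astype(float) - prev.astype(float))
    col_energy = diff.sum(axis=0)  # projection onto the column axis
    return np.where(col_energy > min_energy)[0]

# Three frames of a 4x6 lamp region in which column 2 blinks 0 -> 200 -> 0:
f0 = np.zeros((4, 6))
f1 = np.zeros((4, 6))
f1[:, 2] = 200
frames = [f0, f1, f0.copy()]
```

The static background cancels in the differences, so only the flashing lamp columns survive the projection.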
Further, when the row and column coordinate positions of the flashing tail lamp sub-images lie only on the left side of the lamp identification area of the target vehicle, it can be determined that the target vehicle has turned on its left turn signal; when they lie only on the right side, that it has turned on its right turn signal; and when they lie on both sides, that it has turned on its double-flash warning lamps.
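The left/right/both rule above amounts to a small classifier over the flashing column positions. A sketch, assuming the lamp identification area is split at its horizontal midpoint:

```python
def classify_turn_signal(blink_cols, region_width):
    """Flashing columns only in the left half of the lamp identification
    area -> left turn signal; only in the right half -> right turn signal;
    in both halves -> double-flash (hazard) warning lamps."""
    mid = region_width / 2
    left = any(c < mid for c in blink_cols)
    right = any(c >= mid for c in blink_cols)
    if left and right:
        return "hazard"
    if left:
        return "left"
    if right:
        return "right"
    return "none"
```

For instance, flashing columns near both edges of a 100-pixel-wide area indicate the double-flash warning lamps.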
In addition, when the size of the lamp identification area changes significantly because the target vehicle undergoes large longitudinal or transverse displacement during the lane change, the lamp identification areas continuously acquired at different times can be compensated for that longitudinal or transverse displacement and scaled to lamp identification areas of the same size. Time-differential processing is then applied to the adjusted lamp identification areas of the target vehicle to create its time-differential sub-image; the sub-image is projected onto the column coordinate axis, and a one-dimensional search yields the start and end column coordinate positions of the tail lamp sub-image. These are projected back onto the time-differential sub-image to find the start and end row coordinate positions, and the resulting row and column coordinate positions are projected onto the plurality of color or brightness images at different times to confirm the color, flashing frequency, or flashing sequence of the tail lamp. The row and column coordinate positions of the flashing tail lamp sub-image are thereby determined, and identification of a left turn signal, right turn signal, or double-flash warning lamp is finally completed.
For example, if the time-differential sub-image highlighting the continuously flashing tail lamp (as in the lamp identification area shown in fig. 6) is determined by its coordinates to be located on the left side of the area, and its flashing frequency is once per second, it can be determined that the target vehicle currently has its left turn signal on.
In this way, the turn signal of the target vehicle can be reliably recognized, so that whether and how the target vehicle is about to turn can be known in advance, allowing adaptive cruise to be performed better and more safely.
Optionally, as to how to obtain the azimuth angle of the target vehicle according to the first image or the second image, the embodiment of the present invention is not limited thereto, for example, the azimuth angle of the target vehicle may be obtained according to the position of the target vehicle region in the second image; or acquiring the azimuth angle of the target vehicle according to the position of the vehicle lamp identification area in the first image.
Because the lens parameters and the installation position of the camera for acquiring the first image or the second image can be acquired by a camera calibration technology in advance, a relation lookup table of the coordinates of the road scene with the camera as an origin and the coordinates of the pixels of the first image or the second image can be established.
The pixel coordinates contained in the target vehicle region or the vehicle lamp identification area can be converted, through this lookup table, into target vehicle coordinates with the camera as origin, and the azimuth angle of the target vehicle with respect to the camera is then calculated from the converted coordinates.
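As a rough stand-in for the calibrated lookup table, an ideal pinhole model relates a pixel column to an azimuth angle. This is an assumed simplification (optical axis pointing straight ahead, no lens distortion); the horizontal field of view is a calibration parameter supplied here for illustration.

```python
import math

def pixel_to_azimuth(col, image_width, horizontal_fov_deg):
    """Approximate azimuth (degrees, camera as origin) of a pixel column
    under an ideal pinhole model: focal length in pixels is derived from
    the horizontal field of view, and the azimuth is atan of the offset
    from the image center over that focal length."""
    f = (image_width / 2) / math.tan(math.radians(horizontal_fov_deg / 2))
    return math.degrees(math.atan((col - image_width / 2) / f))
```

The image center maps to 0 degrees, and the right edge of a 90-degree-FOV image maps to 45 degrees.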
When a relative speed exists between the target vehicle and the host vehicle, the reflected signal of the target vehicle received by the constant carrier frequency radar can be passed through a phase shifter to generate an orthogonal reflected signal, and this orthogonal reflected signal is mixed with the transmitted signal of the radar to produce an orthogonal intermediate-frequency signal. The orthogonal intermediate-frequency signal contains a Doppler frequency related to the relative speed: the magnitude of the Doppler frequency is proportional to the magnitude of the relative speed, and its sign is the same as the sign of the relative speed.
The spectrum of the orthogonal intermediate-frequency signal, in which the Doppler frequency stands out, can be created using an analog-to-digital converter and a complex fast Fourier transform; the magnitude and sign of the Doppler frequency can be obtained from this spectrum with a peak detection algorithm; and the magnitude and sign of the relative speed are then determined from the acquired Doppler frequency using the Doppler velocity formula.
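The FFT-plus-peak-detection chain can be sketched as follows. The sampling rate and 77 GHz carrier are illustrative assumptions, not values from the patent; because the IF signal is complex (I + jQ), the FFT preserves the sign of the Doppler line, as the text requires.

```python
import numpy as np

def doppler_speed(iq, sample_rate, carrier_hz, c=3.0e8):
    """Estimate the signed relative speed from the complex (I + jQ)
    intermediate-frequency signal of a constant carrier frequency radar:
    FFT spectrum, peak detection for the dominant signed Doppler line,
    then the Doppler formula v = f_d * c / (2 * f0)."""
    spectrum = np.fft.fft(iq)
    freqs = np.fft.fftfreq(len(iq), d=1.0 / sample_rate)
    f_d = freqs[np.argmax(np.abs(spectrum))]  # dominant signed Doppler line
    return f_d * c / (2.0 * carrier_hz)

# A pure complex tone at f_d = -1024 Hz, 1024 samples at 8192 Hz:
t = np.arange(1024) / 8192.0
iq = np.exp(-2j * np.pi * 1024.0 * t)
```

The peak lands in the negative half of the spectrum, so the recovered speed carries the same sign as the Doppler frequency.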
The constant carrier frequency radar may comprise two or more receivers for acquiring the azimuth angle of a radar target. Because the receivers occupy different positions, the orthogonal intermediate-frequency signals they acquire exhibit a phase difference at the same Doppler frequency.
From the phase difference of the receivers at the same Doppler frequency, obtained from the spectra of the orthogonal intermediate-frequency signals, together with the mutual positional relationship of the receivers, the azimuth angle of the radar target can be obtained using the phase-comparison angle measurement formula. That is, the relative speed and azimuth angle of a target sensed by the constant carrier frequency radar can both be obtained from the intermediate-frequency signal it acquires.
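For the two-receiver case, the phase-comparison formula inverts delta_phi = 2*pi*d*sin(theta)/lambda for theta. A sketch, with the 77 GHz-band wavelength and half-wavelength spacing assumed for illustration (spacing must not exceed half a wavelength, or the angle becomes ambiguous):

```python
import math

def phase_method_azimuth(delta_phi, spacing, wavelength):
    """Phase-comparison angle measurement for two receivers separated by
    `spacing`: solve delta_phi = 2*pi*spacing*sin(theta)/wavelength for
    theta (degrees). Unambiguous only while spacing <= wavelength / 2."""
    s = delta_phi * wavelength / (2.0 * math.pi * spacing)
    return math.degrees(math.asin(s))
```

With half-wavelength spacing, a phase difference of pi/2 corresponds to a 30-degree azimuth.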
When there are multiple target vehicles, their azimuth angles can be obtained through step S13, and the relative speeds and azimuth angles of multiple radar targets can be obtained from the intermediate-frequency signal of the constant carrier frequency radar; the relative speed of a radar target can then be assigned to a target vehicle on the principle that the azimuth angle of the radar target corresponding to a single target vehicle is approximately equal to the azimuth angle of that target vehicle.
When the installation positions of the camera and the constant carrier frequency radar are far apart, the approximate-equality principle may introduce an error; this error can be eliminated by calibrating the azimuth coordinates of the two different origins to a common origin according to the installation positions of the camera and the radar.
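The approximate-equality principle reduces to a nearest-azimuth association between camera-identified vehicles and radar targets. A sketch; the 2-degree matching gate is an illustrative assumption, not a value from the patent:

```python
def match_radar_to_camera(camera_azimuths, radar_targets, max_diff_deg=2.0):
    """Assign each camera-identified target vehicle the relative speed of
    the radar target whose azimuth is closest. `radar_targets` is a list
    of (azimuth_deg, relative_speed_mps) pairs; targets with no radar
    detection inside the gate are simply left unmatched."""
    matched = {}
    for i, az in enumerate(camera_azimuths):
        best = min(radar_targets, key=lambda t: abs(t[0] - az))
        if abs(best[0] - az) <= max_diff_deg:
            matched[i] = best[1]
    return matched
```

For example, two camera targets at 0 and 10 degrees pick up the radar targets at 0.3 and 9.5 degrees respectively.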
After the distance information and relative speed of the target vehicle are acquired, the motion parameters of the host vehicle may be controlled accordingly during adaptive cruise; how the control is performed is not limited in the embodiment of the present invention. For example, if the target vehicle is recognized to be 100 m directly in front of the host vehicle and traveling at -10 m/s relative to it, the host vehicle may be controlled to decelerate in order to prevent a rear-end collision, and so on.
Of course, if the turn signal of the target vehicle has also been identified, the motion parameters of the host vehicle may be controlled during adaptive cruise according to the distance information, the relative speed, and the turn signal of the target vehicle. For example, if the target vehicle is recognized to be in the lane to the left of the host vehicle, traveling at -10 m/s relative to it with its right turn signal on, it may be considered likely to change into the host vehicle's own lane, and the host vehicle may therefore be controlled to decelerate, and so on.
Optionally, the constant carrier frequency radar may be automatically calibrated according to the identified azimuth angle of the target vehicle.
When the constant carrier frequency radar is installed outside the cab, its azimuth measurement can be affected by vibration, temperature changes, and coverage by rain, snow, or mud, so automatic calibration is needed. For example, when several target vehicles with different azimuth angles in front of the host vehicle are identified according to the invention, the identified azimuth angles of these target vehicles can be compared with the azimuth angles of the corresponding radar targets to determine whether the deviation is consistent. If it is, the deviation is recorded in a memory of the constant carrier frequency radar, which reads it for automatic calibration and compensation in subsequent azimuth measurements. Of course, if the deviation is inconsistent, a warning that the constant carrier frequency radar is unavailable can be issued to the driver of the host vehicle, reminding the driver to check or clean the radar.
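The consistency check above can be sketched as follows. The 0.5-degree tolerance is an illustrative assumption; a return value of `None` corresponds to the "radar unavailable" warning.

```python
def radar_calibration_offset(camera_azimuths, radar_azimuths, tol_deg=0.5):
    """Compare camera-derived and radar-derived azimuths of the same
    targets. If the per-target deviations agree within `tol_deg`, return
    the common offset (to be stored in the radar's memory and compensated
    in later measurements); otherwise return None, i.e. flag the radar
    unavailable and prompt the driver to check or clean it."""
    deviations = [r - c for c, r in zip(camera_azimuths, radar_azimuths)]
    mean = sum(deviations) / len(deviations)
    if all(abs(d - mean) <= tol_deg for d in deviations):
        return mean
    return None
```

A uniform 1-degree bias across three targets yields an offset of 1.0, while wildly differing deviations yield no calibration.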
Referring to fig. 7, based on the same inventive concept, an embodiment of the present invention provides a vehicle control apparatus 100, where the apparatus 100 may include:
the image acquisition module 101 is configured to acquire a first image and a second image, where the first image is a color image or a luminance image, and the second image is a depth image;
the first identification module 102 is used for identifying the target vehicle in the second image so as to obtain the distance information of the target vehicle;
the first acquisition module 103 is used for acquiring the azimuth angle of the target vehicle according to the first image or the second image;
the first determining module 104 is configured to determine a relative speed of the target vehicle according to an azimuth angle of the target vehicle and an intermediate frequency signal obtained by a constant carrier frequency radar;
and the control module 105 is used for controlling the motion parameters of the main vehicle according to the distance information and the relative speed.
Optionally, the apparatus 100 further includes:
the second identification module is used for identifying the highway lane line according to the first image;
the first mapping module is used for mapping the highway lane lines to the second image according to the mapping relation between the first image and the second image so as to determine at least one vehicle identification range in the second image, wherein one vehicle identification range is established between every two adjacent highway lane lines;
the first identification module 102 is configured to:
the target vehicle is identified in at least one vehicle identification range.
Optionally, the apparatus 100 further includes:
a second obtaining module for obtaining a slope of an initial straight line mapped to each highway lane line in the second image;
the creating module is used for marking the vehicle identification ranges created by the road lane lines corresponding to the two initial straight lines with the maximum slope as the own lane and marking the rest vehicle identification ranges as non-own lanes;
the first identification module 102 is configured to:
the lane change recognition method includes recognizing a target vehicle of a local lane in a vehicle recognition range marked as the local lane, recognizing a target vehicle of a non-local lane in a vehicle recognition range marked as the non-local lane, and recognizing a target vehicle of a lane change in a vehicle recognition range in which two adjacent vehicle recognition ranges are combined.
Optionally, the apparatus 100 further includes:
a second determination module for determining a target vehicle region in the second image by identifying the target vehicle;
the second mapping module is used for mapping the target vehicle area to the first image according to the mapping relation between the first image and the second image so as to generate a vehicle lamp identification area in the first image;
the third identification module is used for identifying a steering lamp of the target vehicle in the vehicle lamp identification area;
the control module 105 is configured to:
and controlling the motion parameters of the host vehicle according to the distance information, the relative speed and the identified steering lamp of the target vehicle.
Optionally, the first obtaining module 103 is configured to:
acquiring the azimuth angle of the target vehicle according to the position of the target vehicle area in the second image; or
acquiring the azimuth angle of the target vehicle according to the position of the vehicle lamp identification area in the first image.
Optionally, the apparatus 100 further includes:
and the calibration module is used for automatically calibrating the constant-load frequency radar according to the identified azimuth angle of the target vehicle.
Referring to fig. 8, based on the same inventive concept, an embodiment of the invention provides a vehicle 200, where the vehicle 200 may include an image capturing device configured to obtain a first image and a second image, where the first image is a color image or a luminance image and the second image is a depth image, and the vehicle control apparatus 100 of fig. 7.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed.
The functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, or an optical disk.
The above embodiments are intended only to describe the technical solutions of the present invention in detail and to help in understanding the method and core idea of the invention; they should not be construed as limiting the present invention. Those skilled in the art will readily conceive of various changes and substitutions within the technical scope of the present disclosure.

Claims (7)

1. A vehicle control method characterized by comprising:
acquiring a first image and a second image, wherein the first image is a color image or a brightness image, and the second image is a depth image;
identifying a target vehicle in the second image to acquire distance information of the target vehicle;
acquiring an azimuth angle of the target vehicle according to the first image or the second image;
determining the relative speed of the target vehicle according to the azimuth angle of the target vehicle and the intermediate frequency signal acquired by the constant carrier frequency radar;
determining a target vehicle region in the second image by identifying the target vehicle;
mapping the target vehicle region into the first image according to the mapping relation between the first image and the second image so as to generate a vehicle lamp identification region in the first image;
identifying a turn signal of the target vehicle in the vehicle light identification area;
controlling the motion parameters of the host vehicle according to the distance information, the relative speed and the identified steering lamp of the target vehicle;
automatically calibrating the constant carrier frequency radar according to the identified azimuth angle of the target vehicle;
acquiring the azimuth angle of the target vehicle according to the first image or the second image, wherein the acquisition comprises:
acquiring an azimuth angle of the target vehicle according to the position of the target vehicle area in the second image; or acquiring the azimuth angle of the target vehicle according to the position of the car lamp identification area in the first image.
2. The method of claim 1, further comprising:
identifying a highway lane line according to the first image;
mapping the highway lane lines to the second image according to the mapping relation between the first image and the second image so as to determine at least one vehicle identification range in the second image, wherein one vehicle identification range is created for every two adjacent highway lane lines;
identifying a target vehicle in the second image, comprising:
identifying a target vehicle in the at least one vehicle identification range.
3. The method of claim 2, further comprising:
obtaining a slope of an initial straight line mapped to each highway lane line in the second image;
marking the vehicle identification ranges created by the road lane lines corresponding to the two initial straight lines with the maximum slope as the own lane, and marking the rest vehicle identification ranges as non-own lanes;
identifying a target vehicle in the at least one vehicle identification range, comprising:
the lane change recognition method includes recognizing a target vehicle of a local lane in a vehicle recognition range marked as the local lane, recognizing a target vehicle of a non-local lane in a vehicle recognition range marked as the non-local lane, and recognizing a target vehicle of a lane change in a vehicle recognition range in which two adjacent vehicle recognition ranges are combined.
4. A vehicle control apparatus characterized by comprising:
the image acquisition module is used for acquiring a first image and a second image, wherein the first image is a color image or a brightness image, and the second image is a depth image;
the first identification module is used for identifying a target vehicle in the second image so as to acquire distance information of the target vehicle;
the first acquisition module is used for acquiring the azimuth angle of the target vehicle according to the first image or the second image;
the first determining module is used for determining the relative speed of the target vehicle according to the azimuth angle of the target vehicle and the intermediate frequency signal acquired by the constant carrier frequency radar;
a second determination module for determining a target vehicle region in the second image by identifying the target vehicle;
the second mapping module is used for mapping the target vehicle area to the first image according to the mapping relation between the first image and the second image so as to generate a vehicle lamp identification area in the first image;
a third identification module for identifying a turn signal of the target vehicle in the vehicle light identification area;
the control module is used for controlling the motion parameters of the main vehicle according to the distance information, the relative speed and the identified steering lamp of the target vehicle;
the calibration module is used for automatically calibrating the constant carrier frequency radar according to the identified azimuth angle of the target vehicle;
the first obtaining module is specifically configured to obtain an azimuth angle of the target vehicle according to a position of the target vehicle region in the second image; or acquiring the azimuth angle of the target vehicle according to the position of the car lamp identification area in the first image.
5. The apparatus of claim 4, further comprising:
the second identification module is used for identifying a highway lane line according to the first image;
the first mapping module is used for mapping the highway lane lines to the second image according to the mapping relation between the first image and the second image so as to determine at least one vehicle identification range in the second image, wherein one vehicle identification range is established between every two adjacent highway lane lines;
the first identification module is configured to:
the target vehicle is identified in the at least one vehicle identification range.
6. The apparatus of claim 5, further comprising:
a second obtaining module for obtaining a slope of an initial straight line mapped to each highway lane line in the second image;
the creating module is used for marking the vehicle identification ranges created by the road lane lines corresponding to the two initial straight lines with the maximum slope as the own lane and marking the rest vehicle identification ranges as non-own lanes;
the first identification module is configured to:
the lane change recognition method includes recognizing a target vehicle of a local lane in a vehicle recognition range marked as the local lane, recognizing a target vehicle of a non-local lane in a vehicle recognition range marked as the non-local lane, and recognizing a target vehicle of a lane change in a vehicle recognition range in which two adjacent vehicle recognition ranges are combined.
7. A vehicle, characterized by comprising:
the vehicle control apparatus according to any one of claims 4 to 6.
CN201610874368.XA 2016-09-30 2016-09-30 Vehicle control method and device and vehicle Active CN107886036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610874368.XA CN107886036B (en) 2016-09-30 2016-09-30 Vehicle control method and device and vehicle


Publications (2)

Publication Number Publication Date
CN107886036A CN107886036A (en) 2018-04-06
CN107886036B true CN107886036B (en) 2020-11-06

Family

ID=61770063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610874368.XA Active CN107886036B (en) 2016-09-30 2016-09-30 Vehicle control method and device and vehicle

Country Status (1)

Country Link
CN (1) CN107886036B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102627453B1 (en) * 2018-10-17 2024-01-19 삼성전자주식회사 Method and device to estimate position
CN110281923A (en) * 2019-06-28 2019-09-27 信利光电股份有限公司 A kind of vehicle auxiliary lane change method, apparatus and system
CN110596656B (en) * 2019-08-09 2021-11-16 山西省煤炭地质物探测绘院 Intelligent street lamp feedback compensation system based on big data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104952254B (en) * 2014-03-31 2018-01-23 比亚迪股份有限公司 Vehicle identification method, device and vehicle
CN104112118B (en) * 2014-06-26 2017-09-05 大连民族学院 Method for detecting lane lines for Lane Departure Warning System

Also Published As

Publication number Publication date
CN107886036A (en) 2018-04-06

Similar Documents

Publication Publication Date Title
CN108528431B (en) Automatic control method and device for vehicle running
CN107886770B (en) Vehicle identification method and device and vehicle
CN111712731B (en) Target detection method, target detection system and movable platform
US11836989B2 (en) Vehicular vision system that determines distance to an object
EP2910971B1 (en) Object recognition apparatus and object recognition method
US8854458B2 (en) Object detection device
CN108528448B (en) Automatic control method and device for vehicle running
CN109891262B (en) Object detecting device
JP5399027B2 (en) A device having a system capable of capturing a stereoscopic image to assist driving of an automobile
CN110386065B (en) Vehicle blind area monitoring method and device, computer equipment and storage medium
US9827956B2 (en) Method and device for detecting a braking situation
US8848980B2 (en) Front vehicle detecting method and front vehicle detecting apparatus
EP2669844B1 (en) Level Difference Recognition System Installed in Vehicle and Recognition Method executed by the Level Difference Recognition System
CN108528433B (en) Automatic control method and device for vehicle running
JP6457278B2 (en) Object detection apparatus and object detection method
CN107886729B (en) Vehicle identification method and device and vehicle
CN111699406B (en) Millimeter wave radar tracking detection method, millimeter wave radar and vehicle
US11351997B2 (en) Collision prediction apparatus and collision prediction method
US20150055120A1 (en) Image system for automotive safety applications
CN107886036B (en) Vehicle control method and device and vehicle
CN108536134B (en) Automatic control method and device for vehicle running
JP2013250907A (en) Parallax calculation device, parallax calculation method and parallax calculation program
CN110705445A (en) Trailer and blind area target detection method and device
JP2020197506A (en) Object detector for vehicles
Kaempchen et al. Fusion of laserscanner and video for advanced driver assistance systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant