WO2024108754A1 - Obstacle ranging method, device, vehicle and medium - Google Patents

Obstacle ranging method, device, vehicle and medium

Info

Publication number
WO2024108754A1
WO2024108754A1 (international application PCT/CN2023/072290)
Authority
WO
WIPO (PCT)
Prior art keywords: spot, light beam, obstacle, vehicle, light
Application number
PCT/CN2023/072290
Other languages
English (en)
French (fr)
Inventor
罗永官
Original Assignee
惠州市德赛西威汽车电子股份有限公司
Application filed by 惠州市德赛西威汽车电子股份有限公司 (Huizhou Desay SV Automotive Co., Ltd.)
Priority to KR1020237045435A (published as KR20240079191A)
Publication of WO2024108754A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/93 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the present application relates to the field of visual perception technology, for example, to obstacle ranging methods, devices, vehicles and media.
  • vehicle safety assisted driving technology provides vehicles with safety-assisted driving functions, thereby offering intelligent technical services that reduce traffic accidents caused by subjective factors of drivers.
  • pure visual perception technology is used for ranging. Obstacles in the image are identified according to preset obstacle types, and the distance of the obstacle is determined based on the image.
  • pure visual perception technology cannot, by itself, achieve intelligent tracking and precise ranging. The related art therefore has to be equipped with external devices such as ultrasonic radars; however, such external devices cannot perform intelligent tracking and precise ranging against a specific obstacle, and their detection distance is limited.
  • the present application provides an obstacle ranging method, device, vehicle and medium to achieve obstacle ranging based on pure vision.
  • a method for measuring obstacle distance comprising:
  • the light beam spot is the point where the light beam falls in the emission direction.
  • an obstacle ranging device comprising:
  • an information determination module configured to determine obstacle area information of an obstacle when determining that there is an obstacle in the driving direction of the vehicle according to the captured first driving image
  • a mode determination module configured to determine a light beam emission mode according to the current vehicle speed and the obstacle area information
  • a beam emitting module configured to control the radio frequency mechanisms on the left and right sides of the vehicle to emit beams in the beam emitting manner
  • An image acquisition module is configured to adjust a spot focusing parameter of a light beam spot in a spot focusing mode that matches the light beam emission mode, and obtain a second vehicle driving image including the adjusted light beam spot;
  • a distance determination module configured to determine the distance between the vehicle and the obstacle according to the second driving image
  • the light beam spot is the point where the light beam falls in the emission direction.
  • a vehicle comprising:
  • the radio frequency rotating component is connected to the left and right radio frequency mechanisms and is configured to control the light beam emission mode of the left and right radio frequency mechanisms;
  • the light spot focusing chip is configured to adjust the light spot focusing parameters of the light beam spot
  • the memory stores a computer program that can be executed by the at least one controller, and the computer program is executed by the at least one controller so that the at least one controller can execute the obstacle ranging method described in any embodiment of the present application.
  • a computer-readable storage medium stores computer instructions, and the computer instructions are configured to enable a processor to implement the obstacle ranging method described in any embodiment of the present application when executed.
  • FIG1 is a flow chart of an obstacle ranging method provided in Embodiment 1 of the present application.
  • FIG2 is a flow chart of an obstacle ranging method provided in Embodiment 2 of the present application.
  • FIG3 is an example diagram of a first driving image in an obstacle ranging method provided in Embodiment 2 of the present application.
  • FIG4a is an example diagram of determining the separation distance according to the light beam spot area in an obstacle ranging method provided in Embodiment 2 of the present application;
  • FIG4b is an example diagram of determining the interval distance according to the angle in an obstacle ranging method provided in Embodiment 2 of the present application;
  • FIG5 is a schematic diagram of the structure of an obstacle ranging device provided in Example 3 of the present application.
  • FIG6 is a schematic diagram of the structure of a vehicle for implementing the obstacle ranging method according to an embodiment of the present application.
  • FIG1 is a flow chart of an obstacle distance measurement method provided in the first embodiment of the present application.
  • the present embodiment is applicable to the case of obstacle distance measurement based on pure vision.
  • the method can be performed by an obstacle distance measurement device.
  • the obstacle distance measurement device can be implemented in the form of at least one of hardware and software.
  • the obstacle distance measurement device can be configured in a vehicle. As shown in FIG1 , the method includes:
  • the first driving image can be understood as an image captured by a front camera when the vehicle is driving.
  • the driving direction of the vehicle can be understood as the direction corresponding to the driving of the vehicle.
  • the obstacle can be understood as an object in the driving direction of the vehicle that may hinder the driving of the vehicle.
  • the obstacle area information can be understood as image information obtained by marking the area where the obstacle is located and the outline information of the obstacle.
  • the image of the vehicle's driving direction can be collected by the camera equipped on the vehicle, and can be transmitted in the form of video through a bus or other means.
  • the execution subject receives the first driving image
  • the video can be decomposed into each frame, and each frame can be used as the first driving image captured by the camera.
  • the first driving image is analyzed.
  • if an object different from the environmental characteristics appears in the first driving image and blocks the driving direction, for example a tall object, it can be understood that there is an obstacle in the driving direction. It can be assumed that this obstacle has a distance measurement requirement; the range and outline of the obstacle are then marked in the first driving image, and the marked image is used as the obstacle area information of the obstacle.
  • S120 Determine the light beam emission mode according to the current vehicle speed and obstacle area information.
  • the current vehicle speed can be understood as the current driving speed of the vehicle.
  • the light beam emission mode can be understood as different angles of light beam emission, wherein the light beam can be a light beam emitted by an infrared light lamp, etc. This embodiment only takes the light beam emitted by an infrared light lamp as an example of the light beam, and does not limit the light beam.
  • a vehicle speed acquisition instruction may be sent to the corresponding sensor, and the current vehicle speed sent back by the sensor is received.
  • the current vehicle speed can be compared with a preset speed threshold. When the current vehicle speed is less than or equal to the threshold, the vehicle is considered to be moving slowly and the obstacle may be a pedestrian. A beam emitted at too high an angle might irradiate human eyes and cause harm, so the beam is instead emitted toward the outline of the obstacle on the ground and controlled to track that outline.
  • the light beam can be emitted at an angle of 90 degrees to the vehicle itself, that is, parallel to the ground where the vehicle is located, so that the light beam can irradiate the surface of the obstacle, such as the rear of the vehicle.
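  • The speed-threshold decision above can be sketched in code. This is a minimal illustration, not the patent's implementation: the function name, the 30 km/h threshold, and the returned angle dictionaries are assumptions (the 90 degree and 60/30 degree values are taken from the example emission modes described here).

```python
def select_beam_emission_mode(current_speed_kmh, speed_threshold_kmh=30.0):
    """Pick a beam emission mode from the current vehicle speed.

    Above the threshold the obstacle is assumed to be a vehicle, so both
    beams are emitted at 90 degrees to the own vehicle (parallel to the
    ground) to strike the obstacle surface.  At or below the threshold
    the obstacle may be a pedestrian, so the beams are aimed down at the
    obstacle's outline on the ground to avoid irradiating human eyes.
    """
    if current_speed_kmh > speed_threshold_kmh:
        return {"target": "obstacle_surface",
                "left_angle_deg": 90.0, "right_angle_deg": 90.0}
    return {"target": "ground_outline",
            "left_angle_deg": 60.0, "right_angle_deg": 30.0}
```

The returned angles would then be turned into rotation-angle instructions for the RF rotating component, as the surrounding paragraphs describe.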
  • the left and right radio frequency mechanisms can be understood as mechanisms that emit light beams, such as being arranged at the left and right headlights.
  • the left and right radio frequency mechanisms only emit light beams and cannot adjust the angles of the light beams. Therefore, it is necessary to add radio frequency rotating components to control the rotation of the left and right radio frequency mechanisms.
  • the RF rotating component can be controlled to rotate according to the light beam emission mode, and a corresponding rotation angle instruction can be generated according to the light beam emission mode.
  • the rotation angle instruction is transmitted to the RF rotating component to make the RF rotating component rotate at a corresponding angle to control the left and right RF mechanisms to rotate to the corresponding angle to emit a light beam.
  • for example, when the light beam emission mode is emission at an angle of 90 degrees to the own vehicle, the RF rotating component may be controlled to rotate the left and right RF mechanisms to 90 degrees to the ground, so that the beams they emit irradiate the surface of the obstacle. When the emission mode is a left beam at 60 degrees and a right beam at 30 degrees to the own vehicle, the RF rotating component may rotate the left RF mechanism to 60 degrees and the right RF mechanism to 30 degrees to the ground, so that the emitted beams irradiate the outline of the obstacle on the ground.
  • the spot focusing method can be understood as a method for adjusting the spot focusing parameters of the light beam.
  • the light beam spot is the point where the light beam falls in the emission direction, that is, the spot presented when the light beam encounters an obstacle in the emission direction.
  • the spot focusing parameters can be understood as parameters for adjusting the intensity and size of the light beam.
  • the second driving image can be understood as an image captured after adjusting the spot focusing parameters of the light beam spot.
  • the left and right radio frequency mechanisms only emit light beams and cannot adjust the spot focusing parameters of the light beams. Therefore, it is necessary to add a spot focusing chip to adjust the spot focusing parameters of the light beam spot.
  • the light spot focusing parameter can be adjusted according to the clarity of the light beam spot in the driving image, so that the light beam spot can be clearly displayed in the driving image.
  • preset gears may be used, each corresponding to a different light spot focusing parameter. The matching gear can be found from the clarity of the current light beam spot or from the roughly estimated distance to the obstacle, and the light spot focusing chip can be adjusted to that gear; that is, the light spot focusing parameter of the light beam spot is adjusted to the target focusing parameter.
  • alternatively, the light spot focusing parameter can be adjusted by the light spot focusing chip to a preset threshold. In this case there is no requirement on the clarity in the driving image; it is only necessary to ensure that a light beam spot is present. When there is no light beam spot in the driving image, that is, when the obstacle may be too far away, the light spot focusing chip can adjust the parameters again. When the adjustment is completed, a second driving image captured by the camera and containing the adjusted light beam spot can be obtained.
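  • The gear-based focusing adjustment can be sketched as follows. The table layout, clarity scale, and function name are illustrative assumptions; the fallback to a preset focusing-parameter threshold when no gear matches mirrors the threshold-based mode described above.

```python
def choose_focus_gear(spot_clarity, gear_table, fallback_param):
    """Select the spot-focusing gear whose clarity interval covers the
    measured spot clarity.

    gear_table maps a gear index to (min_clarity, max_clarity,
    focus_parameter).  When no interval matches (e.g. no spot is
    visible because the obstacle is too far away), fall back to a
    preset focusing-parameter threshold so a spot can appear at all.
    """
    for gear in sorted(gear_table):
        lo, hi, focus_param = gear_table[gear]
        if lo <= spot_clarity <= hi:
            return gear, focus_param
    return None, fallback_param
```

In a second pass the same lookup could be repeated against a lower clarity standard, as the second-standard adjustment in Embodiment 2 describes.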
  • the interval distance may be understood as the distance between the vehicle and the position of the obstacle closest to the vehicle.
  • a size detection instruction can be sent to a preset spot size detection unit. The spot size detection unit can periodically detect the spots in the second driving image, obtain the spot areas of the left and right light beam spots, and substitute each spot area and its corresponding parameters into the preset first spacing distance formula to calculate the spacing distance between the vehicle and the obstacle.
  • an angle detection instruction can be sent to a preset angle detection unit. The angle detection unit can periodically detect the angles between the left and right radio frequency mechanisms and the vehicle, and substitute the angle values and the predetermined installation heights of the left and right radio frequency mechanisms into a predetermined second spacing distance formula to calculate the spacing distance between the vehicle and the obstacle.
  • the spacing distance can be sent to the corresponding display screen in the vehicle to display the spacing distance.
  • the spacing distance between the vehicle and the obstacle can also be marked on left and right auxiliary lines (such as the left and right auxiliary lines in the reversing image) shown on the central control screen, so that the driver can see the spacing distance intuitively.
  • when the left and right spacing distances differ, the side with the shorter spacing distance can be used as the spacing distance between the obstacle and the vehicle, or the two spacing distances between the left and right sides of the vehicle and the obstacle can be displayed at the same time.
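  • The display choice just described can be sketched as a small helper (the function and parameter names are assumptions for illustration):

```python
def displayed_distance(left_distance_m, right_distance_m, show_both=False):
    """When the left and right spacing distances differ, either report
    the shorter (nearest, and therefore safety-relevant) side alone,
    or report both sides at once for the driver display."""
    if show_both:
        return (left_distance_m, right_distance_m)
    return min(left_distance_m, right_distance_m)
```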
  • the first embodiment provides an obstacle ranging method.
  • the obstacle area information of the obstacle is determined in combination with the current vehicle speed to determine the light beam emission mode;
  • the left and right radio frequency mechanisms on the vehicle are controlled to emit light beams in a light beam emission mode;
  • the light spot focusing parameters of the light beam spot are adjusted in a light spot focusing mode that matches the light beam emission mode to obtain a second driving image containing the adjusted light beam spot; based on the second driving image, the interval distance between the vehicle and the obstacle is determined.
  • a light beam is emitted in a light beam emission mode and a light spot focusing mode that match the obstacle area information and the vehicle speed, and the image containing the light beam spot is analyzed to determine the interval distance.
  • Intelligent tracking and precise ranging of obstacles are achieved based on visual perception, which reduces the hardware architecture cost compared to the radar ranging method.
  • FIG2 is a flow chart of an obstacle ranging method provided in Example 2 of the present application. This embodiment is an optimization based on the above embodiment. As shown in FIG2, the method includes:
  • FIG3 is an example diagram of the first driving image in an obstacle ranging method provided in Example 2 of the present application.
  • a represents the vehicle
  • b represents an obstacle
  • c represents a baseline.
  • the obstacle area information of the obstacle can be determined based on the first driving image.
  • the upper layer is the first driving image captured by the camera, which includes an irregular obstacle b whose contour has been calibrated, and a baseline c obtained by connecting two points of the irregular obstacle closest to the vehicle a, thereby obtaining obstacle area information.
  • a vehicle speed acquisition instruction may be sent to a corresponding sensor to receive the current vehicle speed sent by the corresponding sensor.
  • S203 Determine whether the current vehicle speed is greater than a preset speed threshold.
  • the speed threshold may be understood as a threshold used to determine whether the vehicle speed is too fast.
  • a speed threshold may be preset, and when the current vehicle speed sent by the sensor is received, the current vehicle speed may be compared with the preset speed threshold to determine whether the current vehicle speed is greater than the preset speed threshold.
  • the beam emission mode is set at an angle of 90 degrees between the beam and the vehicle.
  • the vehicle speed when the current vehicle speed is greater than a preset speed threshold, it can be considered that the vehicle speed is fast and the obstacle may be a vehicle.
  • the light beam can be emitted at an angle of 90 degrees to the vehicle itself, that is, parallel to the ground where the vehicle is located, so that the light beam can illuminate the surface of the obstacle, such as the rear of the vehicle.
  • the spot landing point can be understood as the point where the light beam strikes the ground along the direction perpendicular to the front of the vehicle.
  • the reference line can be understood as a straight line used to mark the outline of the obstacle projected on the ground.
  • if the obstacle area information shows that the area of the obstacle closest to the vehicle is identified as a straight line (that is, the shape of the obstacle is relatively regular), this straight line is used as the baseline. If the area closest to the vehicle is identified as relatively scattered points (that is, the shape of the obstacle is irregular), a tangent line of the obstacle area can be identified and used as the reference line; for example, the two scattered points closest to the vehicle are found and connected to obtain the tangent line.
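  • The connect-the-two-closest-points construction for an irregular obstacle can be sketched as below. The coordinate convention (ground-plane metres, vehicle at the origin) and the function name are assumptions, not part of the patent.

```python
import math

def baseline_from_contour(points, vehicle_pos=(0.0, 0.0)):
    """For an irregular obstacle, find the two contour points closest
    to the vehicle and connect them; this connecting line (a tangent of
    the obstacle area) serves as the reference line.  Points are (x, y)
    ground-plane coordinates with the vehicle at the origin by default.
    """
    vx, vy = vehicle_pos
    nearest = sorted(points,
                     key=lambda p: math.hypot(p[0] - vx, p[1] - vy))[:2]
    return tuple(nearest)  # the two endpoints of the baseline
```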
  • the left and right radio frequency mechanisms each have a spot landing point perpendicular to the front of the vehicle. The angles of the left and right radio frequency mechanisms can be adjusted so that the two spots can fall on this straight line.
  • the radio frequency rotating component is connected to the left and right radio frequency mechanisms.
  • the RF rotating component can be controlled to rotate according to the light beam emission mode, and a corresponding rotation angle instruction can be generated according to the light beam emission mode.
  • the rotation angle instruction is transmitted to the RF rotating component to make the RF rotating component rotate at a corresponding angle to control the left and right RF mechanisms to rotate to the corresponding angle to emit a light beam.
  • the RF rotating component can be controlled to rotate according to the light beam emission mode, and an angle instruction corresponding to a rotation of 90 degrees can be generated according to the light beam emission mode.
  • the rotation angle instruction is transmitted to the RF rotating component to rotate the RF rotating component according to the corresponding angle to control the left and right RF mechanisms to rotate to 90 degrees to emit a light beam.
  • the light spot focusing mode is to focus the light spot according to the obstacle area information, adjust the light spot focusing parameters of the light beam spot, and obtain a second driving image including the adjusted light beam spot.
  • the light spot focusing parameter can be adjusted according to the clarity of the light beam spot in the obstacle area information, so that the light beam spot can be clearly displayed in the driving image.
  • a gear can be preset, and each gear corresponds to a different light spot focusing parameter.
  • the gear matching the current light beam spot clarity or the roughly calculated obstacle distance can be found, and the light spot focusing chip can be adjusted to the corresponding gear.
  • the focus parameter of the light beam spot is adjusted to the target focus parameter.
  • the step of adjusting the spot focusing parameter of the light beam spot and obtaining the second driving image including the adjusted light beam spot may include:
  • the spot clarity can be understood as the clarity of the light beam spot when displayed in the driving image.
  • the obstacle area information of each frame can be obtained in real time, the obstacle area information can include a light beam spot, and the spot clarity of the light beam spot can be identified according to a pre-set method.
  • the spot focusing parameters are adjusted through the spot focusing chip.
  • the first spot clarity standard can be understood as a standard used to indicate the spot clarity that needs to be achieved.
  • the spot clarity can be compared with a preset first spot clarity standard.
  • a preset gear can be used, and each gear corresponds to a different spot focusing parameter.
  • the difference between the current spot clarity and the first spot clarity standard can be calculated, or the matching gear can be determined from that difference or from the roughly estimated distance to the obstacle.
  • the spot focusing chip is adjusted to the corresponding gear, that is, the spot focusing parameters of the light beam spot are adjusted so that the light beam spot can be clearly displayed in the driving image.
  • a second driving image captured by the camera and containing the adjusted light beam spot can be obtained.
  • the light spot focusing mode is to focus the light spot with a preset focusing parameter threshold, adjust the light spot focusing parameter of the light beam spot, and obtain a second driving image containing the adjusted light beam spot.
  • the focusing parameter threshold can be understood as a parameter value that enables the beam spot to be displayed.
  • the light spot focusing mode is such that the light spot focusing parameters can be adjusted through a light spot focusing chip in the form of a preset threshold.
  • there is no requirement on the clarity in the driving image; it is only necessary to ensure that a light beam spot is present in the driving image.
  • a second driving image captured by the camera containing the adjusted light beam spot can be obtained.
  • a preset focus parameter threshold may be obtained, and the spot focus parameter may be adjusted to the focus parameter threshold through the spot focus chip.
  • the spot focusing parameter is adjusted for the second time through the spot focusing chip.
  • the second spot clarity standard can be understood as a standard for indicating the spot clarity that needs to be achieved, wherein the second spot clarity standard is different from the first spot clarity standard.
  • the second spot clarity standard can be lower than the first spot clarity standard.
  • the spot clarity does not meet the preset first spot clarity standard, such as when there is no light beam spot in the driving image, that is, the obstacle may be too far away, the parameters can be adjusted again through the light spot focusing chip.
  • each gear corresponds to a different light spot focusing parameter
  • the difference between the current spot clarity and the second spot clarity standard can be calculated, or the gear at which the difference or distance is located can be determined based on the roughly calculated distance to the obstacle, and the light spot focusing chip is adjusted to the corresponding gear, that is, the light spot focusing parameters of the light beam spot are adjusted so that the light beam spot can be displayed in the driving image.
  • a second driving image captured by the camera and containing the adjusted light beam spot can be obtained.
  • S210 Determine an area value of the light beam spot according to the second driving image.
  • the area value may be understood as the area value of the light beam spot displayed in the image.
  • a light spot area detection unit may be pre-set to identify the left and right light beam spots in the second driving image, detect the areas of the two light beam spots, and obtain the area values of the left and right light beam spots.
  • the relationship coefficient of the area value corresponding to the spacing distance can be set in advance according to the spot focusing parameters, and a correspondence table between the spot focusing parameters and the relationship coefficient can be established.
  • the relationship coefficient can be determined according to the current spot focusing parameters, and the spacing distance between the vehicle and the obstacle can be determined according to the product of the relationship coefficient and the area value.
  • the interval distance can be calculated by the following formula: L = k × X, where L is the distance between the vehicle and the obstacle, k is the relationship coefficient, and X is the area value of the beam spot.
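  • The first spacing-distance formula, with the relationship coefficient looked up from the current spot-focusing parameter as described above, can be sketched as follows. The table contents are made-up calibration values for illustration only.

```python
def distance_from_spot_area(area_value, focus_param, coeff_table):
    """First spacing-distance formula, L = k * X: k is the relationship
    coefficient looked up from the current spot-focusing parameter, and
    X is the detected area value of the light beam spot.
    """
    k = coeff_table[focus_param]  # coefficient depends on focusing gear
    return k * area_value
```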
  • the installation height information can be understood as the distance from the left and right radio frequency mechanisms to the ground.
  • the installation height information of the left and right radio frequency mechanisms can be determined by measurement, the installation height information can be input into a memory for storage, and the memory can be searched to obtain the installation height information of the left and right radio frequency mechanisms.
  • S213 Determine angle information between the left and right radio frequency mechanisms and the vehicle according to the second driving image.
  • a rotation angle detection unit may be pre-set to detect the rotation angles of the left and right radio frequency mechanisms respectively, thereby determining the angle information between the left and right radio frequency mechanisms and the vehicle.
  • S214 Determine the distance between the vehicle and the obstacle based on the angle information and the installation height information.
  • angle information and the installation height information may be substituted into a trigonometric function formula to determine the distance between the vehicle and the obstacle.
  • the interval distance can be calculated by the following trigonometric formula: L = H × tan θ, where L represents the distance between the vehicle and the obstacle, H is the installation height value, and θ is the angle between the RF mechanism and the vehicle.
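  • A plausible reading of this trigonometric relation (the exact formula is elided in the source text, so this form is an assumption): with the RF mechanism mounted at height H above the ground and the beam making angle θ with the vehicle body, where θ = 90 degrees means parallel to the ground, the ground landing point lies at horizontal distance H·tan(θ).

```python
import math

def distance_from_angle(install_height_m, angle_deg):
    """Second spacing-distance formula (assumed form): the beam leaves
    a mechanism mounted install_height_m above the ground at angle_deg
    to the vehicle body (90 degrees = parallel to the ground).  The
    horizontal distance to the ground landing point is H * tan(theta).
    """
    return install_height_m * math.tan(math.radians(angle_deg))
```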
  • the second embodiment provides an obstacle ranging method, which obtains obstacle area information by identifying the area where the obstacle is located, determines the beam emission mode according to the comparison result between the current vehicle speed and the speed threshold combined with the obstacle area information, controls the left and right radio frequency mechanisms to rotate according to the rotation angle in the beam emission mode through the radio frequency rotating component, so that the beam landing point can always fall on the reference line corresponding to the obstacle area, and adjusts the focusing parameters through the light spot focusing chip so that the light beam spot can be clearly displayed in the driving image, thereby realizing real-time tracking of the obstacle.
  • the area value or angle information of the light beam spot can be determined, and the area value and angle information are brought into the corresponding interval distance calculation formula to determine the interval distance between the vehicle and the obstacle.
  • compared with the related visual perception technology, the hardware architecture for radar ranging and fusion is removed, which saves resources and costs. Only the light spot focusing chip and the radio frequency rotating component need to be added to achieve accurate ranging of the obstacle, providing a complete pure-vision precise ranging solution at a controllable cost.
  • a schematic diagram is provided for illustrating a method for calculating the interval distance based on the area of the light beam spot when the current vehicle speed is greater than a preset speed threshold
  • a schematic diagram is provided for illustrating a method for calculating the interval distance based on the angle when the current vehicle speed is less than or equal to the preset speed threshold.
  • FIG4a is an example diagram of determining the spacing distance based on the light beam spot area in an obstacle ranging method provided in Embodiment 2 of the present application.
  • a single-sided radio frequency mechanism is taken as an example to determine the distance between the same obstacle and the vehicle at different times.
  • E3 is the radio frequency mechanism on one side
  • B3 is the obstacle
  • X1 is the beam spot area of obstacle B3 at the previous moment
  • X2 is the beam spot area of obstacle B3 at the current moment
  • L1 is the distance between obstacle B3 and the radio frequency mechanism at the previous moment
  • L2 is the distance between obstacle B3 and the radio frequency mechanism at present.
  • the radio frequency rotating component can be controlled to adjust the angle of the radio frequency mechanism E3 to 90 degrees, that is, to emit a light beam parallel to the ground.
  • FIG. 4 b is an example diagram of determining the interval distance based on the angle in an obstacle ranging method provided in Embodiment 2 of the present application.
  • A2 represents the own vehicle
  • E1 represents the left RF mechanism
  • E2 represents the right RF mechanism
  • F1 represents the right beam landing point at the previous moment
  • F2 represents the left beam landing point at the previous moment
  • B2 represents the obstacle
  • C1 represents the baseline at the previous moment
  • C2 represents the current baseline
  • F3 represents the current right beam landing point
  • F4 represents the current left beam landing point.
  • the baseline can be determined according to the obstacle area information, and E1 and E2 are controlled to rotate through the RF rotating component so that the light beam always falls on the baseline.
  • FIG5 is a schematic diagram of the structure of an obstacle distance measuring device provided in the third embodiment of the present application.
  • the device includes: an information determination module 41, a mode determination module 42, a light beam emission module 43, an image acquisition module 44 and a distance determination module 45.
  • the information determination module 41 is configured to determine obstacle area information of the obstacle when it is determined that there is an obstacle in the driving direction of the vehicle according to the captured first driving image.
  • the mode determination module 42 is configured to determine the light beam emission mode according to the current vehicle speed and obstacle area information.
  • the beam emitting module 43 is configured to control the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode.
  • the image acquisition module 44 is configured to adjust the light spot focusing parameters of the light beam spot in a light spot focusing manner that matches the light beam emission manner, and obtain a second vehicle driving image including the adjusted light beam spot.
  • a distance determination module 45 is configured to determine the distance between the vehicle and the obstacle according to the second driving image
  • the beam spot is the point where the light beam falls in the emission direction.
  • the obstacle distance measurement device provided in the third embodiment emits a light beam in a light beam emission mode and a light spot focusing mode that matches the obstacle area information and the vehicle speed, analyzes the image containing the light beam spot, and determines the interval distance. Based on visual perception, intelligent tracking and accurate distance measurement of obstacles are achieved, which reduces the hardware architecture cost compared to the radar distance measurement method.
  • the mode determination module 42 is configured to:
  • obtain the current vehicle speed of the vehicle;
  • if the current vehicle speed is greater than a preset speed threshold, take emitting the light beam at an angle of 90 degrees to the vehicle as the light beam emission mode;
  • otherwise, determine the reference line corresponding to the obstacle according to the obstacle area information, and take placing the spot landing point of the light beam on the reference line as the light beam emission mode.
  • the light beam emission module 43 is configured to:
  • control, through the radio frequency rotating component, the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
  • the radio frequency rotating component is connected to the left and right radio frequency mechanisms.
  • when the spot focusing mode is spot focusing according to the obstacle area information, the image acquisition module 44 is configured to:
  • extract the spot clarity of the light beam spot in the obstacle area information;
  • when the spot clarity does not meet a preset first spot clarity standard, adjust the spot focusing parameters through the light spot focusing chip;
  • obtain a captured second driving image including the adjusted light beam spot.
  • when the spot focusing mode is spot focusing with a preset focusing parameter threshold, the image acquisition module 44 is configured to:
  • extract the spot clarity of the light beam spot in the obstacle area information;
  • adjust the spot focusing parameters through the light spot focusing chip based on the focusing parameter threshold;
  • when the spot clarity does not meet a preset second spot clarity standard, adjust the spot focusing parameters a second time through the light spot focusing chip;
  • obtain a captured second driving image including the adjusted light beam spot.
  • the distance determination module 45 is configured to:
  • determine the area value of the light beam spot according to the second driving image;
  • determine the interval distance between the vehicle and the obstacle according to the area value.
  • the distance determination module 45 may also be configured to:
  • obtain installation height information of the left and right radio frequency mechanisms;
  • determine angle information between the left and right radio frequency mechanisms and the vehicle according to the second driving image;
  • determine the interval distance between the vehicle and the obstacle according to the angle information and the installation height information.
  • the obstacle ranging device provided in the embodiments of the present application can execute the obstacle ranging method provided in any embodiment of the present application, and has the functional modules and effects corresponding to the execution method.
  • Figure 6 is a structural schematic diagram of a vehicle provided in Example 4 of the present application.
  • the vehicle includes a controller 51, a memory 52, an input device 53, an output device 54, a radio frequency rotating component 55 and a light spot focusing chip 56; the number of controllers 51 in the vehicle can be at least one, and FIG. 6 takes one controller 51 as an example; the controller 51, memory 52, input device 53, output device 54, radio frequency rotating component 55 and light spot focusing chip 56 in the vehicle can be connected via a bus or other means, and FIG. 6 takes the connection via a bus as an example.
  • the memory 52 is a computer-readable storage medium that can be configured to store software programs, computer executable programs, and modules, such as the program instructions/modules corresponding to the obstacle ranging method in the embodiment of the present application.
  • the controller 51 executes various functional applications and data processing of the vehicle by running the software programs, instructions and modules stored in the memory 52, that is, realizing the above-mentioned obstacle ranging method.
  • the memory 52 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required for a function; the data storage area may store data created according to the use of the terminal, etc.
  • the memory 52 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 52 may include a memory remotely arranged relative to the controller 51, and these remote memories may be connected to the vehicle via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 53 may be configured to receive input digital or character information and generate key signal input related to user settings and function control of the cloud platform.
  • the output device 54 may include a display device such as a display screen.
  • the radio frequency rotating component 55 is connected to the left and right radio frequency mechanisms, and can be configured to control the light beam emission mode of the left and right radio frequency mechanisms.
  • the light spot focusing chip 56 can be configured to adjust the spot focusing parameters of the light beam spot.
  • Embodiment 5 of the present application further provides a storage medium including computer executable instructions, wherein the computer executable instructions are configured to execute an obstacle ranging method when executed by a computer processor, the method comprising:
  • when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, determining obstacle area information of the obstacle;
  • determining a light beam emission mode according to the current vehicle speed and the obstacle area information;
  • controlling the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
  • adjusting the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot;
  • determining the interval distance between the vehicle and the obstacle according to the second driving image;
  • the light beam spot is the point where the light beam falls in the emission direction.
  • the computer executable instructions of the storage medium provided in the embodiments of the present application are not limited to the method operations described above, and can also execute related operations in the obstacle ranging method provided in any embodiment of the present application.
  • those skilled in the art can clearly understand that the present application can be implemented with the help of software and necessary general-purpose hardware, and of course it can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application can be essentially or the part that contributes to the relevant technology can be embodied in the form of a software product, and the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disk, etc., including a number of instructions for a computer device (which can be a personal computer, server, or network device, etc.) to execute the methods described in each embodiment of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

An obstacle ranging method, device, vehicle and medium. The method includes: when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, determining obstacle area information of the obstacle (S110); determining a light beam emission mode according to the current vehicle speed and the obstacle area information (S120); controlling the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode (S130); adjusting the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot (S140); and determining the interval distance between the vehicle and the obstacle according to the second driving image. With this method, light beams are emitted in a light beam emission mode and spot focusing mode matched to the obstacle area information and the vehicle speed, and the image containing the light beam spot is analyzed to determine the interval distance. Intelligent tracking and accurate ranging of obstacles are achieved on the basis of visual perception, and the hardware architecture cost is reduced compared with radar ranging methods.

Description

Obstacle ranging method, device, vehicle and medium
This application claims priority to Chinese patent application No. 202211496847.4, filed with the Chinese Patent Office on November 25, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of visual perception, and for example to an obstacle ranging method, device, vehicle and medium.
Background
With the development of road traffic, especially highway systems, the traffic accident rate has also shown an upward trend, and traffic safety has increasingly become a focus of public attention. Therefore, research on vehicle safety-assisted driving technology provides vehicles with safety-assisted driving functions, thereby offering intelligent technical services for reducing traffic accidents caused by subjective factors of the driver.
In the related art, purely visual perception technology is used for ranging: obstacles in the image are identified according to preset obstacle types, and the distance of the obstacle is determined from the image. However, purely visual perception technology cannot achieve intelligent tracking and accurate ranging, so the related art must be equipped with external devices such as ultrasonic radar. Yet external devices such as ultrasonic radar cannot perform intelligent tracking and accurate ranging for a specific obstacle, and their detection range is limited.
Summary
The present application provides an obstacle ranging method, device, vehicle and medium, so as to realize ranging of obstacles based on pure vision.
According to a first aspect of the present application, an obstacle ranging method is provided, the method including:
when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, determining obstacle area information of the obstacle;
determining a light beam emission mode according to the current vehicle speed and the obstacle area information;
controlling the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
adjusting the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot;
determining the interval distance between the vehicle and the obstacle according to the second driving image;
wherein the light beam spot is the point where the light beam falls in the emission direction.
According to a second aspect of the present application, an obstacle ranging device is provided, the device including:
an information determination module, configured to determine obstacle area information of the obstacle when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle;
a mode determination module, configured to determine a light beam emission mode according to the current vehicle speed and the obstacle area information;
a light beam emission module, configured to control the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
an image acquisition module, configured to adjust the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot;
a distance determination module, configured to determine the interval distance between the vehicle and the obstacle according to the second driving image;
wherein the light beam spot is the point where the light beam falls in the emission direction.
According to a third aspect of the present application, a vehicle is provided, the vehicle including:
at least one controller;
a radio frequency rotating component;
a light spot focusing chip;
and
a memory communicatively connected to the at least one controller;
wherein the radio frequency rotating component is connected to the left and right radio frequency mechanisms and is configured to control the light beam emission mode of the left and right radio frequency mechanisms;
the light spot focusing chip is configured to adjust the spot focusing parameters of the light beam spot;
the memory stores a computer program executable by the at least one controller, and the computer program is executed by the at least one controller, so that the at least one controller can execute the obstacle ranging method described in any embodiment of the present application.
According to another aspect of the present application, a computer-readable storage medium is provided, the computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a processor, when executing them, to implement the obstacle ranging method described in any embodiment of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of an obstacle ranging method provided in Embodiment 1 of the present application;
FIG. 2 is a flowchart of an obstacle ranging method provided in Embodiment 2 of the present application;
FIG. 3 is an example diagram of a first driving image in an obstacle ranging method provided in Embodiment 2 of the present application;
FIG. 4a is an example diagram of determining the interval distance according to the light beam spot area in an obstacle ranging method provided in Embodiment 2 of the present application;
FIG. 4b is an example diagram of determining the interval distance according to the angle in an obstacle ranging method provided in Embodiment 2 of the present application;
FIG. 5 is a structural schematic diagram of an obstacle ranging device provided in Embodiment 3 of the present application;
FIG. 6 is a structural schematic diagram of a vehicle implementing the obstacle ranging method of the embodiments of the present application.
Detailed Description
It should be noted that the terms "first", "second", etc. in the specification and claims of the present application and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device including a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product or device.
Embodiment 1
FIG. 1 is a flowchart of an obstacle ranging method provided in Embodiment 1 of the present application. This embodiment is applicable to obstacle distance measurement based on pure vision. The method can be executed by an obstacle ranging device, which can be implemented in the form of at least one of hardware and software and can be arranged in a vehicle. As shown in FIG. 1, the method includes:
S110: When it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, obstacle area information of the obstacle is determined.
In this embodiment, the first driving image can be understood as an image captured by the front camera while the vehicle is driving. The vehicle driving direction can be understood as the direction in which the vehicle is traveling. An obstacle can be understood as an object in the driving direction that may obstruct the vehicle. The obstacle area information can be understood as image information obtained by marking the area where the obstacle is located and the outline information of the obstacle.
For example, the scene in the driving direction can be captured by a camera mounted on the vehicle and transmitted in the form of video over a bus or other means. When the execution subject receives the first driving image, the video can be decomposed into individual frames, and each frame can be taken as a first driving image captured by the camera. The first driving image is then analyzed: when an object different from the environmental features appears in the first driving image and blocks the driving direction, such as an object with height, it can be understood that there is an obstacle in the driving direction, and it can be assumed by default that this obstacle needs ranging. The range and outline of the obstacle are then marked in the first driving image, and the marked image is taken as the obstacle area information of the obstacle.
S120: A light beam emission mode is determined according to the current vehicle speed and the obstacle area information.
In this embodiment, the current vehicle speed can be understood as the current traveling speed of the vehicle. The light beam emission mode can be understood as different angles at which the light beam is emitted, where the light beam can be, for example, a beam emitted by an infrared lamp. This embodiment only takes a beam emitted by an infrared lamp as an example and does not limit the light beam.
For example, when there is an obstacle in the driving direction of the vehicle, a speed acquisition instruction can be sent to the corresponding sensor, and the current vehicle speed sent by the sensor is received. The current vehicle speed can be compared with a preset speed threshold. When the current vehicle speed is less than or equal to the preset speed threshold, the vehicle speed can be considered slow and the obstacle may be a pedestrian or the like; if the beam emission angle is too high, the beam may shine into human eyes and hurt the pedestrian. In this case the beam can be emitted onto the outline of the obstacle on the ground, and the beam is controlled to track the outline of the obstacle. When the current vehicle speed is greater than the preset speed threshold, the vehicle speed can be considered fast and the obstacle may be a vehicle, so the beam can be emitted at an angle of 90 degrees to the own vehicle, i.e. parallel to the ground on which the vehicle is located, so that the beam can shine onto the surface of the obstacle, for example onto the rear of a vehicle.
S130: The radio frequency mechanisms on the left and right sides of the vehicle are controlled to emit light beams in the light beam emission mode.
In this embodiment, the left and right radio frequency mechanisms can be understood as mechanisms that emit light beams, which can be arranged, for example, at the left and right headlights.
It should be noted that the left and right radio frequency mechanisms only emit light beams and cannot adjust the beam emission angle themselves, so a radio frequency rotating component needs to be added to control the rotation of the left and right radio frequency mechanisms.
For example, after the light beam emission mode is determined according to the current vehicle speed, the radio frequency rotating component can be controlled to rotate according to the light beam emission mode: a corresponding rotation angle instruction can be generated from the light beam emission mode and transmitted to the radio frequency rotating component, so that the radio frequency rotating component rotates by the corresponding angle and the left and right radio frequency mechanisms are rotated to the corresponding angle to emit the light beams.
For example, the light beam emission mode may be to emit the beam at an angle of 90 degrees to the own vehicle, in which case the radio frequency rotating component can be controlled to rotate the left and right radio frequency mechanisms to 90 degrees with the ground, so that the beams they emit shine onto the surface of the obstacle. Alternatively, the light beam emission mode may be to emit the left beam at an angle of 60 degrees to the own vehicle and the right beam at an angle of 30 degrees, corresponding to an obstacle inclined to the right; in this case the radio frequency rotating component can be controlled to rotate the left radio frequency mechanism to 60 degrees with the ground and the right radio frequency mechanism to 30 degrees with the ground, so that the beams emitted by the left and right radio frequency mechanisms shine onto the outline of the obstacle on the ground.
S140: The spot focusing parameters of the light beam spot are adjusted in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot.
In this embodiment, the spot focusing mode can be understood as the way in which the spot focusing parameters of the light beam spot are adjusted. The light beam spot is the point where the light beam falls in the emission direction, i.e. the spot presented when the beam meets the obstacle in the emission direction. The spot focusing parameters can be understood as parameters used to adjust the intensity and size of the beam. The second driving image can be understood as an image captured after the spot focusing parameters of the light beam spot have been adjusted.
It should be noted that the left and right radio frequency mechanisms only emit light beams and cannot adjust the spot focusing parameters of the beam, so a light spot focusing chip needs to be added to adjust the spot focusing parameters of the light beam spot.
For example, when the light beam emission mode is to emit the beam at an angle of 90 degrees to the own vehicle, the spot focusing parameters can be adjusted according to the clarity of the light beam spot in the driving image, so that the light beam spot remains clearly displayed in the driving image. For instance, preset gears can be used, each gear corresponding to different spot focusing parameters; a matching gear can be found from the current clarity of the light beam spot or from a roughly calculated distance to the obstacle, and the light spot focusing chip can be adjusted to the corresponding gear, i.e. the spot focusing parameters of the light beam spot are adjusted to target focusing parameters. When the light beam emission mode is to emit the beam onto the outline of the obstacle on the ground and control the beam to track the outline, the spot focusing parameters can be adjusted through the light spot focusing chip in the form of a preset threshold. In this case there is no requirement on clarity in the driving image; it is only necessary to ensure that there is a light beam spot in the driving image. If there is no light beam spot in the driving image, i.e. the obstacle may be too far away, the parameters can be adjusted again through the light spot focusing chip. After the adjustment is completed, the second driving image including the adjusted light beam spot captured by the camera can be obtained.
S150: The interval distance between the vehicle and the obstacle is determined according to the second driving image.
In this embodiment, the interval distance can be understood as the distance between the vehicle and the position of the obstacle nearest to the vehicle.
For example, when the vehicle speed is greater than the set threshold, a size detection instruction can be sent to a preset spot size detection unit, which can then periodically detect the spot areas in the second driving image, obtain the spot areas of the left and right spots, and substitute the spot areas and the corresponding parameters into a preset first interval distance formula to calculate the interval distance between the vehicle and the obstacle. When the vehicle speed is less than or equal to the set threshold, an angle detection instruction can be sent to a preset angle detection unit, which can then periodically detect the angles between the left and right radio frequency mechanisms and the vehicle, and substitute the angle values and the predetermined installation heights of the left and right radio frequency mechanisms into a preset second interval distance formula to calculate the interval distance between the vehicle and the obstacle. The interval distance can be sent to a corresponding display screen in the vehicle for display. For example, the interval distances between the own vehicle and the obstacle can be marked on left and right auxiliary lines (such as the left and right auxiliary lines in a reversing image) on the screen of the center console, so that the driver can see the interval distance intuitively.
For example, when the obstacle is inclined relative to the vehicle, the corresponding left and right spot areas differ, or the corresponding angles of the left and right radio frequency mechanisms differ, so the left and right interval distances differ. The side with the shorter interval distance can be taken as the interval distance between the obstacle and the own vehicle, or the two interval distances between the left and right sides of the own vehicle and the obstacle can be displayed at the same time.
The obstacle ranging method provided in Embodiment 1 determines the obstacle area information of the obstacle when it is determined, according to the captured first driving image, that there is an obstacle in the driving direction of the vehicle; determines the light beam emission mode by combining the obstacle area information with the current vehicle speed; controls the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode; adjusts the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot; and determines the interval distance between the vehicle and the obstacle according to the second driving image. With this method, light beams are emitted in a light beam emission mode and spot focusing mode matched to the obstacle area information and the vehicle speed, and the image containing the light beam spot is analyzed to determine the interval distance. Intelligent tracking and accurate ranging of obstacles are achieved on the basis of visual perception, and the hardware architecture cost is reduced compared with radar ranging methods.
Embodiment 2
FIG. 2 is a flowchart of an obstacle ranging method provided in Embodiment 2 of the present application. This embodiment is an optimization based on the above embodiment. As shown in FIG. 2, the method includes:
S201: When it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, obstacle area information of the obstacle is determined.
For ease of understanding, the obstacle area information of the first driving image is illustrated by an example. FIG. 3 is an example diagram of a first driving image in an obstacle ranging method provided in Embodiment 2 of the present application, where a denotes the vehicle, b denotes the obstacle, and c denotes the reference line. The obstacle area information of the obstacle can be determined from the first driving image.
As shown in FIG. 3, the upper layer is the first driving image captured by the camera. The image includes an irregularly shaped obstacle b whose outline has been marked, and a reference line c obtained by connecting the two points of the irregular obstacle nearest to vehicle a, thereby obtaining the obstacle area information.
S202: The current vehicle speed of the vehicle is obtained.
For example, when there is an obstacle in the driving direction of the vehicle, a speed acquisition instruction can be sent to the corresponding sensor, and the current vehicle speed sent by the sensor is received.
S203: It is judged whether the current vehicle speed is greater than a preset speed threshold.
In this embodiment, the speed threshold can be understood as a threshold used to judge whether the vehicle speed is too fast.
For example, a speed threshold can be set in advance. When the current vehicle speed sent by the sensor is received, the current vehicle speed can be compared with the preset speed threshold to judge whether the current vehicle speed is greater than the preset speed threshold.
S204: If the current vehicle speed is greater than the preset speed threshold, emitting the beam at an angle of 90 degrees to the vehicle is taken as the light beam emission mode.
For example, when the current vehicle speed is greater than the preset speed threshold, the vehicle speed can be considered fast and the obstacle may be a vehicle, so the beam can be emitted at an angle of 90 degrees to the own vehicle, i.e. parallel to the ground on which the vehicle is located, so that the beam can shine onto the surface of the obstacle, for example onto the rear of a vehicle.
S205: Otherwise, the reference line corresponding to the obstacle is determined according to the obstacle area information, and placing the spot landing point of the beam on the reference line is taken as the light beam emission mode.
In this embodiment, the spot landing point can be understood as the landing point on the ground when the beam shines perpendicular to the front of the vehicle. The reference line can be understood as a straight line used to identify the outline of the projection of the obstacle on the ground.
For example, if the obstacle area information shows that the region of the obstacle nearest to the vehicle is identified as a single straight line (i.e. the obstacle has a relatively regular shape), this straight line is taken as the reference line. If the obstacle area information shows that the region of the obstacle nearest to the vehicle is identified as relatively scattered points (i.e. the obstacle has an irregular shape), a tangent line of the obstacle area can be identified and taken as the reference line; for example, the two scattered points nearest to the vehicle are found and connected to obtain the tangent line. The left and right radio frequency mechanisms each have one spot landing point perpendicular to the front of the vehicle, and the angles of the left and right radio frequency mechanisms can be adjusted so that these two spot landing points lie on this straight line.
S206: The radio frequency rotating component controls the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode in which the spot landing point of the beam lies on the reference line.
Here, the radio frequency rotating component is connected to the left and right radio frequency mechanisms.
For example, after the light beam emission mode is determined according to the current vehicle speed, the radio frequency rotating component can be controlled to rotate according to the light beam emission mode: a corresponding rotation angle instruction can be generated from the light beam emission mode and transmitted to the radio frequency rotating component, so that the radio frequency rotating component rotates by the corresponding angle and the left and right radio frequency mechanisms are rotated to the corresponding angle to emit the light beams.
S207: The radio frequency rotating component controls the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode in which the beam is at an angle of 90 degrees to the vehicle.
For example, after the light beam emission mode is determined according to the current vehicle speed, the radio frequency rotating component can be controlled to rotate according to the light beam emission mode: an angle instruction corresponding to a 90-degree rotation can be generated and transmitted to the radio frequency rotating component, so that the radio frequency rotating component rotates by the corresponding angle and the left and right radio frequency mechanisms are rotated to 90 degrees to emit the light beams.
S208: When the light beam emission mode is that the beam is at an angle of 90 degrees to the vehicle, the spot focusing mode is spot focusing according to the obstacle area information; the spot focusing parameters of the light beam spot are adjusted to obtain a second driving image including the adjusted light beam spot.
For example, when the light beam emission mode is to emit the beam at an angle of 90 degrees to the own vehicle, the spot focusing parameters can be adjusted according to the clarity of the light beam spot in the obstacle area information, so that the light beam spot remains clearly displayed in the driving image. For instance, preset gears can be used, each gear corresponding to different spot focusing parameters; a matching gear can be found from the current clarity of the light beam spot or from a roughly calculated obstacle distance, and the light spot focusing chip can be adjusted to the corresponding gear, i.e. the spot focusing parameters of the light beam spot are adjusted to target focusing parameters. After the adjustment is completed, the second driving image including the adjusted light beam spot captured by the camera can be obtained.
For example, the step of adjusting the spot focusing parameters of the light beam spot to obtain the second driving image including the adjusted light beam spot may include:
a1. The spot clarity of the light beam spot in the obstacle area information is extracted.
In this embodiment, the spot clarity can be understood as the degree of clarity with which the light beam spot is displayed in the driving image.
For example, during camera acquisition, the obstacle area information of each frame can be obtained in real time. The obstacle area information can contain the light beam spot, and the spot clarity of the light beam spot can be identified according to a preset method.
b1. When the spot clarity does not meet a preset first spot clarity standard, the spot focusing parameters are adjusted through the light spot focusing chip.
In this embodiment, the first spot clarity standard can be understood as a standard indicating the clarity that the spot needs to reach.
For example, the spot clarity can be compared with the preset first spot clarity standard. When the spot clarity does not meet the preset first spot clarity standard, preset gears can be used, each gear corresponding to different spot focusing parameters; the gear can be determined by calculating the difference between the current spot clarity and the first spot clarity standard, or from a roughly calculated distance to the obstacle, and the light spot focusing chip is adjusted to the corresponding gear, i.e. the spot focusing parameters of the light beam spot are adjusted so that the light beam spot can be clearly displayed in the driving image.
c1. A captured second driving image including the adjusted light beam spot is obtained.
For example, after the adjustment is completed, the second driving image including the adjusted light beam spot captured by the camera can be obtained.
S209: When the light beam emission mode is that the spot landing point of the beam lies on the reference line, the spot focusing mode is spot focusing with a preset focusing parameter threshold; the spot focusing parameters of the light beam spot are adjusted to obtain a second driving image including the adjusted light beam spot.
In this embodiment, the focusing parameter threshold can be understood as a parameter value that allows the light beam spot to be displayed.
For example, when the light beam emission mode is that the spot landing point of the beam lies on the reference line, the spot focusing mode can adjust the spot focusing parameters through the light spot focusing chip in the form of a preset threshold. In this case there is no requirement on clarity in the driving image; it is only necessary to ensure that there is a light beam spot in the driving image. After the adjustment is completed, the second driving image including the adjusted light beam spot captured by the camera can be obtained.
a2. The spot clarity of the light beam spot in the obstacle area information is extracted.
b2. The spot focusing parameters are adjusted through the light spot focusing chip based on the focusing parameter threshold.
For example, the preset focusing parameter threshold can be obtained, and the spot focusing parameters are adjusted to the focusing parameter threshold through the light spot focusing chip.
c2. When the spot clarity does not meet a preset second spot clarity standard, the spot focusing parameters are adjusted a second time through the light spot focusing chip.
In this embodiment, the second spot clarity standard can be understood as a standard indicating the clarity that the spot needs to reach, where the second spot clarity standard differs from the first spot clarity standard.
For example, when the light beam emission mode is that the spot landing point of the beam lies on the reference line, there is no requirement on clarity in the driving image and it is only necessary to ensure that there is a light beam spot in the driving image, so the second spot clarity standard can be lower than the first spot clarity standard. When the spot clarity does not meet the preset second spot clarity standard, for example when there is no light beam spot in the driving image, i.e. the obstacle may be too far away, the parameters can be adjusted again through the light spot focusing chip. For instance, preset gears can be used, each gear corresponding to different spot focusing parameters; the gear can be determined by calculating the difference between the current spot clarity and the second spot clarity standard, or from a roughly calculated distance to the obstacle, and the light spot focusing chip is adjusted to the corresponding gear, i.e. the spot focusing parameters of the light beam spot are adjusted so that the light beam spot can be displayed in the driving image.
d2. A captured second driving image including the adjusted light beam spot is obtained.
For example, after the adjustment is completed, the second driving image including the adjusted light beam spot captured by the camera can be obtained.
S210: The area value of the light beam spot is determined according to the second driving image.
In this embodiment, the area value can be understood as the area with which the light beam spot is displayed in the image.
For example, a spot area detection unit can be set in advance to identify the left and right light beam spots in the second driving image and detect the areas of the two light beam spots, obtaining the area values of the left and right light beam spots.
S211: The interval distance between the vehicle and the obstacle is determined according to the area value.
For example, a relation coefficient between the interval distance and the corresponding area value can be set in advance according to the spot focusing parameters, and a correspondence table between spot focusing parameters and relation coefficients can be established. The relation coefficient can be determined from the current spot focusing parameters, and the interval distance between the vehicle and the obstacle is determined from the product of the relation coefficient and the area value.
For example, the interval distance can be calculated by the following formula:
L = kX
where L is the interval distance between the vehicle and the obstacle, k is the relation coefficient, and X is the area value of the light beam spot.
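As a rough illustration of the first interval distance formula L = kX, the sketch below looks up the relation coefficient k from the spot focusing parameter, standing in for the pre-calibrated correspondence table described above. The table values are assumptions, not calibration data from the application.

```python
# Hedged sketch of L = k * X. The mapping from spot focusing parameter to
# relation coefficient k is a hypothetical stand-in for the calibrated
# correspondence table; the numbers are illustrative only.

K_BY_FOCUS_PARAMETER = {0.3: 0.8, 0.6: 1.0, 0.9: 1.3}  # assumed calibration


def interval_distance_from_area(spot_area, focus_parameter):
    """First interval distance formula: L = k * X."""
    k = K_BY_FOCUS_PARAMETER[focus_parameter]  # relation coefficient lookup
    return k * spot_area


print(interval_distance_from_area(5.0, 0.6))  # -> 5.0
```

In the example of FIG. 4a, the same k would be applied to the spot areas X1 and X2 at the two moments to obtain L1 and L2.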
S212: The installation height information of the left and right radio frequency mechanisms is obtained.
In this embodiment, the installation height information can be understood as the distance from the left and right radio frequency mechanisms to the ground.
For example, when the left and right radio frequency mechanisms are installed, their installation height information can be determined by measurement and stored in the memory; the installation height information of the left and right radio frequency mechanisms can then be obtained by looking it up in the memory.
S213: The angle information between the left and right radio frequency mechanisms and the vehicle is determined according to the second driving image.
For example, a rotation angle detection unit can be set in advance to detect the rotation angles of the left and right radio frequency mechanisms respectively, determining the angle information between the left and right radio frequency mechanisms and the vehicle.
S214: The interval distance between the vehicle and the obstacle is determined according to the angle information and the installation height information.
For example, the angle information and the installation height information can be substituted into a trigonometric formula to determine the interval distance between the vehicle and the obstacle.
For example, the interval distance can be calculated by the following formula:
L = H tan α
where L is the interval distance between the vehicle and the obstacle, H is the installation height value, and α is the angle between the radio frequency mechanism and the vehicle.
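As a rough illustration of the second interval distance formula L = H tan α, the sketch below computes the interval distance from the installation height and the detected angle. The numeric values are illustrative assumptions.

```python
import math

# Hedged sketch of L = H * tan(alpha), where H is the installation height
# of a radio frequency mechanism and alpha its angle with the vehicle.

def interval_distance_from_angle(install_height_m, angle_deg):
    """Second interval distance formula: L = H * tan(alpha)."""
    return install_height_m * math.tan(math.radians(angle_deg))


# A mechanism installed 0.6 m above the ground and rotated to 45 degrees
# places its spot 0.6 m ahead of the mechanism.
print(round(interval_distance_from_angle(0.6, 45.0), 3))  # -> 0.6
```

The same computation would be run once per side and per moment, as in the FIG. 4b example below.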
The obstacle ranging method provided in Embodiment 2 obtains obstacle area information by identifying the area where the obstacle is located, determines the light beam emission mode from the comparison between the current vehicle speed and the speed threshold combined with the obstacle area information, and, through the radio frequency rotating component, controls the left and right radio frequency mechanisms to rotate according to the rotation angle of the light beam emission mode, so that the beam landing point always falls on the reference line corresponding to the obstacle area. The focusing parameters are adjusted through the light spot focusing chip so that the light beam spot can be clearly displayed in the driving image, realizing real-time tracking of the obstacle. From the driving image containing the light beam spot captured by the camera, the area value or the angle information of the light beam spot can be determined, and the area value or angle information is substituted into the corresponding interval distance calculation formula to determine the interval distance between the vehicle and the obstacle. On the basis of the hardware architecture of the related visual perception technology, the hardware architecture for radar ranging and fusion is removed, which saves resources and costs; only the hardware architecture of the light spot focusing chip and the radio frequency rotating component needs to be added to realize accurate ranging of the obstacle, providing a complete purely visual accurate ranging solution at a controllable cost.
For ease of understanding, schematic illustrations are given for the method of obtaining the interval distance from the light beam spot area when the current vehicle speed is greater than the preset speed threshold, and for the method of obtaining the interval distance from the angle when the current vehicle speed is less than or equal to the preset speed threshold.
FIG. 4a is an example diagram of determining the interval distance according to the light beam spot area in an obstacle ranging method provided in Embodiment 2 of the present application.
As shown in FIG. 4a, for ease of description a single-side radio frequency mechanism is taken as an example to determine the interval distance between the same obstacle and the own vehicle at different moments. E3 is the radio frequency mechanism on one side, B3 is the obstacle, X1 is the light beam spot area of obstacle B3 at the previous moment, X2 is the light beam spot area of obstacle B3 at the current moment, L1 is the interval distance between obstacle B3 and the radio frequency mechanism at the previous moment, and L2 is the current interval distance between obstacle B3 and the radio frequency mechanism. When the current vehicle speed is greater than the preset speed threshold, the radio frequency rotating component can be controlled to adjust the angle of radio frequency mechanism E3 to 90 degrees, i.e. to emit the beam parallel to the ground. When the beam shines on obstacle B3, a light beam spot is formed. The spot focusing parameters are adjusted through the light spot focusing chip and the area of the light beam spot is detected, obtaining X1 and X2. The corresponding relation coefficient k can be determined from the spot focusing parameters, and substituting k, X1 and X2 into the formula L = kX yields the interval distances L1 and L2.
FIG. 4b is an example diagram of determining the interval distance according to the angle in an obstacle ranging method provided in Embodiment 2 of the present application.
As shown in FIG. 4b, for ease of description, the interval distance between the same obstacle and the own vehicle at different moments is determined. A2 denotes the own vehicle, E1 the left radio frequency mechanism, E2 the right radio frequency mechanism, F1 the right beam landing point at the previous moment, F2 the left beam landing point at the previous moment, B2 the obstacle, C1 the reference line at the previous moment, C2 the current reference line, F3 the current right beam landing point, and F4 the current left beam landing point. The reference line can be determined from the obstacle area information, and E1 and E2 are controlled to rotate through the radio frequency rotating component so that the beam landing points always fall on the reference line. The line from E1 to F2 can then be regarded as the beam emitted by the left radio frequency mechanism at the previous moment. The rotation angle of the left radio frequency mechanism can be detected, i.e. the angle α1 between the beam and the plane of E1 (taking the dashed line below E1 in the figure as reference) is determined. Substituting the installation height H1 of E1 (i.e. the distance from E1 to the ground) and the angle α1 into the formula L = H tan α yields the distance between the left radio frequency mechanism and the obstacle at the previous moment. Similarly, the distance between the right radio frequency mechanism and the obstacle at the previous moment and the distances between the left and right radio frequency mechanisms and the obstacle at the current moment are calculated in the same way, and are not repeated here.
Embodiment 3
FIG. 5 is a structural schematic diagram of an obstacle ranging device provided in Embodiment 3 of the present application. As shown in FIG. 5, the device includes: an information determination module 41, a mode determination module 42, a light beam emission module 43, an image acquisition module 44 and a distance determination module 45, wherein:
the information determination module 41 is configured to determine obstacle area information of the obstacle when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle;
the mode determination module 42 is configured to determine a light beam emission mode according to the current vehicle speed and the obstacle area information;
the light beam emission module 43 is configured to control the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
the image acquisition module 44 is configured to adjust the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot;
the distance determination module 45 is configured to determine the interval distance between the vehicle and the obstacle according to the second driving image;
wherein the light beam spot is the point where the light beam falls in the emission direction.
The obstacle ranging device provided in Embodiment 3 emits light beams in a light beam emission mode and spot focusing mode matched to the obstacle area information and the vehicle speed, and analyzes the image containing the light beam spot to determine the interval distance. Intelligent tracking and accurate ranging of obstacles are achieved on the basis of visual perception, and the hardware architecture cost is reduced compared with radar ranging methods.
Optionally, the mode determination module 42 is configured to:
obtain the current vehicle speed of the vehicle;
if the current vehicle speed is greater than a preset speed threshold, take emitting the beam at an angle of 90 degrees to the vehicle as the light beam emission mode;
otherwise, determine the reference line corresponding to the obstacle according to the obstacle area information, and take placing the spot landing point of the beam on the reference line as the light beam emission mode.
Optionally, the light beam emission module 43 is configured to:
control, through the radio frequency rotating component, the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
wherein the radio frequency rotating component is connected to the left and right radio frequency mechanisms.
Optionally, when the spot focusing mode is spot focusing according to the obstacle area information, the image acquisition module 44 is configured to:
extract the spot clarity of the light beam spot in the obstacle area information;
when the spot clarity does not meet a preset first spot clarity standard, adjust the spot focusing parameters through the light spot focusing chip;
obtain a captured second driving image including the adjusted light beam spot.
Optionally, when the spot focusing mode is spot focusing with a preset focusing parameter threshold, the image acquisition module 44 is configured to:
extract the spot clarity of the light beam spot in the obstacle area information;
adjust the spot focusing parameters through the light spot focusing chip based on the focusing parameter threshold;
when the spot clarity does not meet a preset second spot clarity standard, adjust the spot focusing parameters a second time through the light spot focusing chip;
obtain a captured second driving image including the adjusted light beam spot.
Optionally, the distance determination module 45 is configured to:
determine the area value of the light beam spot according to the second driving image;
determine the interval distance between the vehicle and the obstacle according to the area value.
Optionally, the distance determination module 45 may also be configured to:
obtain the installation height information of the left and right radio frequency mechanisms;
determine the angle information between the left and right radio frequency mechanisms and the vehicle according to the second driving image;
determine the interval distance between the vehicle and the obstacle according to the angle information and the installation height information.
The obstacle ranging device provided in the embodiments of the present application can execute the obstacle ranging method provided in any embodiment of the present application, and has the functional modules and effects corresponding to the executed method.
Embodiment 4
FIG. 6 is a structural schematic diagram of a vehicle provided in Embodiment 4 of the present application. As shown in FIG. 6, the vehicle includes a controller 51, a memory 52, an input device 53, an output device 54, a radio frequency rotating component 55 and a light spot focusing chip 56. The number of controllers 51 in the vehicle can be at least one, and FIG. 6 takes one controller 51 as an example. The controller 51, memory 52, input device 53, output device 54, radio frequency rotating component 55 and light spot focusing chip 56 in the vehicle can be connected via a bus or other means; FIG. 6 takes the connection via a bus as an example.
The memory 52, as a computer-readable storage medium, can be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the obstacle ranging method in the embodiments of the present application (for example, the information determination module 41, mode determination module 42, light beam emission module 43, image acquisition module 44 and distance determination module 45 in the obstacle ranging device). The controller 51 executes the various functional applications and data processing of the vehicle by running the software programs, instructions and modules stored in the memory 52, i.e. realizes the above obstacle ranging method.
The memory 52 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required for a function, and the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 52 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 52 may include memories remotely arranged relative to the controller 51, and these remote memories may be connected to the vehicle via a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input device 53 can be configured to receive input digital or character information and to generate key signal inputs related to user settings and function control of the cloud platform. The output device 54 may include a display device such as a display screen.
The radio frequency rotating component 55 is connected to the left and right radio frequency mechanisms and can be configured to control the light beam emission mode of the left and right radio frequency mechanisms.
The light spot focusing chip 56 can be configured to adjust the spot focusing parameters of the light beam spot.
Embodiment 5
Embodiment 5 of the present application further provides a storage medium containing computer-executable instructions, the computer-executable instructions being configured, when executed by a computer processor, to execute an obstacle ranging method, the method including:
when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, determining obstacle area information of the obstacle;
determining a light beam emission mode according to the current vehicle speed and the obstacle area information;
controlling the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
adjusting the spot focusing parameters of the light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image including the adjusted light beam spot;
determining the interval distance between the vehicle and the obstacle according to the second driving image;
wherein the light beam spot is the point where the light beam falls in the emission direction.
Of course, the computer-executable instructions of the storage medium containing computer-executable instructions provided in the embodiments of the present application are not limited to the method operations described above, and can also execute related operations in the obstacle ranging method provided in any embodiment of the present application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present application can be implemented with the help of software and necessary general-purpose hardware, and of course it can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, essentially or the part contributing to the related art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disk, and includes a number of instructions for causing a computer device (which can be a personal computer, server, network device, etc.) to execute the methods described in the embodiments of the present application.
It is worth noting that in the above embodiments of the obstacle ranging device, the units and modules included are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application.

Claims (11)

  1. An obstacle ranging method, comprising:
    when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle, determining obstacle area information of the obstacle;
    determining a light beam emission mode according to the current vehicle speed and the obstacle area information;
    controlling radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
    adjusting spot focusing parameters of a light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image comprising the adjusted light beam spot;
    determining an interval distance between the vehicle and the obstacle according to the second driving image;
    wherein the light beam spot is the point where the light beam falls in the emission direction.
  2. The method according to claim 1, wherein determining the light beam emission mode according to the current vehicle speed and the obstacle area information comprises:
    obtaining the current vehicle speed of the vehicle;
    if the current vehicle speed is greater than a preset speed threshold, taking emitting the light beam at an angle of 90 degrees to the vehicle as the light beam emission mode;
    otherwise, determining a reference line corresponding to the obstacle according to the obstacle area information, and taking placing the spot landing point of the light beam on the reference line as the light beam emission mode.
  3. The method according to claim 1, wherein controlling the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode comprises:
    controlling, through a radio frequency rotating component, the radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
    wherein the radio frequency rotating component is connected to the left and right radio frequency mechanisms.
  4. The method according to claim 2, wherein when the light beam emission mode is that the light beam is at an angle of 90 degrees to the vehicle, the spot focusing mode is spot focusing according to the obstacle area information;
    when the light beam emission mode is that the spot landing point of the light beam lies on the reference line, the spot focusing mode is spot focusing with a preset focusing parameter threshold.
  5. The method according to claim 4, wherein when the spot focusing mode is spot focusing according to the obstacle area information, adjusting the spot focusing parameters of the light beam spot to obtain the second driving image comprising the adjusted light beam spot comprises:
    extracting spot clarity of the light beam spot in the obstacle area information;
    when the spot clarity does not meet a preset first spot clarity standard, adjusting the spot focusing parameters through a light spot focusing chip;
    obtaining a captured second driving image comprising the adjusted light beam spot.
  6. The method according to claim 4, wherein when the spot focusing mode is spot focusing with the preset focusing parameter threshold, adjusting the spot focusing parameters of the light beam spot to obtain the second driving image comprising the adjusted light beam spot comprises:
    extracting spot clarity of the light beam spot in the obstacle area information;
    adjusting the spot focusing parameters through a light spot focusing chip based on the focusing parameter threshold;
    when the spot clarity does not meet a preset second spot clarity standard, adjusting the spot focusing parameters a second time through the light spot focusing chip;
    obtaining a captured second driving image comprising the adjusted light beam spot.
  7. The method according to claim 5, wherein determining the interval distance between the vehicle and the obstacle according to the second driving image comprises:
    determining an area value of the light beam spot according to the second driving image;
    determining the interval distance between the vehicle and the obstacle according to the area value.
  8. The method according to claim 6, wherein determining the interval distance between the vehicle and the obstacle according to the second driving image comprises:
    obtaining installation height information of the left and right radio frequency mechanisms;
    determining angle information between the left and right radio frequency mechanisms and the vehicle according to the second driving image;
    determining the interval distance between the vehicle and the obstacle according to the angle information and the installation height information.
  9. An obstacle ranging device, comprising:
    an information determination module, configured to determine obstacle area information of the obstacle when it is determined, according to a captured first driving image, that there is an obstacle in the driving direction of the vehicle;
    a mode determination module, configured to determine a light beam emission mode according to the current vehicle speed and the obstacle area information;
    a light beam emission module, configured to control radio frequency mechanisms on the left and right sides of the vehicle to emit light beams in the light beam emission mode;
    an image acquisition module, configured to adjust spot focusing parameters of a light beam spot in a spot focusing mode that matches the light beam emission mode, to obtain a second driving image comprising the adjusted light beam spot;
    a distance determination module, configured to determine an interval distance between the vehicle and the obstacle according to the second driving image;
    wherein the light beam spot is the point where the light beam falls in the emission direction.
  10. A vehicle, comprising:
    at least one controller;
    a radio frequency rotating component;
    a light spot focusing chip;
    and
    a memory communicatively connected to the at least one controller;
    wherein the radio frequency rotating component is connected to left and right radio frequency mechanisms and is configured to control the light beam emission mode of the left and right radio frequency mechanisms;
    the light spot focusing chip is configured to adjust spot focusing parameters of a light beam spot;
    the memory stores a computer program executable by the at least one controller, and the computer program is executed by the at least one controller, so that the at least one controller can execute the obstacle ranging method according to any one of claims 1-8.
  11. A computer-readable storage medium, storing computer instructions, the computer instructions being configured to cause a processor, when executing them, to implement the obstacle ranging method according to any one of claims 1-8.
PCT/CN2023/072290 2022-11-25 2023-01-16 障碍物测距方法、装置、车辆及介质 WO2024108754A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020237045435A KR20240079191A (ko) 2022-11-25 2023-01-16 장애물 거리 측정 방법, 장치, 차량 및 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211496847.4A CN115790420A (zh) 2022-11-25 2022-11-25 障碍物测距方法、装置、车辆及介质
CN202211496847.4 2022-11-25

Publications (1)

Publication Number Publication Date
WO2024108754A1 true WO2024108754A1 (zh) 2024-05-30

Family

ID=85441927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072290 WO2024108754A1 (zh) 2022-11-25 2023-01-16 障碍物测距方法、装置、车辆及介质

Country Status (3)

Country Link
KR (1) KR20240079191A (zh)
CN (1) CN115790420A (zh)
WO (1) WO2024108754A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120076112A (ko) * 2010-12-29 2012-07-09 계명대학교 산학협력단 레이저 스캔 촛점을 이용한 거리 측정장치.
CN106291520A (zh) * 2016-07-14 2017-01-04 江苏大学 一种基于编码激光与双目视觉的辅助驾驶系统及方法
CN108323190A (zh) * 2017-12-15 2018-07-24 深圳市道通智能航空技术有限公司 一种避障方法、装置和无人机
CN110641366A (zh) * 2019-10-12 2020-01-03 爱驰汽车有限公司 行车时的障碍物追踪方法、系统、电子设备和存储介质
CN111103593A (zh) * 2019-12-31 2020-05-05 深圳市欢创科技有限公司 测距模组、机器人、测距方法及非易失性可读存储介质
WO2021251028A1 (ja) * 2020-06-10 2021-12-16 株式会社日立製作所 障害物検知システム、障害物検知方法および自己位置推定システム
CN113869268A (zh) * 2021-10-12 2021-12-31 广州小鹏自动驾驶科技有限公司 障碍物测距方法、装置、电子设备及可读介质

Also Published As

Publication number Publication date
CN115790420A (zh) 2023-03-14
KR20240079191A (ko) 2024-06-04

Similar Documents

Publication Publication Date Title
EP3566903A1 (en) Method and apparatus for vehicle position detection
JP3123303B2 (ja) 車両用画像処理装置
JP6075331B2 (ja) 車両用照明装置
US20090254247A1 (en) Undazzled-area map product, and system for determining whether to dazzle person using the same
CN112793509B (zh) 一种盲区监测方法、设备、介质
US20200290606A1 (en) Autonomous driving assistance device
RU2720501C1 (ru) Способ определения помех, способ помощи при парковке, способ помощи при отъезде и устройство определения помех
US20200377087A1 (en) Lane keep control of autonomous vehicle
CN109840454B (zh) 目标物定位方法、装置、存储介质以及设备
JP2015195018A (ja) 画像処理装置、画像処理方法、運転支援システム、プログラム
US20200283024A1 (en) Vehicle, information processing apparatus, control methods thereof, and system
JP2020093766A (ja) 車両の制御装置、制御システム、及び制御プログラム
JP3857698B2 (ja) 走行環境認識装置
CN113895429B (zh) 一种自动泊车方法、系统、终端及存储介质
KR20190078944A (ko) 철도차량용 증강현실 헤드업 디스플레이 시스템
WO2020191619A1 (zh) 恶劣天气下的驾驶控制方法、装置、车辆及驾驶控制系统
WO2024108754A1 (zh) 障碍物测距方法、装置、车辆及介质
JP2020066246A (ja) 路面状態推定装置
CN113432615A (zh) 基于多传感器融合可驾驶区域的检测方法、系统和车辆
EP4397942A1 (en) Obstacle distance measurement method and apparatus, and vehicle and medium
JP7247974B2 (ja) 車両
US11702140B2 (en) Vehicle front optical object detection via photoelectric effect of metallic striping
US10204276B2 (en) Imaging device, method and recording medium for capturing a three-dimensional field of view
JPH05113482A (ja) 車載用追突防止装置
JP2022014269A (ja) 車両及び他車両の認識方法

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2023825347

Country of ref document: EP

Effective date: 20231228