WO2022160232A1 - Detection method and apparatus, and vehicle - Google Patents
Detection method and apparatus, and vehicle (一种检测方法、装置和车辆)
- Publication number
- WO2022160232A1 (PCT/CN2021/074328)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel block
- image
- vehicle
- ipm
- boundary
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/247—Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
Definitions
- The present invention relates to the technical field of intelligent driving, and in particular to a detection method, an apparatus, and a vehicle.
- With the increasing intelligence of vehicles, automatic parking has become one of the important functions of the advanced driving assistance system (ADAS). During automatic parking, vehicle safety is critical: the vehicle must not scrape static obstacles such as already-parked vehicles or walls, and must also be able to avoid dynamic obstacles that appear suddenly, such as pedestrians and moving vehicles.
- In an existing automatic parking implementation, an ultrasonic sensor transmits ultrasonic waves outward and receives the waves reflected back by obstacles, and the distance between an obstacle and the sensor is then calculated from the time difference between transmitting and receiving the ultrasonic signal and the propagation speed of ultrasound.
- In this way of realizing automatic parking by detecting the distance of surrounding obstacles with ultrasonic sensors, the emitted ultrasonic waves scatter over a large angle and have poor directivity, so the echo signal from a distant target is relatively weak, which degrades the accuracy of the distance measurement. In addition, ultrasound propagates relatively slowly: when the vehicle travels at high speed, the measured transmit-receive time difference becomes too small because of the vehicle's movement, resulting in a large error in the measured distance.
- If a lidar sensor is used to replace the ultrasonic sensor, the weak echo signal and the overly small transmit-receive time difference caused by the slow propagation of ultrasound are resolved; however, the scanning area of the laser beams emitted by a lidar sensor is small at close range and large at long range, so blind spots easily arise in the short range close to the vehicle body, posing a challenge to vehicle safety.
- To solve the above problems, embodiments of the present application provide a detection method, an apparatus, and a vehicle.
- In a first aspect, the present application provides a detection method, comprising: acquiring images collected by at least two cameras; converting the images into inverse perspective mapping (IPM) images, wherein at least one object is located in the IPM images; acquiring at least one pixel block in the IPM image, the pixel block being the set of pixels corresponding to the object in the IPM image; and determining a first boundary of the pixel block, the first boundary being the boundary where the object intersects the ground.
- In this implementation, multiple cameras collect information about the vehicle's surroundings, and the distance between the vehicle and obstacles in the surroundings is then calculated by processing the images; compared with other sensors such as ultrasonic radar and lidar, the measured distance values are more accurate.
- In one implementation, the image is a fisheye camera image, and converting the image into an inverse perspective mapping IPM image comprises: performing distortion correction on the image to obtain an undistorted image; and converting the undistorted image into the IPM image through an inverse perspective mapping algorithm.
- In this implementation, the fisheye camera image is first converted into an undistorted image so that the subsequent IPM algorithm can process it; the undistorted image is then converted into an IPM image, and the resulting IPM image, compared with using the fisheye camera image directly, is more convenient for the subsequent calculation of distances to other obstacles.
- In one implementation, the method further includes: stitching the IPM images corresponding to the images collected by the at least two cameras at the same moment to obtain a stitched IPM image.
- In one implementation, before determining the first boundary of the pixel block, the method further comprises: eliminating the object projection in the pixel block, the object projection being the area formed because the object has height.
- In this implementation, because the stitched IPM image is displayed as a top-down view extending from the center of the image toward the surroundings, the area behind an object with height is occluded by that object and cannot be shown in the image; as a result, some boundaries of the pixel blocks cut out by semantic segmentation are not boundaries that touch the ground, so such boundaries need to be eliminated, making the first boundary easier to identify subsequently.
- In one implementation, eliminating the object projection in the pixel block includes: when a first pixel block is included in at least two frames of the stitched IPM image, superimposing the first pixel block across the at least two frames, the object corresponding to the first pixel block being a static object.
- In one implementation, the method further includes: eliminating pixels whose luminance value is less than a set threshold in the superimposed first pixel block.
- In this implementation, if only a few frames are superimposed, the superimposed pixel block still includes the projection area; based on the principle that a region superimposed repeatedly at the same position accumulates a deeper brightness than regions superimposed at different positions, the regions with lower luminance values are filtered out, so that the resulting region is closer to the real obstacle area.
- In one implementation, the method further includes: when a second pixel block is included in the at least two frames of the stitched IPM image, retaining the second pixel block in the frame with the latest acquisition time, the object corresponding to the second pixel block being a moving object.
- In one implementation, determining the first boundary of the pixel block includes: acquiring the positions of the optical centers of the at least two cameras in the stitched IPM image according to the extrinsic parameters of the at least two cameras; and scanning, from the optical centers of the at least two cameras, the pixels located at the boundary of the pixel block, the positions of the pixels located at the boundary being the first boundary.
- In one implementation, the method further comprises: calculating the distance between the current vehicle and the first boundary of the pixel block.
- In one implementation, the method further includes: outputting warning indication information when it is detected that the distance between the current vehicle and the first boundary of the pixel block is less than a set threshold.
- In one implementation, the method further includes: determining the distance between a first camera and the ground and the chassis height of another vehicle, the first camera being the one of the at least two cameras that photographs the other vehicle, and the at least one object including the other vehicle; and correcting the calculated distance between the vehicle and the other vehicle according to the distance between the first camera and the ground and the chassis height of the other vehicle, to obtain a first distance value.
- In this implementation, if another object is not in direct contact with the ground but is suspended, the distance calculated in the above manner may be inaccurate; for a vehicle, the chassis height can be known, and, combined with the known camera height and the previously calculated distance value, a more accurate distance value can be further calculated.
- In a second aspect, an embodiment of the present application provides a detection apparatus, comprising: a transceiver unit configured to acquire images collected by at least two cameras; and a processing unit configured to convert the images into inverse perspective mapping (IPM) images, wherein at least one object is located in the IPM images; to acquire at least one pixel block in the IPM image, the pixel block being the set of pixels corresponding to the object in the IPM image; and to determine a first boundary of the pixel block, the first boundary being the boundary where the object intersects the ground.
- In one implementation, the image is a fisheye camera image, and the processing unit is specifically configured to perform distortion correction on the image to obtain an undistorted image, and to convert the undistorted image into the IPM image through an inverse perspective mapping algorithm.
- In one implementation, the processing unit is further configured to stitch the IPM images corresponding to the images collected by the at least two cameras at the same moment to obtain a stitched IPM image.
- In one implementation, the processing unit is further configured to eliminate the object projection in the pixel block, the object projection being the area formed because the object has height.
- In one implementation, the processing unit is specifically configured to superimpose the first pixel block across at least two frames when a first pixel block is included in the at least two frames of the stitched IPM image, the object corresponding to the first pixel block being a static object.
- In one implementation, the processing unit is further configured to eliminate pixels whose luminance value is less than a set threshold in the superimposed first pixel block.
- In one implementation, the processing unit is further configured to, when a second pixel block is included in the at least two frames of the stitched IPM image, retain the second pixel block in the frame with the latest acquisition time, the object corresponding to the second pixel block being a moving object.
- In one implementation, the processing unit is specifically configured to acquire the positions of the optical centers of the at least two cameras in the stitched IPM image according to the extrinsic parameters of the at least two cameras, and to scan, from the optical centers, the pixels located at the boundary of the pixel block, the positions of the pixels located at the boundary being the first boundary.
- In one implementation, the processing unit is further configured to calculate the distance between the current vehicle and the first boundary of the pixel block.
- In one implementation, the processing unit is further configured to output warning indication information when it is detected that the distance between the current vehicle and the first boundary of the pixel block is less than a set threshold.
- In one implementation, the processing unit is further configured to determine the distance between a first camera and the ground and the chassis height of another vehicle, the first camera being the one of the at least two cameras that photographs the other vehicle, and the at least one object including the other vehicle; and to correct the calculated distance between the vehicle and the other vehicle according to the distance between the first camera and the ground and the chassis height of the other vehicle, to obtain a first distance value.
- In a third aspect, an embodiment of the present application provides a vehicle, including: at least two cameras; a memory; and a processor configured to execute the various possible implementations of the first aspect.
- In a fourth aspect, the embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute the various possible implementations of the first aspect.
- In a fifth aspect, an embodiment of the present application provides a computing device, including a memory and a processor, wherein the memory stores executable code, and when the processor executes the executable code, the various possible implementations of the first aspect are realized.
- FIG. 1 is a schematic diagram of a vehicle architecture provided by an embodiment of the present application;
- FIG. 2 is a schematic diagram of the shooting areas of the four fisheye cameras on a vehicle provided by an embodiment of the present application;
- FIG. 3 is a schematic flowchart of a processor implementing the detection method according to an embodiment of the present application;
- FIG. 4 is a schematic diagram of the effect of stitching the images of four fisheye cameras according to an embodiment of the present application;
- FIG. 5(a) is an image captured by a normal (non-fisheye) camera;
- FIG. 5(b) is an image captured by a fisheye camera;
- FIG. 6(a) is an IPM image;
- FIG. 6(b) is a schematic diagram of the effect of each pixel block after semantic segmentation of the IPM image;
- FIG. 7 is a schematic diagram of the effect after multi-frame superposition of the pillars on the two sides of the vehicle according to an embodiment of the present application;
- FIG. 8 is a schematic diagram of the display effect of scanning out the first boundary with rays emitted from an optical center according to an embodiment of the present application;
- FIG. 9 is a schematic diagram of a scene of measuring the distance to another vehicle provided by an embodiment of the present application;
- FIG. 10 is a schematic structural diagram of a detection apparatus provided by an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present application. As shown in FIG. 1, a vehicle 100 includes a sensor 101, a processor 102, a memory 103, and a bus 104; the sensor 101, the processor 102, and the memory 103 may establish a communication connection through the bus 104.
- The sensor 101 may be a fisheye camera, a pinhole camera, or the like. This application takes a fisheye camera as an example to describe the technical solution to be protected.
- A fisheye camera is a lens with a focal length of 16 mm or shorter and a viewing angle close to or equal to 180°. As is well known, the shorter the focal length, the larger the angle of view; therefore, to achieve the maximum photographic angle of view, the front element of such a camera is small in diameter and protrudes toward the front of the lens in a parabolic shape, quite similar to the eye of a fish.
- In this application, only four ultra-wide-angle fisheye cameras are needed as receiving devices, respectively arranged at the front of the vehicle 100, on the rear-view mirrors on both sides, and at the rear of the vehicle 100, as shown in FIG. 2; by turning on the four fisheye cameras to take pictures and stitching the captured images, the surroundings of the vehicle 100 can be detected.
- The processor 102 may be a vehicle-mounted central control unit, a central processing unit (CPU), a cloud server, or the like, and is used to process the images collected by the sensor 101 to obtain the distance between each obstacle in the image and the vehicle.
- The memory 103 may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 103 may also include a combination of the above types of memory.
- The memory 103 stores not only data such as the images collected by the sensor 101 and the distances calculated by the processor 102, but also the various instructions, application programs, and the like corresponding to the execution of the detection method.
- The specific process by which the processor 102 executes the detection method is described below in conjunction with the execution flow shown in FIG. 3:
- Step S301: acquire images collected by at least two cameras. In one implementation, the vehicle 100 uses four fisheye cameras, which are respectively disposed at the front of the vehicle 100, on the left and right rear-view mirrors, and at the rear of the vehicle.
- Specifically, when the vehicle 100 enters a function that requires sensing its surroundings, such as automatic parking, vehicle start-up, or reversing into a garage, the processor 102 controls the fisheye cameras so that each camera photographs its corresponding region around the vehicle 100. Since a fisheye camera has an ultra-wide angle with a viewing angle close to 180°, installing only four fisheye cameras on the vehicle 100 achieves 360° coverage around the vehicle 100 including the area close to the vehicle body; moreover, the visible distance of the images captured by fisheye cameras is farther than that of ultrasonic and lidar sensors, so the subsequent calculation of obstacle distances is more accurate.
- Step S303: convert the images into inverse perspective mapping (IPM) images.
- Because a fisheye camera pursues an ultra-wide viewing angle, light from object points in a large field of view strikes the optical surfaces of the camera's front lens group at large incidence angles; after imaging through the optical system, the focal positions and wavefront parameters in the meridional and sagittal planes may be entirely inconsistent, so the image is deformed (barrel distortion), as shown in FIG. 5(a) and FIG. 5(b). FIG. 5(a) is an image captured by a normal camera, and FIG. 5(b) is an image captured by a fisheye camera (hereinafter referred to as a "fisheye camera image").
- Therefore, after obtaining the images captured by the four fisheye cameras, the processor 102 needs to convert them into IPM images through an inverse perspective mapping (IPM) algorithm. The specific implementation process is as follows:
- 1. Fisheye camera image distortion correction. A fisheye camera image carries radial distortion because of the special lens shape of the camera, so the fisheye camera image is first de-distorted to obtain an undistorted image. Existing methods for correcting fisheye camera image distortion include bilinear interpolation, the improved spherical perspective projection method, and so on; this application imposes no special requirement here, and any existing method can be used.
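For illustration only (the patent prescribes no particular correction method), a minimal de-distortion sketch using OpenCV's fisheye model might look as follows; the intrinsic matrix K and the distortion coefficients D are assumed to come from an offline calibration:

```python
import cv2
import numpy as np

def undistort_fisheye(img, K, D):
    """Remove the radial distortion of a fisheye image.

    K: 3x3 intrinsic matrix, D: 4x1 fisheye distortion coefficients,
    both assumed to be obtained from a prior calibration.
    """
    h, w = img.shape[:2]
    # Identity rectification; keep the original intrinsics for the output view.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```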
- 2. Use the IPM algorithm to convert the undistorted image into an IPM image. The specific conversion process is as follows:
- (1) According to the extrinsic parameters of the fisheye camera, convert the undistorted image from the space coordinate system to the camera coordinate system:

  $$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + t \qquad (1)$$

  where $(X_C, Y_C, Z_C)$ are the coordinates in the camera coordinate system, $(X_W, Y_W, Z_W)$ are the coordinates in the space coordinate system, $R$ is the rotation matrix, and $t$ is the translation.
- (2) According to the intrinsic parameters of the fisheye camera, convert the image from the camera coordinate system to the image coordinate system:

  $$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C} \qquad (2)$$

  where $(x, y)$ are the coordinates in the image coordinate system and $f$ is the focal length of the fisheye camera.
- (3) According to the physical size of a pixel, convert the image from the image coordinate system to the pixel coordinate system, thereby obtaining the IPM image:

  $$\mu = \frac{x}{dx} + \mu_0, \qquad \nu = \frac{y}{dy} + \nu_0 \qquad (3)$$

  where $(\mu, \nu)$ are the coordinates in the pixel coordinate system, $(\mu_0, \nu_0)$ is the principal point in pixels, and $dx$, $dy$ are the physical sizes of each pixel along the X-axis and Y-axis directions.
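Purely for illustration (the patent provides no code), a minimal numeric sketch of formulas (1)-(3) follows; the extrinsics R and t, the focal length f, the pixel sizes dx and dy, and the principal point are placeholder values invented for the example:

```python
import numpy as np

def world_to_pixel(Xw, R, t, f, dx, dy, mu0, nu0):
    """Project a 3-D world point to pixel coordinates via formulas (1)-(3).

    Xw: world point (X_W, Y_W, Z_W); R, t: camera extrinsics;
    f: focal length; dx, dy: physical pixel sizes; (mu0, nu0): principal point.
    """
    Xc = R @ np.asarray(Xw, dtype=float) + t   # (1) world -> camera
    x = f * Xc[0] / Xc[2]                      # (2) camera -> image plane
    y = f * Xc[1] / Xc[2]
    mu = x / dx + mu0                          # (3) image plane -> pixel grid
    nu = y / dy + nu0
    return mu, nu

# Hypothetical example: a camera looking straight down from 1 m above the ground.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
print(world_to_pixel((0.5, 0.2, 0.0), R, t, f=0.004, dx=3e-6, dy=3e-6, mu0=640, nu0=480))
```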
- In this application, the processor 102 first converts the fisheye camera image into an undistorted image so that the subsequent IPM algorithm can process it, and then converts the undistorted image into an IPM image through formulas (1)-(3); compared with using the fisheye camera image directly, the resulting IPM image is more convenient for the subsequent calculation of distances to other obstacles.
- After obtaining the IPM images, the processor 102 stitches the IPM images corresponding to the four fisheye camera images captured at the same moment, obtaining a single IPM image that covers 360° around the vehicle 100 including the area near the vehicle body, and then caches it in the memory 103. The stitching effect is shown in FIG. 4: the left side of FIG. 4 shows the four fisheye camera images captured by the four fisheye cameras of the vehicle 100, and the right side shows the stitched IPM image.
- Image stitching refers to seamlessly joining two or more partially overlapping images to obtain a seamless panorama or a high-resolution image. In this application, the IPM images corresponding to the four fisheye camera images are stitched precisely to obtain an IPM image with a wider viewing angle covering 360° around the vehicle 100 and the area near the vehicle body. This application therefore imposes no special requirement on the IPM image stitching method; any existing method, such as feature-point stitching or the phase correlation method, can be used, which is not limited in this application.
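Purely as an illustration of compositing the four per-camera IPM images into one surround view (the canvas layout and strip sizes are assumptions of the example; a real stitcher would blend the overlapping seams rather than paste):

```python
import numpy as np

def compose_surround_view(front, left, right, rear):
    """Paste four bird's-eye IPM images onto one 800x800 canvas around the vehicle.

    Assumed sizes (illustrative only): front/rear are 200x800x3, left/right are
    800x200x3, all already warped into the common ground plane.
    """
    canvas = np.zeros((800, 800, 3), dtype=np.uint8)
    canvas[:, :200] = left
    canvas[:, -200:] = right
    canvas[:200, :] = front     # front/rear pasted last, so they win in the corners
    canvas[-200:, :] = rear
    return canvas
```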
- Step S305: acquire at least one pixel block in the IPM image.
- In one design, semantic segmentation is used to cut out each obstacle in the stitched IPM image. Semantic segmentation is a typical computer vision problem: some raw data (such as a flat image) is taken as input and transformed into a mask with the regions of interest highlighted, i.e., each pixel is assigned to its corresponding category using the image patch around it.
- Exemplarily, the semantic segmentation network adopted in this application is a neural network with an encoder-decoder structure. Given an input image, the task of the encoder is to learn the feature map of the input image; given the feature map provided by the encoder, the decoder then progressively performs category labeling of each pixel, that is, segmentation. In the semantic segmentation process of this application, after the stitched IPM image is fed into the encoder-decoder network, the encoder uses pooling layers to gradually reduce the spatial dimensions of the input IPM image, while the decoder gradually recovers object details and the corresponding spatial dimensions through network layers such as deconvolution layers. Between the encoder and the decoder there are usually direct information connections that help the decoder recover object details better, and the network finally outputs a dozen or so different classes of pixel blocks.
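The patent does not disclose a concrete network, so the following is only a toy sketch of the encoder-decoder pattern described above, written in PyTorch; the layer sizes and class count are invented, and the direct encoder-to-decoder information connections (skip connections) are omitted for brevity:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel classification (illustrative only)."""

    def __init__(self, num_classes=12):
        super().__init__()
        # Encoder: convolutions + pooling shrink the spatial dimensions.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder: transposed convolutions recover the spatial dimensions.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.dec(self.enc(x))  # logits: (N, num_classes, H, W)

net = TinySegNet()
logits = net(torch.randn(1, 3, 256, 256))
labels = logits.argmax(dim=1)  # per-pixel class map; each class region is a pixel block
```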
- After cutting the stitched IPM image through semantic segmentation to obtain the pixel blocks, the processor 102 identifies them with an artificial intelligence (AI) algorithm to determine the attribute corresponding to each pixel block, which may be: wheel stop (the bar in a parking space that prevents the wheels from continuing to move backward), pillar, wall, other vehicle, pedestrian, traffic cone, ground marking, and so on, helping the vehicle 100 understand the scene in an automatic parking scenario.
- The stitched IPM image shown in FIG. 6(a) includes the vehicle 100, pillar 1, pillar 2, pillar 3, pillar 4, parking space 1, parking space 2, parking space 3, a wheel stop, a ground marking, and so on. After semantic segmentation, the resulting image is shown in FIG. 6(b): there are ten pixel blocks in the image, where pixel block 1 corresponds to the vehicle 100, pixel block 2 to pillar 1, pixel block 3 to parking space 1, pixel block 4 to pillar 2, pixel block 5 to parking space 2, pixel block 6 to the wheel stop, pixel block 7 to pillar 3, pixel block 8 to parking space 3, pixel block 9 to the ground marking, and pixel block 10 to pillar 4.
- Finally, after identifying the attribute of each pixel block, the processor 102 classifies the pixel blocks into categories. Objects with height that the vehicle 100 must not collide with, such as wheel stops, walls, other vehicles, pedestrians, and traffic cones, are identified as obstacles; according to whether an obstacle is stationary, obstacles are divided into moving obstacles and static obstacles. During parking, the vehicle 100 must park according to ground markings (such as parking signs, lane lines, and so on), so the processor 102 classifies these separately as the marking category. Among them, the objects corresponding to the first pixel block include static obstacles and the marking category, and the objects corresponding to the second pixel block include moving obstacles.
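As a toy illustration of this three-way classification (the semantic class names below are invented for the example, not taken from the patent):

```python
# Hypothetical mapping from semantic classes to the categories used above.
STATIC_OBSTACLES = {"wheel_stop", "wall", "pillar", "cone", "parked_vehicle"}
MOVING_OBSTACLES = {"pedestrian", "moving_vehicle"}
MARKINGS = {"lane_line", "parking_sign", "parking_space"}

def categorize(semantic_class: str) -> str:
    """Map a semantic class to static obstacle / moving obstacle / marking."""
    if semantic_class in STATIC_OBSTACLES:
        return "static_obstacle"   # first pixel block: superimpose across frames
    if semantic_class in MOVING_OBSTACLES:
        return "moving_obstacle"   # second pixel block: keep only the latest frame
    if semantic_class in MARKINGS:
        return "marking"           # no height, hence no projection to remove
    return "unknown"
```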
- Step S307: eliminate the object projections in the pixel blocks.
- As shown in FIG. 6(a), the stitched IPM image looks like a top-down view extending from the center of the image (actually from the optical centers of the four fisheye cameras) toward the surroundings. Taking pillar 3 as an example: since pillar 3 is an object with height, the area behind pillar 3 (the side of pillar 3 facing away from the vehicle 100) is occluded by pillar 3 and cannot be shown in the image. The occlusion of the area behind such an object with height is therefore defined as "projection".
- To remove the interference caused by obstacle projections, the processor 102 uses the principle of motion parallax: as the vehicle 100 moves, each frame of fisheye camera image is captured at a different position, i.e., at a different shooting angle, so the shape of the pixel block of the same obstacle changes across the stitched IPM images of successive frames, and this change is produced by the projection varying with the shooting angle. During multi-frame superposition, the region of the same obstacle's pixel block that remains unchanged in all frames is regarded as the obstacle and is retained, while the region that changes is regarded as projection and is eliminated by the superposition, thereby reducing the influence of the projection.
- However, for movable obstacles such as pedestrians and moving vehicles, whose positions change, the pixel blocks would be wiped out during multi-frame superposition. Therefore, after determining the attributes of the pixel blocks, this application adopts different multi-frame superposition strategies for different categories of pixel blocks, specifically:
- 1. For the marking category (pixel blocks on the ground without height, such as parking signs and lane lines), if the processor 102 detects pixels of this category in one or a few frames of the stitched IPM image, the blocks are directly superimposed and retained during multi-frame superposition, because pixel blocks of this type have no height and hence no projection.
- 2. For static obstacles (pixel blocks with height in a stationary state, such as pillars, walls, and traffic cones), the processor 102 detects pixels of this category appearing in the stitched IPM images of all frames and checks whether the positions of the pixels within the same pixel block change across frames. During multi-frame superposition, the pixels of the same pixel block that remain unchanged in all frames are retained, while the pixels that change are regarded as projection and are eliminated.
- Optionally, if only a few frames are superimposed, the superimposed pixel block still includes projection regions; therefore, based on the principle that a region superimposed repeatedly at the same position accumulates a deeper brightness than regions superimposed at different positions, this application filters out the regions with lower luminance values, so that the resulting region is closer to the real obstacle region. FIG. 7 shows the effect obtained by multi-frame superposition of pillars 2 and 3 on the two sides of the vehicle 100.
- 3. For moving obstacles (pixel blocks with height in a moving state, such as pedestrians and moving vehicles), if the processor 102 detects that the position of the same pixel block changes partially or completely across frames, it handles the pixel block specially during multi-frame superposition: the position of the pixel block in the stitched IPM image of the last frame is directly used as the position of the pixel block after superposition.
- In this application, the frames selected for superposition are chosen by distance rather than by time: one frame of stitched IPM image is selected each time the vehicle 100 moves a fixed unit distance. This is because multi-frame superposition relies mainly on IPM images captured at different positions; if the vehicle 100 is stationary or moving at a non-uniform speed, selecting frames by time would make the superposition meaningless.
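A minimal sketch of the per-category superposition, assuming each selected frame contributes an aligned binary mask per pixel block; the array shapes, the 0-255 luminance convention, and the threshold ratio are assumptions of the example:

```python
import numpy as np

def superimpose_static(masks, keep_ratio=0.8):
    """Superimpose a static obstacle's masks from frames taken at different positions.

    masks: list of HxW uint8 arrays (255 = pixel belongs to the block).
    Pixels lit in every frame accumulate the deepest brightness; dimmer pixels
    (mostly projection, which shifts with the shooting angle) are filtered out.
    """
    acc = np.sum([m.astype(np.float32) for m in masks], axis=0)
    threshold = keep_ratio * 255 * len(masks)  # threshold on accumulated luminance
    return (acc >= threshold).astype(np.uint8) * 255

def superimpose_moving(masks):
    """For a moving obstacle, keep only the mask from the latest frame."""
    return masks[-1]
```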
- Step S309: determine the first boundary of a pixel block. The first boundary is the boundary where an object intersects the ground in the IPM image after multi-frame superposition.
- Although multi-frame superposition eliminates part of the projection, it cannot remove it completely, because the superimposed frames rarely cover the full 360° around an obstacle; the resulting pixel blocks therefore still contain many pixels caused by projection that are only possibly obstacle pixels, so the collision-relevant boundary of each obstacle needs to be determined.
- This application detects obstacle boundaries by emitting scanning rays from the camera optical centers. Since the superimposed IPM image is stitched from the images captured by the four fisheye cameras, the processor 102 first computes, from the extrinsic parameters of the four fisheye cameras, the positions of their optical centers in the superimposed IPM image; it then emits rays from the four optical-center points into the image region captured by the respective fisheye camera, scans out the pixels located on the boundary of each pixel block, and connects the boundary pixels in sequence. Boundary pixels whose connecting line passes through the corresponding optical center are regarded as projection boundaries, while boundary pixels whose connecting line does not pass through the corresponding optical center are regarded as the obstacle boundary that may cause a collision.
- Exemplarily, as shown in FIG. 8, the optical center of the fisheye camera on the right side of the vehicle 100 emits rays to scan pillar 1, scanning out the boundary A-B-C-D-E of pillar 1. The connecting lines of boundary A-B and boundary E-D both pass through the optical center, so they are regarded as projection boundaries, whereas the connecting lines of boundary B-C and boundary C-D do not pass through the optical center, so they are regarded as the collision-relevant boundary of pillar 1.
- In actual operation, however, the decision is made by the density of the scanned boundary pixels: at boundaries A-B and E-D, the scanned boundary pixels are sparse, so they are regarded as projection boundaries; at boundaries B-C and C-D, the scanned boundary pixels are dense, so they are regarded as the collision-relevant boundary of pillar 1.
- Exemplarily, in actual detection the boundaries of the pixel blocks produced by semantic segmentation may carry cutting deviations, so the scan may yield false detections. Since the projection boundary of an obstacle must be collinear with the optical center, when the obstacle distance obtained by one ray differs greatly from that of the two adjacent rays, the point can with high probability be considered to lie on the projection edge; such points are regarded as outliers, most likely points on the projection boundary, and are finally filtered out. As shown in FIG. 8, when the optical center emits a ray along the direction of the D-E line, the first boundary pixel is the pixel at point D, and the boundary pixels scanned on the D-E line (other than the pixel at point D) are regarded as outliers.
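The following is a minimal sketch, under invented parameters (grid size, angular step, outlier tolerance), of marching rays from an optical center across a pixel-block mask and filtering outlier hits as described above:

```python
import numpy as np

def scan_first_boundary(mask, center, num_rays=720, max_r=500.0):
    """March rays outward from the optical center and record, per ray, the first
    pixel of the block that the ray hits; these hits approximate the boundary
    where the object meets the ground (the projection lies behind them)."""
    h, w = mask.shape
    cy, cx = center
    hits = []
    for ang in np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False):
        dy, dx = np.sin(ang), np.cos(ang)
        for r in np.arange(1.0, max_r):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w):
                break
            if mask[y, x]:
                hits.append((y, x, r))
                break
    # Outlier filtering: drop hits whose range differs strongly from both neighbors.
    filtered = [p for i, p in enumerate(hits)
                if i in (0, len(hits) - 1)
                or min(abs(p[2] - hits[i - 1][2]), abs(p[2] - hits[i + 1][2])) < 10.0]
    return filtered
```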
- Step S311: calculate the distance between the vehicle 100 and the first boundary of a pixel block.
- Specifically, the processor 102 calculates the distance D between the collision-relevant boundary of the obstacle and the fisheye camera (i.e., between that boundary and the vehicle 100) from the coordinates of the pixels corresponding to the obstacle's collision-relevant boundary in the superimposed IPM image and the coordinates of the pixel corresponding to the fisheye camera responsible for photographing the obstacle, combined with the physical size data of the pixels.
- The distance calculation for each obstacle in the superimposed IPM image assumes that each obstacle touches the ground. However, the chassis of a vehicle does not touch the ground; only the wheels do, so the vehicle body is suspended. If the above scheme is used to calculate the distance between the ego vehicle and other vehicles, a certain error arises: in the IPM image, the underside of the suspended region appears in the image, which enlarges the computed distance between the vehicle 100 and other vehicles. If there are other vehicles on both sides when the vehicle 100 reverses into a garage, this error can easily cause the vehicle 100 to scrape the vehicles on either side.
- To reduce this error, the present application proposes a correction scheme. First, the body and wheel categories are distinguished in the semantic segmentation result. For pixels of the body category, as shown in FIG. 9, suppose the height of the fisheye camera of the vehicle 100 is d3 and the chassis height of the vehicle 200 is d4; the distance between the vehicle 100 and the vehicle 200 computed on the superimposed IPM image is d1, while the actual distance is d2. A similar-triangle relationship is obtained, from which the actual distance d2 between the vehicle 100 and the vehicle 200 can be calculated as:

  $$d2 = d1 \cdot \frac{d3 - d4}{d3}$$
- Here, d3 can be stored in the memory automatically when the vehicle 100 is manufactured, or it can be calculated from the pixel coordinates in the superimposed IPM image and the extrinsic parameter data of the fisheye camera; d4 can take the average chassis height of all vehicles.
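Numerically, the correction is a one-line function of d1, d3, and d4; the sample values below are invented for illustration:

```python
def corrected_distance(d1: float, d3: float, d4: float) -> float:
    """Correct the IPM-measured distance d1 to another vehicle using similar
    triangles: camera height d3, the other vehicle's chassis height d4."""
    return d1 * (d3 - d4) / d3

# e.g., measured 2.0 m, camera at 0.8 m, average chassis height 0.15 m
print(corrected_distance(2.0, 0.8, 0.15))  # ~1.625 m: the true gap is smaller than measured
```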
- To prevent the vehicle 100 from colliding with surrounding objects, a safe distance is set before the vehicle 100 leaves the factory. After obtaining the distance between the vehicle 100 and surrounding objects, the processor 102 checks whether it is greater than the safe distance; if the distance between the vehicle 100 and another object is less than the safe distance, the processor 102 generates warning indication information to alert the user. Exemplarily, a multi-domain controller (MDC) in the vehicle 100 receives the signals of the different sensors, analyzes and processes them, and, upon determining that the distance between the vehicle 100 and another object is less than the safe distance, sends warning indication information to the vehicle dynamics control (VDC) system; the VDC controls a speaker or a display screen to output visible or audible information, thereby alerting the user.
- This application uses fisheye cameras to collect information about the vehicle's surroundings and then processes the fisheye camera images to calculate the distance between the vehicle and obstacles in the surroundings; compared with other sensors such as ultrasonic radar and lidar, the measured distance values are more accurate.
- FIG. 10 is a schematic structural diagram of a detection apparatus provided by an embodiment of the present application. As shown in FIG. 10, the apparatus 1000 includes a transceiver unit 1001 and a processing unit 1002.
- The transceiver unit 1001 is configured to acquire images collected by at least two cameras.
- The processing unit 1002 is configured to convert the images into inverse perspective mapping (IPM) images, wherein at least one object is located in the IPM image; to acquire at least one pixel block in the IPM image, the pixel block being the set of pixels corresponding to the object in the IPM image; and to determine a first boundary of the pixel block, the first boundary being the boundary where the object intersects the ground.
- In one implementation, the image is a fisheye camera image, and the processing unit 1002 is specifically configured to perform distortion correction on the image to obtain an undistorted image, and to convert the undistorted image into the IPM image through an inverse perspective mapping algorithm.
- In one implementation, the processing unit 1002 is further configured to stitch the IPM images corresponding to the images collected by the at least two cameras at the same moment to obtain the stitched IPM image.
- In one implementation, the processing unit 1002 is further configured to eliminate the object projection in the pixel block, the object projection being the area formed because the object has height.
- In one implementation, the processing unit 1002 is specifically configured to superimpose the first pixel block across at least two frames when a first pixel block is included in the stitched IPM image of the at least two frames, the object corresponding to the first pixel block being a static object.
- In one implementation, the processing unit 1002 is further configured to eliminate pixels whose luminance value is less than a set threshold in the superimposed first pixel block.
- In one implementation, the processing unit 1002 is further configured to, when a second pixel block is included in the stitched IPM image of the at least two frames, retain the second pixel block in the frame with the latest acquisition time, the object corresponding to the second pixel block being a moving object.
- In one implementation, the processing unit 1002 is specifically configured to acquire the positions of the optical centers of the at least two cameras in the stitched IPM image according to the extrinsic parameters of the at least two cameras, and to scan, from the optical centers, the pixels located at the boundary of the pixel block, the positions of the pixels located at the boundary being the first boundary.
- In one implementation, the processing unit 1002 is further configured to calculate the distance between the current vehicle and the first boundary of the pixel block.
- In one implementation, the processing unit 1002 is further configured to output warning indication information when it detects that the distance between the current vehicle and the first boundary of the pixel block is less than a set threshold.
- In one implementation, the processing unit 1002 is further configured to determine the distance between a first camera and the ground and the chassis height of another vehicle, the first camera being the one of the at least two cameras that photographs the other vehicle, and the at least one object including the other vehicle; and to correct the calculated distance between the vehicle and the other vehicle according to the distance between the first camera and the ground and the chassis height of the other vehicle, to obtain a first distance value.
- The present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute any one of the above methods.
- The present invention further provides a computing device, including a memory and a processor, wherein the memory stores executable code, and when the processor executes the executable code, any one of the above methods is implemented.
- In addition, various aspects or features of the embodiments of the present application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques.
- The term "article of manufacture" as used in this application encompasses a computer program accessible from any computer-readable device, carrier, or medium.
- Computer-readable media may include, but are not limited to: magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tapes), optical discs (e.g., compact discs (CDs), digital versatile discs (DVDs), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
- The various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
- The term "machine-readable medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
- In the above embodiments, the detection apparatus 1000 in FIG. 10 may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- When implemented in software, it can be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
- The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
- The usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSDs)), and the like.
- It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present application.
- In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners.
- The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- The mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
Abstract
A detection method, an apparatus, and a vehicle. The method includes: after obtaining images collected by multiple cameras, converting the images collected at the same moment into IPM images; cutting out each object in the converted IPM images by semantic segmentation to obtain the pixel block corresponding to each object; and finally detecting, in each pixel block, the boundary that touches the ground, so that the distance between the vehicle and the object corresponding to each pixel block is calculated from the detected boundary. Compared with other sensors such as ultrasonic radar and lidar, the measured distance values are more accurate.
Claims (25)
- A detection method, characterized by comprising: obtaining images captured by at least two cameras; converting the images into an inverse perspective mapping (IPM) image, wherein at least one object is located in the IPM image; obtaining at least one pixel block in the IPM image, wherein the pixel block is the set of pixels corresponding to the object in the IPM image; and determining a first boundary of the pixel block, wherein the first boundary is the boundary at which the object meets the ground.
- The method according to claim 1, characterized in that the images are fisheye camera images, and converting the images into the IPM image comprises: performing distortion correction on the images to obtain undistorted images; and converting the undistorted images into the IPM image by means of an IPM algorithm.
- The method according to claim 1 or 2, characterized by further comprising: stitching the IPM images corresponding to the images captured by the at least two cameras at a same moment, to obtain a stitched IPM image.
- The method according to any one of claims 1 to 3, characterized in that, before determining the first boundary of the pixel block, the method further comprises: eliminating an object projection in the pixel block, wherein the object projection is the region formed by the object because of its height.
- The method according to claim 4, characterized in that eliminating the object projection in the pixel block comprises: when at least two frames of stitched IPM images include a first pixel block, overlaying the first pixel blocks of the at least two frames, wherein the object corresponding to the first pixel block is a static object.
- The method according to claim 5, characterized by further comprising: eliminating, from the overlaid first pixel block, pixels whose brightness value is smaller than a set threshold.
- The method according to claim 5 or 6, characterized by further comprising: when the at least two frames of stitched IPM images include a second pixel block, retaining the second pixel block of the most recently acquired frame, wherein the object corresponding to the second pixel block is a moving object.
- The method according to any one of claims 1 to 7, characterized in that determining the first boundary of the pixel block comprises: obtaining, according to extrinsic parameters of the at least two cameras, positions of the optical centers of the at least two cameras in the stitched IPM image; and scanning, from the optical centers of the at least two cameras, the pixels of the pixel block that lie on its boundary, wherein the positions of the boundary pixels are the first boundary.
- The method according to any one of claims 1 to 8, characterized by further comprising: computing the distance between a current vehicle and the first boundary of the pixel block.
- The method according to any one of claims 1 to 9, characterized by further comprising: outputting warning indication information upon detecting that the distance between the current vehicle and the first boundary of the pixel block is smaller than a set threshold.
- The method according to any one of claims 1 to 10, characterized by further comprising: determining the distance between a first camera and the ground and the chassis height of another vehicle, wherein the first camera is the one of the at least two cameras that captured the other vehicle, and the at least one object includes the other vehicle; and correcting, according to the distance between the first camera and the ground and the chassis height of the other vehicle, the computed distance between the vehicle and the other vehicle, to obtain a first distance value.
- A detection apparatus, characterized by comprising: a transceiver unit, configured to obtain images captured by at least two cameras; and a processing unit, configured to convert the images into an inverse perspective mapping (IPM) image, wherein at least one object is located in the IPM image; obtain at least one pixel block in the IPM image, wherein the pixel block is the set of pixels corresponding to the object in the IPM image; and determine a first boundary of the pixel block, wherein the first boundary is the boundary at which the object meets the ground.
- The apparatus according to claim 12, characterized in that the images are fisheye camera images, and the processing unit is specifically configured to perform distortion correction on the images to obtain undistorted images, and to convert the undistorted images into the IPM image by means of an IPM algorithm.
- The apparatus according to claim 12 or 13, characterized in that the processing unit is further configured to stitch the IPM images corresponding to the images captured by the at least two cameras at a same moment, to obtain a stitched IPM image.
- The apparatus according to any one of claims 12 to 14, characterized in that the processing unit is further configured to eliminate an object projection in the pixel block, wherein the object projection is the region formed by the object because of its height.
- The apparatus according to claim 15, characterized in that the processing unit is specifically configured to, when at least two frames of stitched IPM images include a first pixel block, overlay the first pixel blocks of the at least two frames, wherein the object corresponding to the first pixel block is a static object.
- The apparatus according to claim 16, characterized in that the processing unit is further configured to eliminate, from the overlaid first pixel block, pixels whose brightness value is smaller than a set threshold.
- The apparatus according to claim 16 or 17, characterized in that the processing unit is further configured to, when the at least two frames of stitched IPM images include a second pixel block, retain the second pixel block of the most recently acquired frame, wherein the object corresponding to the second pixel block is a moving object.
- The apparatus according to any one of claims 12 to 18, characterized in that the processing unit is specifically configured to obtain, according to extrinsic parameters of the at least two cameras, positions of the optical centers of the at least two cameras in the stitched IPM image, and to scan, from the optical centers of the at least two cameras, the pixels of the pixel block that lie on its boundary, wherein the positions of the boundary pixels are the first boundary.
- The apparatus according to any one of claims 12 to 19, characterized in that the processing unit is further configured to compute the distance between a current vehicle and the first boundary of the pixel block.
- The apparatus according to any one of claims 12 to 20, characterized in that the processing unit is further configured to output warning indication information upon detecting that the distance between the current vehicle and the first boundary of the pixel block is smaller than a set threshold.
- The apparatus according to any one of claims 12 to 21, characterized in that the processing unit is further configured to determine the distance between a first camera and the ground and the chassis height of another vehicle, wherein the first camera is the one of the at least two cameras that captured the other vehicle, and the at least one object includes the other vehicle; and to correct, according to the distance between the first camera and the ground and the chassis height of the other vehicle, the computed distance between the vehicle and the other vehicle, to obtain a first distance value.
- A vehicle, characterized by comprising: at least two cameras; at least one memory, configured to store instructions or a program; and at least one processor, configured to execute the instructions or program to implement the method according to any one of claims 1 to 11.
- A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed in a computer, the computer is caused to perform the method according to any one of claims 1 to 11.
- A computing device comprising a memory and a processor, characterized in that the memory stores executable code, and the processor, when executing the executable code, implements the method according to any one of claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202180000434.XA CN112912895B (zh) | 2021-01-29 | 2021-01-29 | Detection method and apparatus, and vehicle
PCT/CN2021/074328 WO2022160232A1 (zh) | 2021-01-29 | 2021-01-29 | Detection method and apparatus, and vehicle
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/CN2021/074328 WO2022160232A1 (zh) | 2021-01-29 | 2021-01-29 | Detection method and apparatus, and vehicle
Publications (1)
Publication Number | Publication Date
---|---
WO2022160232A1 (zh) | 2022-08-04
Family ID: 76109095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2021/074328 WO2022160232A1 (zh) | Detection method and apparatus, and vehicle | 2021-01-29 | 2021-01-29
Country Status (2)
Country | Link
---|---
CN (1) | CN112912895B (zh)
WO (1) | WO2022160232A1 (zh)
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2024124865A1 (zh) * | 2022-12-14 | 2024-06-20 | 天津所托瑞安汽车科技有限公司 | Vehicle attitude detection method and apparatus, vehicle, and storage medium
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116079759A * | 2023-04-07 | 2023-05-09 | 西安零远树信息科技有限公司 | Recognition system for service robots
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2012017021A * | 2010-07-08 | 2012-01-26 | Panasonic Corp | Parking assistance device and vehicle
CN105539427A * | 2014-10-28 | 2016-05-04 | 爱信精机株式会社 | Parking assistance system and parking assistance method
US20170270370A1 * | 2014-09-29 | 2017-09-21 | Clarion Co., Ltd. | In-vehicle image processing device
CN107933427A * | 2017-11-09 | 2018-04-20 | 武汉华安科技股份有限公司 | Embedded parking assistance system for large vehicles
CN110390832A * | 2019-06-25 | 2019-10-29 | 东风柳州汽车有限公司 | Automated valet parking method
CN111369439A * | 2020-02-29 | 2020-07-03 | 华南理工大学 | Real-time stitching method of panoramic surround-view images for surround-view-based automatic parking space recognition
CN111976601A * | 2019-05-24 | 2020-11-24 | 北京四维图新科技股份有限公司 | Automatic parking method, apparatus, device, and storage medium
CN112171675A * | 2020-09-28 | 2021-01-05 | 深圳市丹芽科技有限公司 | Obstacle avoidance method and apparatus for a mobile robot, robot, and storage medium
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109977776B * | 2019-02-25 | 2023-06-23 | 驭势(上海)汽车科技有限公司 | Lane line detection method and apparatus, and in-vehicle device
CN110084133B * | 2019-04-03 | 2022-02-01 | 百度在线网络技术(北京)有限公司 | Obstacle detection method and apparatus, vehicle, computer device, and storage medium

- 2021-01-29: CN application CN202180000434.XA, patent CN112912895B (zh), status: Active
- 2021-01-29: WO application PCT/CN2021/074328, publication WO2022160232A1 (zh), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112912895A (zh) | 2021-06-04 |
CN112912895B (zh) | 2022-07-22 |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21921838; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 21921838; Country of ref document: EP; Kind code of ref document: A1