WO2021185284A1 - A panoramic bird's-eye view image generation method - Google Patents
A panoramic bird's-eye view image generation method
- Publication number: WO2021185284A1 (application PCT/CN2021/081330)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- mask
- debugging
- image sensor
- Prior art date
Classifications
- H04N23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N23/60 — Control of cameras or camera modules
- H04N23/80 — Camera processing pipelines; Components thereof
- G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
Definitions
- the invention relates to image data processing technology, and in particular to a method for generating a panoramic bird's-eye view image.
- Panoramic bird's-eye view image generation is an important technology widely used on mobile equipment. For a vehicle-mounted system, for example, generating a panoramic bird's-eye view image from the images captured by the vehicle's image sensors helps the operator understand the vehicle's current surroundings, so that control behaviors such as turning and reversing can be performed accurately.
- the existing panoramic bird's-eye view image generation technology is better suited to ordinary, low-height vehicles. A rubber-tyred container crane (RTG), by contrast, is very large: its height and length are generally close to 30 meters.
- the operator needs to look down at the scene below the RTG and control the equipment to grab and place containers at specific locations.
- a panoramic bird's-eye view image gives the operator a more comprehensive and convenient viewing angle, which can greatly improve operating efficiency and work safety.
- however, the image sensors installed on an RTG inevitably capture a large number of vertical objects, so most of the resulting panoramic bird's-eye view image is occupied by them; the area the operator needs to observe is obscured by vertical objects, which interferes with normal operation.
- the present invention proposes a panoramic bird's-eye view image generation method, including: acquiring original images captured by multiple image sensors located at different positions, wherein one or more of the original images includes a partial image of the device on which the multiple sensors are mounted; filtering each original image based on the preset mask corresponding to its image sensor to obtain a corresponding mask-processed image, wherein the mask-processed image does not include the image of the device; and performing stitching processing on the mask-processed images to obtain a panoramic bird's-eye view of the surroundings of the device.
- it further includes: performing illumination compensation processing and/or frequency division fusion processing on the panoramic bird's-eye view image of the device.
- it further includes: filling the virtual image in all or part of the vacant area in the area filtered by the preset mask in the panoramic bird's-eye view image.
- it also includes performing de-distortion processing on the original image based on the distortion parameters of the image sensor.
- re-determining the preset mask every time the device changes its position, which includes: acquiring a debugging image through each image sensor, wherein one or more of the debugging images includes a partial image of the device; and removing the image of the device from each debugging image to generate the preset mask corresponding to each image sensor.
- the method further includes: determining the mapping relationship between the debugging images.
- determining the mapping relationship between the debugging images includes: recognizing the markers included in the debugging images; and determining the mapping relationship between the debugging images based on the relationship between the recognition result and the preset meaning each marker represents.
- alternatively, determining the mapping relationship between the debugging images includes: extracting and matching feature points in the debugging images; and determining the mapping relationship between the debugging images based on how their feature points match.
- the method, wherein extracting and matching the characteristic points in the debugging image includes: extracting and matching the characteristic points using ORB, SURF or SIFT algorithms.
- the equipment is a container crane.
- the present application further provides a panoramic bird's-eye view image generation device, including: an image acquisition module configured to acquire original images captured by multiple image sensors located at different positions, wherein one or more of the original images includes a partial image of the device on which the multiple sensors are mounted; a mask processing module configured to filter each original image based on the preset mask corresponding to its image sensor to obtain a corresponding mask-processed image, wherein the mask-processed image does not include the image of the device; and a splicing processing module configured to perform splicing processing on the mask-processed images to obtain a panoramic bird's-eye view of the surroundings of the device.
- the panoramic bird's-eye view image generation device further includes an initialization module configured to obtain a debugging image through each image sensor, wherein one or more of the debugging images includes a partial image of the equipment, and to remove the image of the device from each debugging image and generate the mask corresponding to each image sensor.
- the application further provides a container crane on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the foregoing methods are implemented, the device in the method being the container crane.
- FIG. 1 is a schematic flowchart of a method for generating a panoramic bird's-eye view image in an embodiment of this application;
- FIG. 2A is a schematic diagram of the structure of a rubber-tyred container crane (RTG) in an embodiment of the application;
- FIG. 2B is a schematic diagram of image sensors whose shooting direction is vertically downward in an embodiment of the application;
- FIG. 3A is a schematic diagram of an image taken by a single RTG image sensor in an embodiment of the application;
- FIG. 3B is a schematic diagram of a panoramic bird's-eye view image obtained by splicing the images taken by each image sensor directly, in an embodiment of the application;
- FIG. 3C is a schematic diagram of a panoramic bird's-eye view image obtained by mask-processing the images taken by each RTG image sensor before splicing, in an embodiment of the present application;
- FIG. 4 is a schematic diagram of the principle of mask processing in an embodiment of the present application;
- FIG. 5 is a schematic diagram of a preset mask in an embodiment of this application;
- FIG. 6A is a schematic diagram of a normal image in an embodiment of the application;
- FIG. 6B is a schematic diagram of pincushion distortion in an embodiment of the application;
- FIG. 6C is a schematic diagram of barrel distortion in an embodiment of the application;
- FIG. 7 is a schematic structural diagram of an apparatus for generating a panoramic bird's-eye view image in an embodiment of the application;
- FIG. 8 is an internal structure diagram of a computer device in an embodiment of the application.
- FIG. 1 is a schematic flow chart of a method for generating a panoramic bird's-eye view image according to an embodiment of this application. As shown in FIG. 1, this application provides a method for generating a panoramic bird's-eye view image. The method mainly includes the following steps:
- Step 101 Obtain original images captured by multiple image sensors located at different positions.
- the original images of the multiple positions may be captured by image sensors arranged in different orientations, and the image sensors may be devices installed on the RTG and used to capture images of different orientations around the RTG.
- an image captured by any image sensor and at least one image captured by another image sensor have an overlapping area.
- the original images from the multiple positions can reflect the horizontal ground around the current RTG. From these images a panoramic bird's-eye view of the RTG's surroundings can be obtained, so that the equipment operator can understand the surrounding environment from the panoramic bird's-eye view image and carry out equipment operation and other control accordingly.
- the equipment described in the present application may be a rubber-tyred container crane (RTG), a large airplane, a large building, or other equipment that has considerable height but a relatively fixed structure.
- the technical solution of the present application is mainly aimed at tall equipment. Because the equipment itself is tall, the image sensors are installed high up, and the images they capture contain many vertical objects (such as the vertical rack structure of the equipment itself), which degrades the quality of the final panoramic bird's-eye view image.
- FIG. 2A is a schematic diagram of the structure of a rubber-tyred container crane (RTG) in an embodiment of the application.
- the method is introduced in detail below, taking an RTG as the example device.
- FIG. 2A is a schematic structural diagram of an RTG.
- an RTG is a special machine for large-scale specialized container yards. It is huge: its height and length are generally close to 30 meters.
- the operator can control the RTG to move along the X-axis direction in the figure, and the operating cabin in which the operator sits (mounted on the frame 21, not shown in the figure) can move along the frame 21 in the Y-axis direction in the figure.
- RTG is mainly used for loading and unloading standard containers.
- the height (Z-axis distance) of the RTG's frame 21 above the ground on which the tires 23 rest is much greater than that of ordinary equipment such as vehicles.
- the image sensors of the RTG can be mounted on the frame 21 in the figure. As those skilled in the art will appreciate, the positions and number of image sensors are not fixed, as long as images in all directions can be captured and, once stitched together, reflect the current surroundings of the RTG. Because the image sensors are far from the ground, they easily capture the structure of the frame 21 when shooting the area below it, and the large sensor-to-ground distance increases the proportion of the image occupied by the frame structure, so the resulting panoramic bird's-eye view image contains a great deal of unnecessary information that hinders the operator.
- Figure 2B shows image sensors whose shooting direction is vertically downward in an embodiment of this application.
- the image sensors in this application can be installed on the (highest) beams or support structure of the RTG, specifically at each vertex or midpoint position. With vertex or midpoint mounting, the number of image sensors may specifically be four. Note that in actual applications the installation positions and number of image sensors can be adjusted to the particular RTG, and are not limited here.
- the shooting area of image sensor C3 is X1 to X3;
- the shooting area of image sensor C4 is X2 to X4;
- image sensors C3 and C4 therefore share part of their shooting areas, namely X2 to X3, so the images they capture have an overlapping region and image stitching can be performed.
- the operator can clearly understand the road conditions around the device, thereby facilitating normal operation control of the device.
- FIG. 3A is a schematic diagram of images taken by a single RTG image sensor in an embodiment of the application
- FIG. 3B is a schematic diagram of a panoramic bird's-eye view image obtained by image splicing directly based on images taken by each single image sensor in an embodiment of the application.
- as FIG. 3A and FIG. 3B show, the frame structure within the dashed line occupies a large proportion of the image and obscures many areas that need to be observed.
- Step 102 Perform filtering processing on the original image based on a preset mask corresponding to each image sensor to obtain a corresponding mask processed image.
- mask processing is the process of removing a specific area from the original image according to the preset mask of the corresponding image sensor; the specific area that is removed changes with the preset mask.
- the method of determining the preset mask corresponding to each image sensor is described in detail in the subsequent part.
- Step 103 Perform splicing processing on each of the mask processed images to obtain a panoramic bird's-eye view image of the device.
- after the processor masks the original images taken by the different image sensors, each resulting mask-processed image contains only the road-condition area, and the panoramic bird's-eye view image obtained by stitching the mask-processed images therefore also contains only the road-condition area; the operator can then control the device according to the image information in the panoramic bird's-eye view image.
- the mask-processed images include at least a first mask-processed image and a second mask-processed image (i.e., mask-processed images obtained from different image sensors). Specifically, performing image splicing on the mask-processed images further includes: splicing the first mask-processed image and the second mask-processed image according to the mapping relationship between them to obtain a spliced image.
- the RTG is provided with multiple image sensors, and according to the mapping relationship, the mask processed images obtained by all the image sensors are stitched in the above-mentioned method to obtain the RTG panoramic bird's-eye view image.
- FIG. 3C is a schematic diagram of a panoramic bird's-eye view image obtained by performing mask processing on images taken by each RTG single image sensor and then performing image stitching in an embodiment of the present application.
- the image after the above-mentioned preset mask and image stitching processing is as shown in FIG. 3C.
- the panoramic bird's-eye view image obtained by the method of the present application effectively removes the interference that parts such as the rack structure itself cause to the road-condition view, leaving only the required road-condition information.
- the process of determining the mapping relationship corresponding to each image sensor's debugging image only needs to be performed once. In some embodiments, if later maintenance or similar events change the relative positions of the image sensors, the preprocessing needs to be performed again.
- each image sensor performs pre-shooting to obtain a debugging image, which may include an image of the device.
- the corresponding preset mask can be determined.
- the value of the preset mask can be 1 for the area that does not include the image of the device, and 0 for the area that includes the image of the device.
- Fig. 5 is a schematic diagram of a preset mask in an embodiment of the application.
- the area corresponding to the white part in the figure is the area that does not include the device image (corresponding to the value 1 in the preset mask), which is a reserved area; the area corresponding to the black part is the area that includes the device image (the value in the preset mask is 0), which is the discarded area.
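The reserved/discarded behavior described above (mask value 1 keeps a pixel, 0 discards it) can be sketched in a few lines of NumPy; the toy image, the mask contents, and the `apply_mask` helper are illustrative only, not taken from the patent.

```python
import numpy as np

def apply_mask(original, mask):
    """Keep pixels where mask == 1 (reserved area without the device image),
    zero out pixels where mask == 0 (discarded area occupied by the device)."""
    # Broadcast the single-channel mask over the colour channels.
    return original * mask[..., np.newaxis]

# Toy 4x4 "original image" with 3 colour channels.
original = np.full((4, 4, 3), 200, dtype=np.uint8)

# Preset mask: 1 = reserved (road) area, 0 = discarded (rack) area.
mask = np.array([[1, 1, 1, 1],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1]], dtype=np.uint8)

masked = apply_mask(original, mask)
```

The discarded region becomes black, which is exactly the vacant area that the later filling step (step 105) may paint with a virtual image.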
- the process of obtaining the preset mask may be performed after the initial installation of the image sensor is completed. It can be understood that in the technical solution of the present application, after the position of each image sensor is fixed, the process of determining the preset mask corresponding to each image sensor only needs to be performed once.
- the fixed position of each image sensor may mean that the relative position of each image sensor does not change, and the installation position of each image sensor on the RTG does not change. In the actual application process, if the position of the image sensor changes, the preset mask needs to be re-determined.
- after obtaining the original image captured by each image sensor, the processor does not stitch the original images directly; it first obtains the preset mask corresponding to each image sensor and performs mask processing on each original image, so as to obtain mask-processed images that contain only the road-condition area and exclude the "interference part", such as the frame structure itself, that would otherwise block the road-condition area below.
- the shape, location and size of the "interference part" differ between sensors, so the preset masks corresponding to different image sensors also differ.
- the area covered by each image sensor generally does not change.
- the image region occupied by the frame structure therefore does not change either; that part of the content is fixed for a given image sensor. Consequently, before mask processing is performed on the original image of each image sensor, the method further includes the step of determining the preset mask corresponding to each image sensor.
- the method for obtaining the mapping relationship between the first mask-processed image and the second mask-processed image includes: obtaining the debugging image corresponding to each image sensor, extracting matching points in each debugging image, and determining the image mapping relationship of each image sensor based on those matching points.
- the image from any image sensor and the image from at least one other image sensor have an overlapping area, so that stitching can be performed according to the overlapping area. A fixed position for each image sensor means that the relative positions between the image sensors do not change and the installation position of each image sensor on the RTG does not change; because of this, the mapping relationship derived from the debugging images can also be applied to the mask-processed images. In actual use, if the position of an image sensor changes, the mapping relationship needs to be re-determined. In some embodiments, the process of acquiring the mapping relationship may be performed after the initial installation of the image sensors is completed, and may take place before the preset mask is obtained.
- the feature points in different debug images are first extracted.
- the feature points can be extracted using the ORB (Oriented FAST and Rotated BRIEF, a fast feature point extraction and description algorithm), SURF (Speeded-Up Robust Features) or SIFT (Scale-Invariant Feature Transform) algorithm.
- the matching point can also be determined on the debugging image by manual delineation.
- the feature points of each debug image are extracted, the feature points are matched to determine the matching point in each debug image, and then the image mapping relationship of each image sensor is determined based on the matching points.
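Once matching points are available, the mapping between two debugging images of the (roughly planar) ground is typically a 3x3 homography. The sketch below estimates it with the standard Direct Linear Transform in plain NumPy; in practice a library routine (e.g. OpenCV's `findHomography`, with RANSAC for mismatched points) would be used, and the four point pairs here are made up for illustration.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H mapping
    src points to dst points (at least 4 correspondences required)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # null vector of A is the solution
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the scale ambiguity

def warp_point(H, pt):
    """Map a single 2-D point through the homography."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Four matched point pairs (in practice produced by ORB/SURF/SIFT matching).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(5, -3), (6, -3), (5, -2), (6, -2)]   # pure translation by (5, -3)
H = estimate_homography(src, dst)
```

The recovered `H` is then applied to every pixel of one mask-processed image to bring it into the coordinate frame of the other before splicing.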
- an identification code may be used, such as an aruco code or other two-dimensional codes or even a barcode, and each identifier corresponds to a preset value or a meaning.
- the aruco code can be a binary square marker consisting of a wide black border and an inner binary matrix that determines its id; the black border makes the marker quick to detect in the image, and the binary code allows the id to be verified. Figure 9 is a schematic diagram of an aruco code.
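The structure just described, a black border around an id-encoding binary matrix, can be illustrated with a minimal decoder. This simplified scheme (no error correction, no rotation handling, no real marker dictionary) only mimics the layout of an aruco-style marker, not the actual aruco detection algorithm.

```python
import numpy as np

def decode_square_marker(bits):
    """Decode a binary square marker: verify the one-cell black (0) border,
    then read the inner binary matrix row by row into an integer id.
    Returns None if the border check fails."""
    if bits[0, :].any() or bits[-1, :].any() or bits[:, 0].any() or bits[:, -1].any():
        return None                       # border must be all black
    inner = bits[1:-1, 1:-1].flatten()
    return int("".join(str(b) for b in inner), 2)

# 4x4 marker: black border around a 2x2 inner matrix encoding 0b1011 = 11.
marker = np.array([[0, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]], dtype=np.uint8)
marker_id = decode_square_marker(marker)
```

The border check mirrors why the black frame speeds up detection: any candidate square whose edge cells are not uniformly dark can be rejected before the id bits are even read.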
- the method for generating a panoramic bird's-eye view image further includes: performing parameter calibration on a plurality of image sensors.
- FIG. 6A is a schematic diagram of a normal image in an embodiment of this application
- FIG. 6B is a schematic diagram of pincushion distortion in an embodiment of this application
- FIG. 6C is a schematic diagram of barrel distortion in an embodiment of this application. Since the image sensors are far from the ground, the images they capture over long distances are prone to distortion.
- FIG. 6A, 6B, and 6C are schematic diagrams of a normal image and of distorted images, FIG. 6B showing pincushion distortion and FIG. 6C barrel distortion. Image distortion changes the normal form of an object, and the same object appearing in different forms cannot be stitched. This embodiment therefore first performs parameter calibration on the multiple image sensors to eliminate image distortion and keep objects in their normal form, which facilitates the subsequent image stitching.
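The pincushion and barrel distortions of FIGS. 6B and 6C are commonly modeled with a radial polynomial on normalized image coordinates. A first-order NumPy sketch follows; the coefficient values are chosen only for illustration, and the sign convention (k1 > 0 pincushion, k1 < 0 barrel) is the common one for this forward model.

```python
import numpy as np

def radial_distort(points, k1):
    """Apply the first-order radial distortion model
    x_d = x * (1 + k1 * r^2) to normalized image points (N x 2 array)."""
    r2 = np.sum(points ** 2, axis=1, keepdims=True)  # squared radius per point
    return points * (1.0 + k1 * r2)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
barrel = radial_distort(pts, k1=-0.2)      # points pulled toward the center
pincushion = radial_distort(pts, k1=0.2)   # points pushed away from the center
```

Undistortion, the calibration goal described in the text, inverts this mapping using coefficients estimated during parameter calibration; the center of the image (r = 0) is unaffected either way.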
- the processing step of parameter calibration may be performed after the image sensor is installed on the device. In some embodiments, the processing step of parameter calibration may be performed before the image sensor is installed on the RTG.
- for a general image sensor, the following pinhole model can be used:
- [X Y Z] T can be the coordinates in the world coordinate system;
- [u v] T can be the coordinates in the image coordinate system.
- for calibration, the Zhang Zhengyou checkerboard calibration method can be used.
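The pinhole model referred to above projects a world point [X Y Z]T through the extrinsics (R, t) and the intrinsic matrix K to image coordinates [u v]T. A minimal NumPy sketch; the intrinsic values are made up for illustration, whereas real values (and the distortion coefficients) would come from a calibration such as the Zhang Zhengyou checkerboard method.

```python
import numpy as np

# Hypothetical intrinsic matrix K: focal lengths fx, fy and principal
# point cx, cy, all in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, R, t, X):
    """Pinhole model: s * [u, v, 1]^T = K (R X + t) for a world point X."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]          # divide by the depth-dependent scale s

R = np.eye(3)                    # camera axes aligned with the world axes
t = np.array([0.0, 0.0, 0.0])
X = np.array([1.0, 0.5, 10.0])   # world point 10 m in front of the camera
uv = project(K, R, t, X)
```

With these numbers the point lands at pixel (400, 280): 800 * 1/10 + 320 horizontally and 800 * 0.5/10 + 240 vertically.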
- This application provides a method for generating a panoramic bird's-eye view image. After the original image taken by each image sensor is obtained, mask processing is first performed using the preset mask corresponding to that image sensor; the mask processing removes the image of the device and thus preserves the information quality of the panoramic bird's-eye view image. In practical applications, this ensures that the operator can control the device normally based on the panoramic bird's-eye view image.
- the method further includes: step 104, panoramic bird's-eye view image optimization processing.
- the optimization processing of the panoramic bird's-eye view image may include: illumination compensation of the overlapping area and/or frequency division fusion of the overlapping area.
- this embodiment further includes a step of performing illumination compensation processing on the overlapping area, so that the brightness difference of the overlapping area in the images captured by different image sensors can be reduced or eliminated.
- the illumination compensation processing can be implemented by using histogram equalization or other illumination equalization algorithms, which is not limited here.
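Histogram equalization, the illumination-compensation option named above, maps each intensity through the normalized cumulative histogram. A self-contained NumPy sketch on a synthetic dark image (the image content is illustrative only):

```python
import numpy as np

def equalize_histogram(gray):
    """Classic histogram equalization on an 8-bit grayscale image:
    build the cumulative histogram, normalize it to [0, 1], and use it
    as a lookup table spreading the intensities over the full range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[gray]

# A dark image whose values occupy only the bottom quarter of the range.
dark = np.tile(np.arange(0, 64, dtype=np.uint8), (4, 1))
equalized = equalize_histogram(dark)
```

Applying the same equalization (or matching histograms between sensors) in the overlap region reduces the brightness seams that would otherwise be visible after splicing.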
- the frequency division fusion of the overlapping area may be a fusion of the first mask-processed image and the second mask-processed image in the frequency domain: using the Fourier transform and the inverse Fourier transform, frequency-domain signal fusion is performed on the two images to obtain a fused image.
- the processing specifically includes the following steps:
- first, a Fourier transform is performed on the first overlapping image and on the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image;
- second, signal fusion is performed on the signals at the same frequency in the first and second signal sequences to obtain a fused signal sequence;
- third, an inverse Fourier transform is performed on the fused signal sequence to obtain the fused image of the first and second overlapping images.
- This embodiment performs frequency division fusion on the overlapping area of the two images: the overlapping area of each image is decomposed into signals of different frequencies, and the corresponding signals at each frequency are fused separately.
- compared with fusing directly in the image domain, fusing in the frequency domain avoids losing the signals' amplitude and phase information.
- the amplitude information carries the texture information and the phase information carries the position information, so frequency-domain fusion yields a more natural result than image-domain fusion.
- the signal fusion processing includes weighted-sum processing of the signals.
- the weighted summation may be based on a first preset weight for the first signal sequence and a second preset weight for the second signal sequence (for example, both weights equal to 50%); by choosing the first and second preset weights, the fusion effect can be adjusted to the actual situation.
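The three-step frequency-division fusion above (forward Fourier transform, per-frequency weighted sum, inverse transform) can be sketched directly with NumPy's FFT; the uniform test images are illustrative only.

```python
import numpy as np

def frequency_fusion(img1, img2, w1=0.5, w2=0.5):
    """Frequency-division fusion of two overlapping grayscale images:
    1) Fourier-transform both images,
    2) take a weighted sum of the spectra frequency by frequency,
    3) inverse-transform the fused spectrum back to the image domain."""
    f1 = np.fft.fft2(img1.astype(float))
    f2 = np.fft.fft2(img2.astype(float))
    fused_spectrum = w1 * f1 + w2 * f2    # per-frequency weighted fusion
    return np.real(np.fft.ifft2(fused_spectrum))

# Two toy overlap images with different brightness.
a = np.full((8, 8), 100.0)
b = np.full((8, 8), 200.0)
fused = frequency_fusion(a, b)
```

Because the Fourier transform is linear, equal constant weights reproduce an image-domain average; the value of the frequency-domain formulation is that the weights can differ per frequency, for example blending low-frequency illumination differently from high-frequency texture, while keeping both amplitude and phase information.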
- the method further includes: step 105, panoramic bird's-eye view image mapping processing. Specifically, because the mask processing filters out part of the image, black areas appear in the image, as shown in FIG. 5.
- a virtual image may be filled into the parts of the panoramic bird's-eye view image that the preset mask filtered out, making the whole image more complete and visually pleasing.
- a device for generating a panoramic bird's-eye view image is provided, and the device mainly includes the following modules:
- the image acquisition module 701 is configured to acquire original images captured by multiple image sensors located at different positions, where one or more of the original images includes a partial image of the device on which the multiple sensors are mounted;
- the mask processing module 702 is configured to filter the original image based on the preset mask corresponding to each image sensor to obtain a corresponding mask processed image, wherein the mask processed image does not include the image of the device;
- the splicing processing module 703 is configured to perform splicing processing on each of the mask processed images to obtain a panoramic bird's-eye view image of the device.
- This embodiment provides a panoramic bird's-eye view image generation device. After the original image taken by each image sensor is acquired, mask processing is first performed using the preset mask corresponding to that image sensor; the mask processing removes the image of the equipment from the original image and thereby preserves the information quality of the panoramic bird's-eye view image. In practical applications, this ensures that the operator can control the equipment normally based on a panoramic bird's-eye view image that does not include the equipment image.
- the panoramic bird's-eye view image generation device further includes an initialization module for acquiring the debugging image corresponding to each image sensor and determining the preset mask corresponding to each image sensor, the preset mask being obtained by removing the device image from the debugging image.
- the panoramic bird's-eye view image generation device further includes a mapping relationship determination module for acquiring the debugging image corresponding to each image sensor, extracting matching points in each debugging image, and determining the image mapping relationship of each image sensor.
- the stitching processing module 703 is further configured to: extract the overlapping area of the first image and the second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; perform fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replace the first and second overlapping images with the fused image to obtain a new first image and a new second image; and perform image stitching on the new first image and the new second image.
- the stitching processing module 703 is further configured to perform frequency domain signal fusion processing on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image.
- the stitching processing module 703 is further configured to: perform Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; perform signal fusion processing on the signals corresponding to the same frequency in the first signal sequence and the second signal sequence to obtain a fused signal sequence; and perform an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
- the stitching processing module 703 is further configured to: perform illumination compensation processing on the overlapped area of each mask processed image.
- each module in the above-mentioned panoramic bird's-eye view image generating device can be implemented in whole or in part by software, hardware, or a combination thereof.
- the above-mentioned modules may be embedded in, or independent of, the processor of the computer device in hardware form, or may be stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
- a computer device including a memory and a processor, and a computer program is stored in the memory.
- when the processor executes the computer program, the following steps are implemented: acquiring multiple original images, each captured by an image sensor arranged in a different orientation, where the image captured by any image sensor has an overlapping area with at least one image captured by another image sensor; performing mask processing based on each original image and the preset mask corresponding to each image sensor to obtain the mask-processed image corresponding to each original image, the mask-processed images not including the device image; and, based on the overlapping areas between the mask-processed images, performing image stitching processing on the mask-processed images to obtain a panoramic bird's-eye view image.
- the processor further implements the following steps when executing the computer program: acquiring the debugging image corresponding to each image sensor; and determining the preset mask corresponding to each image sensor, each preset mask being obtained by removing the device image from the corresponding debugging image.
- the processor further implements the following steps when executing the computer program: acquiring the debugging image corresponding to each image sensor; extracting matching points in each debugging image; and determining the image mapping relationship of each image sensor based on the matching points in each debugging image.
- the processor further implements the following steps when executing the computer program: extracting the overlapping area of the first image and the second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; performing fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replacing the first overlapping image and the second overlapping image with the fused image to obtain a new first image and a new second image; and performing image stitching processing on the new first image and the new second image.
- the processor further implements the following steps when executing the computer program: performing frequency domain signal fusion processing on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image.
- when the processor executes the computer program, the following steps are further implemented: performing Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; performing signal fusion processing on the signals corresponding to the same frequency in the first signal sequence and the second signal sequence to obtain a fused signal sequence; and performing an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
- the processor further implements the following steps when executing the computer program: performing illumination compensation processing on the overlapping regions of each mask processed image.
- Fig. 8 is an internal structure diagram of a computer device in an embodiment.
- the computer device may specifically be a terminal (or server).
- the computer equipment includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
- the memory includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium of the computer device stores an operating system and may also store a computer program.
- when that computer program is executed by the processor, the processor can implement the method for generating a panoramic bird's-eye view image.
- a computer program may also be stored in the internal memory, and when the computer program is executed by the processor, the processor can execute the method for generating a panoramic bird's-eye view image.
- the display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen; the input device can be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the computer equipment, or an external keyboard, touchpad, or mouse.
- FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied.
- a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- a computer-readable storage medium is provided, and a computer program is stored thereon.
- when the computer program is executed by a processor, the following steps are implemented: acquiring multiple original images, each captured by an image sensor arranged in a different orientation, where the image captured by any image sensor has an overlapping area with at least one image captured by another image sensor; performing mask processing based on each original image and the preset mask corresponding to each image sensor to obtain the mask-processed image corresponding to each original image, the mask-processed images not including the device image; and, based on the overlapping areas between the mask-processed images, performing image stitching processing on the mask-processed images to obtain a panoramic bird's-eye view image.
- the following steps are also implemented: acquiring the debugging image corresponding to each image sensor; and determining the preset mask corresponding to each image sensor, each preset mask being obtained by removing the device image from the corresponding debugging image.
- the following steps are also implemented: acquiring the debugging image corresponding to each image sensor; extracting matching points in each debugging image; and determining the image mapping relationship of each image sensor based on the matching points in each debugging image.
- the following steps are also implemented: extracting the overlapping area of the first image and the second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; performing fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replacing the first overlapping image and the second overlapping image with the fused image to obtain a new first image and a new second image; and performing image stitching processing on the new first image and the new second image.
- the following steps are further implemented: performing frequency domain signal fusion processing on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image .
- the following steps are also implemented: performing Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; performing signal fusion processing on the signals corresponding to the same frequency in the first signal sequence and the second signal sequence to obtain a fused signal sequence; and performing an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
- the following steps are further implemented: performing illumination compensation processing on the overlapped area of each mask processed image.
- Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- the solution of the present application further includes a container crane on which the aforementioned computer program is stored.
- when the computer program is executed by a processor, the steps of the method described in the aforementioned solution are implemented, wherein the device is the container crane.
- it further includes an image sensor located on the transverse frame of the crane.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a method for generating a panoramic bird's-eye view image, comprising: acquiring original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of image sensors are mounted; filtering each of the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and stitching the mask-processed images to obtain a panoramic bird's-eye view image of the device. The present application further includes a panoramic bird's-eye view image generation apparatus and a container crane.
Description
The present invention relates to image data processing technology, and in particular to a method for generating a panoramic bird's-eye view image.
Panoramic bird's-eye view image generation is an important technology widely applied to movable equipment. For example, in a vehicle-mounted system, generating a panoramic bird's-eye view image from images captured by on-board image sensors helps the operator better understand the vehicle's current surroundings, so that turning, reversing, and other vehicle control maneuvers can be performed accurately.
Existing panoramic bird's-eye view generation technology suits ordinary low-height vehicles well. However, some special equipment has image sensors mounted at considerable height. A rubber-tyred gantry crane (Rubber Tyre Gantry, RTG), for example, is very large, typically approaching 30 meters in both height and length. The operator looks down at the scene below from the RTG and controls the equipment to pick up and place containers at specific positions. A panoramic bird's-eye view image gives the operator a more complete and convenient viewing angle, which can greatly improve operating efficiency and safety. When an existing panoramic bird's-eye view generation method is used, however, the image sensors mounted on the RTG inevitably capture a large number of vertical objects, and most of the resulting panoramic bird's-eye view image is occupied by them; the regions the operator needs to observe are therefore occluded, interfering with normal operation.
Summary of the Invention
In view of the technical problems in the prior art, the present invention provides a method for generating a panoramic bird's-eye view image, comprising: acquiring original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of sensors are mounted; filtering the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and stitching the mask-processed images to obtain a panoramic bird's-eye view image of the device.
In particular, the method further comprises: performing illumination compensation processing and/or frequency-division fusion processing on the panoramic bird's-eye view image of the device.
In particular, the method further comprises: filling all or part of the vacant regions filtered out by the preset masks in the panoramic bird's-eye view image with a virtual image.
In particular, the method further comprises performing de-distortion processing on the original images based on the distortion parameters of the image sensors.
In particular, the preset masks are re-determined each time the device changes its position, which comprises: acquiring debugging images through the image sensors, wherein one or more of the debugging images include a partial image of the device; and removing the image of the device from the debugging images and generating the preset mask corresponding to each image sensor.
In particular, the method further comprises: determining the mapping relationship between the debugging images.
In particular, determining the mapping relationship between the debugging images comprises: recognizing markers included in the debugging images; and determining the mapping relationship between the debugging images based on the relationship between the recognition results and the preset meanings represented by the markers.
In particular, determining the mapping relationship between the debugging images comprises: extracting feature points from the debugging images and matching them; and determining the mapping relationship between the debugging images based on how the feature points of the debugging images match.
In particular, extracting and matching the feature points in the debugging images comprises: extracting and matching the feature points using the ORB, SURF, or SIFT algorithm.
In particular, the device is a container crane.
The present application further includes a panoramic bird's-eye view image generation apparatus, comprising: an image acquisition module configured to acquire original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of sensors are mounted; a mask processing module configured to filter the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and a stitching processing module configured to stitch the mask-processed images to obtain a panoramic bird's-eye view image of the device.
In particular, the panoramic bird's-eye view image generation apparatus further comprises an initialization module configured to acquire debugging images through the image sensors, wherein one or more of the debugging images include a partial image of the device, and to remove the image of the device from the debugging images and generate the mask corresponding to each image sensor.
The present application further includes a container crane on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the foregoing methods are implemented, wherein the device is the container crane.
Preferred embodiments of the present invention are described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a panoramic bird's-eye view image generation method in an embodiment of the present application;
Fig. 2A is a schematic structural diagram of a rubber-tyred gantry crane (RTG) in an embodiment of the present application;
Fig. 2B shows image sensors whose shooting direction is vertically downward in an embodiment of the present application;
Fig. 3A is a schematic diagram of an image captured by a single RTG image sensor in an embodiment of the present application;
Fig. 3B is a schematic diagram of a panoramic bird's-eye view image obtained by directly stitching the images captured by the individual image sensors in an embodiment of the present application;
Fig. 3C is a schematic diagram of a panoramic bird's-eye view image obtained by mask-processing each single-sensor RTG image before stitching in an embodiment of the present application;
Fig. 4 is a schematic diagram of the principle of mask processing in an embodiment of the present application;
Fig. 5 is a schematic diagram of a preset mask in an embodiment of the present application;
Fig. 6A is a schematic diagram of a normal image in an embodiment of the present application;
Fig. 6B is a schematic diagram of pincushion distortion in an embodiment of the present application;
Fig. 6C is a schematic diagram of barrel distortion in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a panoramic bird's-eye view image generation apparatus in an embodiment of the present application;
Fig. 8 is an internal structure diagram of a computer device in an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the following detailed description, reference may be made to the accompanying drawings, which form a part of this application and illustrate specific embodiments of the application. In the drawings, similar reference numerals describe substantially similar components in different figures. The specific embodiments of the application are described in sufficient detail below to enable a person of ordinary skill with the relevant knowledge and technology to implement the technical solutions of the application. It should be understood that other embodiments may be utilized, or structural, logical, or electrical changes may be made to the embodiments of the application.
Fig. 1 is a schematic flowchart of a panoramic bird's-eye view image generation method in an embodiment of the present application. As shown in Fig. 1, the present application provides a panoramic bird's-eye view image generation method that mainly includes the following steps:
Step 101: acquire original images captured by a plurality of image sensors located at different positions.
The original images from the plurality of positions may be captured by image sensors arranged in different orientations; the image sensors may be devices mounted on the RTG for capturing images of different orientations around the RTG. The image captured by any image sensor has an overlapping area with at least one image captured by another image sensor. The original images from multiple positions reflect the horizontal ground scene around the RTG, and a panoramic bird's-eye view image of the RTG's surroundings can be obtained from them, enabling the operator to understand the environment around the equipment from the panoramic bird's-eye view image and to control the equipment accordingly.
The device in this application may be a rubber-tyred gantry crane (Rubber Tyre Gantry, RTG), a large aircraft, a large building, or another object that has considerable height but a relatively fixed structure of its own. In some embodiments, the technical solution of this application is mainly applied to tall equipment: because the equipment itself is tall, the image sensors are mounted relatively high, and the captured images contain many vertical objects (for example, the vertical frame structure of the equipment itself), which degrades the quality of the final panoramic bird's-eye view image.
Fig. 2A is a schematic structural diagram of a rubber-tyred gantry crane (RTG) in an embodiment of the present application. In this embodiment the method is described in detail using an RTG as the device. The RTG is specialized machinery for large professional container yards; it is enormous, typically approaching 30 meters in both height and length. The operator can drive the RTG along the X-axis direction in the figure, and the operator's cabin (mounted on frame 21, not shown in the figure) can move along frame 21 in the Y-axis direction. The RTG is mainly used to load and unload standard containers; because containers are large, frame 21 is far higher above the ground on which tires 23 stand (the Z-axis distance) than ordinary equipment such as vehicles. The RTG's image sensors may be mounted on frame 21 in the figure. As is known to those skilled in the art, the positions and number of the image sensors are not fixed, as long as images of all orientations can be captured and, once stitched, reflect the RTG's current surroundings. Because the image sensors are far from the ground, when capturing images below frame 21 they easily capture the structure of frame 21 itself, and the large sensor-to-ground distance increases the proportion of the image occupied by that structure, so the resulting panoramic bird's-eye view image contains a large amount of unnecessary information that interferes with the operator's work.
In one embodiment, the mounting positions and shooting directions of the image sensors on the RTG are explained. Fig. 2B shows image sensors whose shooting direction is vertically downward in an embodiment of the present application. Referring to Figs. 2A and 2B, the image sensors of this application may be mounted on the (topmost) beams or supporting structure of the RTG, specifically at the vertex positions or midpoint positions; with vertex or midpoint placement, the number of image sensors may be four. Note that in practical applications the mounting positions and number of image sensors can be adjusted to the specific RTG and are not limited here.
Referring to Fig. 2B, based on the field of view of each image sensor, the shooting area of image sensor C3 is X1 to X2 and the shooting area of image sensor C4 is X3 to X4; image sensors C3 and C4 share the same shooting area, namely X3 to X2, so the images captured by C3 and C4 have an overlapping region and image stitching processing can be performed. In addition, from the images captured by image sensors C3 and C4, the operator can clearly see the road conditions around the equipment, which facilitates normal operation and control of the equipment.
Fig. 3A is a schematic diagram of an image captured by a single RTG image sensor, and Fig. 3B is a schematic diagram of a panoramic bird's-eye view image obtained by directly stitching the single-sensor images. As Fig. 3A shows, the frame structure in the dashed box occupies a large proportion of the image and occludes much of the area that needs to be observed. As Fig. 3B shows, if such images are stitched directly without processing, the resulting panoramic bird's-eye view image contains a great deal of the frame structure itself (the structure inside the dashed boxes), so the areas that need to be observed are occluded, severely affecting the operator's normal observation and operation of the equipment.
Therefore, a method is urgently needed to remove such unnecessary information from the images.
Step 102: filter the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images.
Mask processing may be the process of obtaining, from an original image, a mask-processed image in which a specific region has been removed, based on the preset mask corresponding to the image sensor; the specific region changes with the preset mask.
In one embodiment, the method of determining the preset mask corresponding to each image sensor is elaborated in a later section.
Step 103: stitch the mask-processed images to obtain a panoramic bird's-eye view image of the device.
After the processor mask-processes the original images captured by the different image sensors, the resulting mask-processed images contain only the road-area portions, and the panoramic bird's-eye view image obtained by stitching them likewise contains only the road-area portions; the operator can therefore control the equipment based on the image information in the panoramic bird's-eye view image.
In one embodiment, the mask-processed images include at least a first mask-processed image and a second mask-processed image (i.e., mask-processed images obtained from different image sensors). Specifically, stitching the mask-processed images further includes: stitching the first mask-processed image and the second mask-processed image according to the mapping relationship between them to obtain a stitched image. In some embodiments, the RTG is equipped with multiple image sensors, and the mask-processed images obtained from all the image sensors are stitched in the above manner according to the mapping relationships to obtain the RTG panoramic bird's-eye view image.
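As a toy sketch of this compositing step (the translation-only placements, image sizes, and trivial all-ones masks below are made up for illustration; in the method itself the mappings come from the debugging-image calibration), two mask-processed views can be pasted onto a shared bird's-eye canvas like this:

```python
import numpy as np

def paste(canvas, image, mask, top_left):
    """Paste the masked-in pixels of `image` onto `canvas` at `top_left`."""
    r, c = top_left
    h, w = image.shape
    region = canvas[r:r + h, c:c + w]       # a view into the canvas
    region[mask > 0] = image[mask > 0]      # only retained pixels overwrite
    return canvas

canvas = np.zeros((6, 10), dtype=np.uint8)
left_view = np.full((6, 6), 100, dtype=np.uint8)   # stand-in for one sensor's view
right_view = np.full((6, 6), 200, dtype=np.uint8)  # stand-in for another sensor's view
keep_all = np.ones((6, 6), dtype=np.uint8)         # trivial preset mask for the sketch

paste(canvas, left_view, keep_all, (0, 0))
paste(canvas, right_view, keep_all, (0, 4))        # columns 4..5 overlap both views
print(canvas[0])
```

In the overlap columns this naive paste simply lets the later image win; the illumination compensation and frequency-domain fusion discussed elsewhere in the description replace that with a smoother blend.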
Fig. 3C is a schematic diagram of a panoramic bird's-eye view image obtained by mask-processing each single-sensor RTG image before stitching, in an embodiment of the present application. In some embodiments, the image after the above preset-mask and stitching processing is as shown in Fig. 3C. Compared with Fig. 3B, the panoramic bird's-eye view image obtained by the method of this application effectively removes the interference of the frame structure and other distracting parts with the road view, leaving only the required road-condition information.
The preprocessing is introduced in detail next. The process of determining the mapping relationships of the debugging images of the image sensors only needs to be performed once. In some embodiments, if the relative positions of the image sensors change, for example due to later maintenance, the preprocessing needs to be performed again.
Regarding how the preset mask is obtained, Fig. 4 is a schematic diagram of the principle of mask processing in an embodiment of the present application. Specifically, as shown in Fig. 4, the original image has size 3x3, and each pixel of the original image is ANDed with the corresponding pixel of the preset mask: i & 1 = i and i & 0 = 0 (where i is a pixel value in the original image and 1 and 0 are the corresponding pixels in the preset mask), yielding the mask-processed image corresponding to the original image. From the values of the preset mask in Fig. 4, the illustrated mask operation extracts the "T"-shaped region of the original image.
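The AND-mask principle described for Fig. 4 can be sketched in NumPy as follows (the 3x3 pixel values and the T-shaped mask are illustrative, not taken from the patent figure):

```python
import numpy as np

# A 3x3 "original image" and a binary preset mask whose 1-entries form a T;
# 0-entries mark the region (e.g. the crane's own frame) to discard.
original = np.array([[10, 20, 30],
                     [40, 50, 60],
                     [70, 80, 90]], dtype=np.uint8)
mask = np.array([[1, 1, 1],
                 [0, 1, 0],
                 [0, 1, 0]], dtype=np.uint8)

# i & 1 = i and i & 0 = 0 per pixel; multiplying by the 0/1 mask performs the
# same operation (the literal bitwise form for 8-bit data is
# original & (mask * 255)).
masked = original * mask
print(masked)  # T-shaped region kept, the rest zeroed
```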
In this embodiment, each image sensor performs a preliminary capture to obtain a debugging image, which may include the image of the device. Once the different regions are determined, the corresponding preset mask can be determined: the mask value can be set to 1 for regions that do not include the device image and 0 for regions that do, and the preset mask is then applied to the original image so that the regions of the original image that do not include the device image are retained.
Fig. 5 is a schematic diagram of a preset mask in an embodiment of the present application. In the figure, the white portion corresponds to the region that does not include the device image (mask value 1) and is retained; the black portion corresponds to the region that includes the device image (mask value 0) and is discarded.
In some embodiments, the process of obtaining the preset masks can be performed after the image sensors are first installed. It can be understood that in the technical solution of this application, once the positions of the image sensors are fixed, the process of determining the preset mask corresponding to each image sensor only needs to be performed once. Fixed positions here means that the relative positions between the image sensors do not change and the mounting positions of the image sensors on the RTG do not change. In practice, if the position of an image sensor changes, the preset masks need to be re-determined.
In this application, an original image captured by an image sensor contains both the road area below the equipment and "interfering parts" such as the frame structure that occlude the road area below. Therefore, after obtaining the original images captured by the image sensors, the processor does not stitch them directly; it first obtains the preset mask corresponding to each image sensor and performs mask processing based on the preset masks and the original images, thereby obtaining mask-processed images that contain only the road area and none of the "interfering parts" such as the frame structure that would occlude the road area below.
In addition, because the image sensors are mounted at different positions and have different parameters (such as focal length), the shape, position, and size of the "interfering parts" differ between the images captured by different sensors, so the preset masks corresponding to different image sensors also differ. After an image sensor is installed on the equipment, its coverage area generally does not change, and in the different images it captures, image portions such as the frame structure do not change either; that is, this content is fixed content in that sensor's images. Therefore, before mask-processing the original images of the image sensors, a step of determining the preset mask corresponding to each image sensor is included.
Having seen how the preset masks are obtained, the rule for stitching the mask-processed images is also needed. In one embodiment, the method of obtaining the mapping relationship between the first and second mask-processed images includes: acquiring the debugging image corresponding to each image sensor, extracting matching points in each debugging image, and determining the image mapping relationship of each image sensor based on the matching points in each debugging image. As noted above, in the technical solution of this application, once the positions of the image sensors are fixed, the image of any image sensor has an overlapping area with the image of at least one other image sensor, so image stitching can be performed based on the overlapping areas; fixed positions means that the relative positions between the sensors do not change and their mounting positions on the RTG do not change. For this same reason, the mapping relationships of the debugging images can also be applied to the mask-processed images. In practice, if the position of an image sensor changes, the mapping relationships need to be re-determined. In some embodiments, obtaining the mapping relationships can be performed after the image sensors are first installed, and may be performed before obtaining the preset masks.
Specifically, after the debugging images captured by the image sensors are obtained, feature points are first extracted from the different debugging images. Feature-point extraction may be implemented with algorithms such as ORB (Oriented FAST and Rotated BRIEF, a fast feature extraction and description algorithm), SURF (Speeded Up Robust Features), or SIFT (Scale-Invariant Feature Transform). In some embodiments, matching points can also be marked on the debugging images manually.
After the feature points of the debugging images are extracted, they are matched to determine the matching points in each debugging image, and the image mapping relationship of each image sensor is then determined from the matching points.
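One common way to turn such matched points into an image mapping is to estimate a homography with the direct linear transform (DLT). The sketch below uses synthetic matches generated from a known homography rather than real ORB/SURF/SIFT output, so that the estimate can be checked against the ground truth:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src from >= 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)     # null-space vector = H up to scale
    return H / H[2, 2]

# Synthetic "matching points" related by a known homography H_true.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [0.001, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], dtype=float)
src_h = np.hstack([src, np.ones((5, 1))])
dst_h = (H_true @ src_h.T).T
dst = dst_h[:, :2] / dst_h[:, 2:3]

H_est = homography_dlt(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

With noisy real matches, a robust estimator (e.g. RANSAC around this DLT core) is normally used instead of the plain least-squares solution.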
Alternatively, the mapping relationship between the debugging images can be determined by recognizing markers included in the debugging images and using the relationship between the recognition results and the preset meanings represented by the markers. In some embodiments, identification codes can be used, such as aruco codes, other two-dimensional codes, or even barcodes; each marker corresponds to a preset value, that is, a meaning. An aruco code may be a binary square marker consisting of a wide black border and an inner binary matrix; the inner matrix determines its id. The black border facilitates fast detection in the image, and the binary coding allows the id to be verified. A schematic diagram of an aruco code is shown in Fig. 9.
The images obtained by the image sensors involved in this application exhibit distortion. Therefore, in one embodiment, before acquiring the debugging images captured by the image sensors, the panoramic bird's-eye view image generation method further includes: performing parameter calibration on the plurality of image sensors. Fig. 6A is a schematic diagram of a normal image, Fig. 6B of pincushion distortion, and Fig. 6C of barrel distortion, in embodiments of the present application. Because the image sensors are mounted far from the ground, the captured images are prone to distortion in long-distance imaging, as shown in Figs. 6A, 6B, and 6C. Image distortion changes the normal shape of objects, and the same object appearing in different shapes cannot be stitched; therefore, this embodiment first calibrates the parameters of the image sensors, so that image distortion can be eliminated and the normal shape of objects in the images preserved, facilitating subsequent image stitching. In some embodiments, the parameter calibration step can be performed after the image sensors are installed on the equipment; in other embodiments, it can be performed before the image sensors are installed on the RTG.
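The barrel and pincushion effects of Figs. 6B and 6C are often modeled with a single radial coefficient. The sketch below uses that simplified textbook model (not the patent's actual calibration) to show how the sign of the coefficient pushes normalized image points inward or outward; de-distortion amounts to inverting this mapping:

```python
import numpy as np

def radial_distort(xy, k1):
    """Apply a one-parameter radial distortion to normalized image points.

    k1 < 0 pulls points toward the center (barrel distortion);
    k1 > 0 pushes them outward (pincushion distortion).
    """
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)  # squared radius per point
    return xy * (1.0 + k1 * r2)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]])
barrel = radial_distort(pts, k1=-0.2)
pincushion = radial_distort(pts, k1=0.2)
print(barrel[1], pincushion[1])  # the center point (0,0) is unaffected either way
```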
Specifically, for a typical image sensor the pinhole model can be used, in which a world point is projected to the image as s·[u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T, where [X Y Z]^T are coordinates in the world coordinate system and [u v]^T are coordinates in the image coordinate system. For parameter calibration, Zhang Zhengyou's checkerboard calibration method can be used.
In addition, for a fisheye image sensor, a dedicated fisheye camera model can be used instead (the source gives this model only as a figure, which is not reproduced here).
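As a concrete illustration of the pinhole model just mentioned (the intrinsic matrix K and the identity pose below are made-up values, not calibration results from the patent):

```python
import numpy as np

# Made-up intrinsics: focal lengths fx = fy = 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)        # identity extrinsics: camera frame == world frame
t = np.zeros(3)

def project(point_w):
    """Project a 3D world point to pixel coordinates with the pinhole model."""
    p_cam = R @ point_w + t          # world -> camera coordinates
    uvw = K @ p_cam                  # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]          # divide by the scale s (the depth)

uv = project(np.array([0.5, -0.25, 2.0]))
print(uv)  # a point 2 m in front of the camera, offset from the optical axis
```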
This application provides a panoramic bird's-eye view image generation method. After the original images captured by the image sensors are acquired, mask processing is first performed based on the preset mask corresponding to each image sensor and the original images; the mask processing removes the image of the device, thereby ensuring the information quality of the panoramic bird's-eye view image. In practical applications, this ensures that the operator can perform normal operation control of the equipment based on the panoramic bird's-eye view image.
In some embodiments, step 103 is further followed by step 104: panoramic bird's-eye view image optimization. The optimization may include illumination compensation of the overlapping areas and/or frequency-division fusion of the overlapping areas.
In one embodiment, because the image sensors are mounted at different positions, the brightness of images of the same target captured by different sensors differs to some extent; for example, an image captured by a sensor facing the sun is brighter than one captured by a sensor facing away from the sun. This embodiment therefore also includes a step of performing illumination compensation on the overlapping areas, so that brightness differences of the overlapping areas between the images captured by different sensors can be reduced or eliminated. Illumination compensation may be implemented with histogram equalization or another illumination-equalization algorithm, which is not limited here.
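Histogram equalization, the illumination-equalization option named above, can be sketched without any imaging library; for 8-bit grayscale input this is essentially the CDF-stretching mapping that OpenCV's equalizeHist also applies:

```python
import numpy as np

def equalize_hist(img):
    """Histogram-equalize an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-intensity counts
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                        # first occupied bin
    # Stretch the CDF so the darkest occupied bin maps to 0 and the
    # brightest to 255, then apply it as a lookup table.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast patch: values clustered in 50..80.
rng = np.random.default_rng(0)
dark = rng.integers(50, 81, size=(32, 32), dtype=np.uint8)
eq = equalize_hist(dark)
print(dark.min(), dark.max(), "->", eq.min(), eq.max())
```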
In some embodiments, to obtain a more natural fusion result, the frequency-division fusion of the overlapping area may be fusion of the first mask-processed image and the second mask-processed image in the frequency domain; that is, frequency-domain signal fusion is performed on the first and second mask-processed images through Fourier transform and inverse Fourier transform to obtain a fused image. The process specifically includes the following steps:
First, Fourier transforms are performed on the first overlapping image and the second overlapping image respectively, obtaining a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image.
Second, signal fusion is performed on the signals corresponding to the same frequency in the first signal sequence and the second signal sequence, obtaining a fused signal sequence.
Third, an inverse Fourier transform is performed on the fused signal sequence, obtaining the fused image of the first overlapping image and the second overlapping image.
This embodiment may perform frequency-division fusion on the overlapping area of two images: the overlapping area of the two images is decomposed into signals of different frequencies, and the corresponding signals of each frequency are fused separately. Compared with stitching directly in the image domain, this embodiment fuses in the frequency domain, which avoids loss of the amplitude information and phase information of the signals, where the amplitude information characterizes texture and the phase information characterizes position; a more natural fusion result than image-domain fusion can thus be obtained.
Optionally, the signal fusion includes weighted summation. When fusing the signals corresponding to the same frequency in the first signal sequence and the second signal sequence, a weighted sum can be computed using a first preset weight for the first signal sequence and a second preset weight for the second signal sequence (for example, both weights are 50%). By setting the first preset weight and the second preset weight, the fusion result can be adjusted to the actual situation.
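The three fusion steps plus the 50/50 weighting can be sketched in NumPy. The constant test images below are illustrative; because the FFT is linear, a 50/50 frequency-domain blend of them matches the pixel average, which makes the sketch easy to check:

```python
import numpy as np

def frequency_fuse(overlap_a, overlap_b, w_a=0.5, w_b=0.5):
    """Fuse two equally sized overlap images in the frequency domain.

    Step 1: FFT each overlap image into frequency coefficients.
    Step 2: fuse coefficients of the same frequency by weighted summation.
    Step 3: inverse FFT the fused coefficients back to the image domain.
    """
    fa = np.fft.fft2(overlap_a.astype(float))
    fb = np.fft.fft2(overlap_b.astype(float))
    fused_spectrum = w_a * fa + w_b * fb   # same-frequency weighted fusion
    return np.real(np.fft.ifft2(fused_spectrum))

a = np.full((8, 8), 100.0)   # stand-in for the first overlap image
b = np.full((8, 8), 200.0)   # stand-in for the second overlap image
fused = frequency_fuse(a, b)
print(fused[0, 0])  # approximately 150 for the 50/50 blend
```

In practice the per-frequency weights need not all be equal; giving the two images different weights in different frequency bands is what makes the "frequency-division" fusion more flexible than a plain image-domain average.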
In some embodiments, step 103 is further followed by step 105: panoramic bird's-eye view image patching. Specifically, because mask processing filters out part of the image, black regions appear in the image, as shown in Fig. 5. In some embodiments, a virtual image can be added to the parts of the panoramic bird's-eye view image filtered out by the preset masks, making the whole image more complete and visually pleasing.
It should be understood that although the steps in the flowcharts of the foregoing embodiments are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict ordering restriction on their execution, and they can be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 7, a panoramic bird's-eye view image generation apparatus is provided, mainly comprising the following modules:
an image acquisition module 701, which acquires original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of sensors are mounted;
a mask processing module 702, configured to filter the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and
a stitching processing module 703, configured to stitch the mask-processed images to obtain a panoramic bird's-eye view image of the device.
This embodiment provides a panoramic bird's-eye view image generation apparatus. After acquiring the original images captured by the image sensors, it first performs mask processing based on the preset masks corresponding to the image sensors and the original images; the mask processing removes the device image from the original images, thereby ensuring the information quality of the panoramic bird's-eye view image. In practical applications, this ensures that the operator can perform normal operation control of the equipment based on a panoramic bird's-eye view image that does not include the device image.
In one embodiment, the panoramic bird's-eye view image generation apparatus further includes: an initialization module for acquiring the debugging image corresponding to each image sensor and determining the preset mask corresponding to each image sensor, each preset mask being obtained by removing the device image from the corresponding debugging image.
In one embodiment, the panoramic bird's-eye view image generation apparatus further includes: a mapping relationship determination module for acquiring the debugging image corresponding to each image sensor, extracting matching points in each debugging image, and determining the image mapping relationship of each image sensor based on the matching points in each debugging image.
In one embodiment, the stitching processing module 703 is further configured to: extract the overlapping area of a first image and a second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; perform fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replace the first overlapping image and the second overlapping image with the fused image to obtain a new first image and a new second image; and perform image stitching processing on the new first image and the new second image.
In one embodiment, the stitching processing module 703 is further configured to perform frequency-domain signal fusion on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image.
In one embodiment, the stitching processing module 703 is further configured to: perform Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; perform signal fusion processing on the signals corresponding to the same frequency in the two sequences to obtain a fused signal sequence; and perform an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
In one embodiment, the stitching processing module 703 is further configured to perform illumination compensation on the overlapping areas of the mask-processed images.
For the specific limitations of the panoramic bird's-eye view image generation apparatus, reference may be made to the limitations of the panoramic bird's-eye view image generation method above, which are not repeated here. Each module of the above apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The above modules can be embedded in, or independent of, the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, including a memory and a processor, with a computer program stored in the memory. When the processor executes the computer program, the following steps are implemented: acquiring multiple original images, each captured by an image sensor arranged in a different orientation, where the image captured by any image sensor has an overlapping area with at least one image captured by another image sensor; performing mask processing based on each original image and the preset mask corresponding to each image sensor to obtain the mask-processed image corresponding to each original image, the mask-processed images not including the device image; and, based on the overlapping areas between the mask-processed images, performing image stitching processing on the mask-processed images to obtain a panoramic bird's-eye view image.
In one embodiment, the processor further implements the following steps when executing the computer program: acquiring the debugging image corresponding to each image sensor; and determining the preset mask corresponding to each image sensor, each preset mask being obtained by removing the device image from the corresponding debugging image.
In one embodiment, the processor further implements the following steps when executing the computer program: acquiring the debugging image corresponding to each image sensor; extracting matching points in each debugging image; and determining the image mapping relationship of each image sensor based on the matching points in each debugging image.
In one embodiment, the processor further implements the following steps when executing the computer program: extracting the overlapping area of the first image and the second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; performing fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replacing the first overlapping image and the second overlapping image with the fused image to obtain a new first image and a new second image; and performing image stitching processing on the new first image and the new second image.
In one embodiment, the processor further implements the following steps when executing the computer program: performing frequency-domain signal fusion on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image.
In one embodiment, the processor further implements the following steps when executing the computer program: performing Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; performing signal fusion processing on the signals corresponding to the same frequency in the two sequences to obtain a fused signal sequence; and performing an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
In one embodiment, the processor further implements the following step when executing the computer program: performing illumination compensation on the overlapping areas of the mask-processed images.
Fig. 8 is an internal structure diagram of a computer device in an embodiment. The computer device may specifically be a terminal (or server). As shown in Fig. 8, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, enables the processor to implement the panoramic bird's-eye view image generation method. A computer program may also be stored in the internal memory which, when executed by the processor, causes the processor to perform the panoramic bird's-eye view image generation method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art can understand that the structure shown in Fig. 8 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: acquiring multiple original images, each captured by an image sensor arranged in a different orientation, where the image captured by any image sensor has an overlapping area with at least one image captured by another image sensor; performing mask processing based on each original image and the preset mask corresponding to each image sensor to obtain the mask-processed image corresponding to each original image, the mask-processed images not including the device image; and, based on the overlapping areas between the mask-processed images, performing image stitching processing on the mask-processed images to obtain a panoramic bird's-eye view image.
In one embodiment, when the computer program is executed by a processor, the following steps are also implemented: acquiring the debugging image corresponding to each image sensor; and determining the preset mask corresponding to each image sensor, each preset mask being obtained by removing the device image from the corresponding debugging image.
In one embodiment, when the computer program is executed by a processor, the following steps are also implemented: acquiring the debugging image corresponding to each image sensor; extracting matching points in each debugging image; and determining the image mapping relationship of each image sensor based on the matching points in each debugging image.
In one embodiment, when the computer program is executed by a processor, the following steps are also implemented: extracting the overlapping area of the first image and the second image to obtain a first overlapping image in the first image and a second overlapping image in the second image; performing fusion processing on the first overlapping image and the second overlapping image to obtain a fused image; replacing the first overlapping image and the second overlapping image with the fused image to obtain a new first image and a new second image; and performing image stitching processing on the new first image and the new second image.
In one embodiment, when the computer program is executed by a processor, the following step is also implemented: performing frequency-domain signal fusion on the first overlapping image and the second overlapping image through Fourier transform and inverse Fourier transform to obtain a fused image.
In one embodiment, when the computer program is executed by a processor, the following steps are also implemented: performing Fourier transforms on the first overlapping image and the second overlapping image to obtain a first signal sequence corresponding to the first overlapping image and a second signal sequence corresponding to the second overlapping image; performing signal fusion processing on the signals corresponding to the same frequency in the two sequences to obtain a fused signal sequence; and performing an inverse Fourier transform on the fused signal sequence to obtain the fused image of the first overlapping image and the second overlapping image.
In one embodiment, when the computer program is executed by a processor, the following step is also implemented: performing illumination compensation on the overlapping areas of the mask-processed images.
A person of ordinary skill in the art can understand that all or part of the processes of the above method embodiments can be completed by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed it may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In one embodiment, the solution of this application further includes a container crane on which the aforementioned computer program is stored; when the computer program is executed by a processor, the steps of the method described in the foregoing solution are implemented, wherein the device is the container crane. In one embodiment, it further includes an image sensor located on the transverse frame of the crane.
The above embodiments are for illustrating the present invention only and do not limit it. A person of ordinary skill in the relevant technical field can make various changes and modifications without departing from the scope of the present invention; therefore, all equivalent technical solutions also fall within the scope disclosed by the present invention.
Claims (13)
- A method for generating a panoramic bird's-eye view image, comprising: acquiring original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of image sensors are mounted; filtering each of the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and stitching the mask-processed images to obtain a panoramic bird's-eye view image of the device.
- The method of claim 1, further comprising: performing illumination compensation processing and/or frequency-division fusion processing on the panoramic bird's-eye view image of the device.
- The method of claim 1 or 2, further comprising: filling all or part of the vacant regions filtered out by the preset masks in the panoramic bird's-eye view image with a virtual image.
- The method of claim 1, further comprising performing de-distortion processing on the original images based on the distortion parameters of the image sensors.
- The method of claim 1, further comprising re-determining the preset masks each time the device changes its position, which comprises: acquiring debugging images through the image sensors, wherein one or more of the debugging images include a partial image of the device; and removing the image of the device from the debugging images and generating the preset mask corresponding to each image sensor.
- The method of claim 5, further comprising: determining the mapping relationship between the debugging images.
- The method of claim 6, wherein determining the mapping relationship between the debugging images comprises: recognizing markers included in the debugging images; and determining the mapping relationship between the debugging images based on the relationship between the recognition results and the preset meanings represented by the markers.
- The method of claim 6, wherein determining the mapping relationship between the debugging images comprises: extracting feature points from the debugging images and matching them; and determining the mapping relationship between the debugging images based on how the feature points of the debugging images match.
- The method of claim 8, wherein extracting and matching the feature points in the debugging images comprises: extracting and matching the feature points using the ORB, SURF, or SIFT algorithm.
- The method of claim 1, wherein the device is a container crane.
- An apparatus for generating a panoramic bird's-eye view image, comprising: an image acquisition module configured to acquire original images captured by a plurality of image sensors located at different positions, wherein one or more of the original images include a partial image of the device on which the plurality of sensors are mounted; a mask processing module configured to filter the original images based on the preset mask corresponding to each image sensor to obtain corresponding mask-processed images, wherein the mask-processed images do not include the image of the device; and a stitching processing module configured to stitch the mask-processed images to obtain a panoramic bird's-eye view image of the device.
- The apparatus of claim 11, further comprising an initialization module configured to acquire debugging images through the image sensors, wherein one or more of the debugging images include a partial image of the device, and to remove the image of the device from the debugging images and generate the preset mask corresponding to each image sensor.
- A container crane on which a computer program is stored, wherein when the computer program is executed by a processor the steps of the method of any one of claims 1 to 10 are implemented, and wherein the device is the container crane.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010186352.6 | 2020-03-17 | |
CN202010186352.6A (CN113411488A) | 2020-03-17 | 2020-03-17 | 全景图像生成方法、装置、存储介质及计算机设备
Publications (1)
Publication Number | Publication Date
---|---
WO2021185284A1 (zh) | 2021-09-23
Family
ID=77677107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2021/081330 (WO2021185284A1) | 一种全景鸟瞰图像生成方法 | 2020-03-17 | 2021-03-17
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113411488A (zh) |
WO (1) | WO2021185284A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114463170A (zh) * | 2021-12-24 | 2022-05-10 | 河北大学 | 一种针对agv应用的大场景图像拼接方法 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115018904B (zh) * | 2022-06-02 | 2023-10-20 | 如你所视(北京)科技有限公司 | 全景图像的掩膜生成方法和装置 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101588482A (zh) * | 2009-06-02 | 2009-11-25 | 石黎 | 一种车载虚拟全景电子系统 |
CN103332597A (zh) * | 2013-07-08 | 2013-10-02 | 宁波大榭招商国际码头有限公司 | 一种基于主动视觉技术的起重机远程操作用监控系统及其实现方法 |
CN104992408A (zh) * | 2015-06-30 | 2015-10-21 | 百度在线网络技术(北京)有限公司 | 用于用户终端的全景图像生成方法和装置 |
CN105600693A (zh) * | 2016-03-16 | 2016-05-25 | 成都科达光电技术有限责任公司 | 一种塔式起重机的监控系统 |
CN107430764A (zh) * | 2015-03-30 | 2017-12-01 | 克诺尔商用车制动系统有限公司 | 图像合成装置和用于合成图像的方法 |
KR20190089683A (ko) * | 2018-01-22 | 2019-07-31 | 네이버 주식회사 | 파노라마 이미지를 보정하는 방법 및 시스템 |
US20190281229A1 (en) * | 2017-02-28 | 2019-09-12 | JVC Kenwood Corporation | Bird's-eye view video generation device, bird's-eye view video generation method, and non-transitory storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10051180B1 (en) * | 2016-03-04 | 2018-08-14 | Scott Zhihao Chen | Method and system for removing an obstructing object in a panoramic image |
- 2020-03-17: CN application CN202010186352.6A filed (published as CN113411488A; status: pending)
- 2021-03-17: PCT application PCT/CN2021/081330 filed (published as WO2021185284A1; status: application filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114463170A (zh) * | 2021-12-24 | 2022-05-10 | 河北大学 | 一种针对agv应用的大场景图像拼接方法 |
CN114463170B (zh) * | 2021-12-24 | 2024-06-04 | 河北大学 | 一种针对agv应用的大场景图像拼接方法 |
Also Published As
Publication number | Publication date |
---|---|
CN113411488A (zh) | 2021-09-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21772374; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 21772374; Country of ref document: EP; Kind code of ref document: A1