WO2020042858A1 - Image stitching method and apparatus, vehicle-mounted image processing apparatus, electronic device, and storage medium


Info

Publication number
WO2020042858A1
WO2020042858A1 (PCT/CN2019/098546; CN2019098546W)
Authority
WO
WIPO (PCT)
Prior art keywords
block
information
image
input
input image
Prior art date
Application number
PCT/CN2019/098546
Other languages
English (en)
French (fr)
Inventor
匡鑫
毛宁元
李清正
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to SG11202101462WA
Priority to JP2021507821A (JP7164706B2)
Publication of WO2020042858A1
Priority to US17/172,267 (US20210174471A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements characterised by the type of camera system used
    • B60R2300/105 Details of viewing arrangements characterised by the type of camera system used, using multiple cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements characterised by the type of image processing
    • B60R2300/304 Details of viewing arrangements characterised by the type of image processing, using merged images, e.g. merging camera image with stored images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • The present disclosure relates to image processing technologies, and in particular to an image stitching method and device, a vehicle-mounted image processing device, an electronic device, and a storage medium.
  • A surround-view stitching system, as an important part of an Advanced Driver Assistance System (ADAS), can display the situation around the vehicle to the driver or to an intelligent decision system in real time.
  • An existing surround-view stitching system generally installs cameras in multiple directions around the vehicle body, collects images around the vehicle body through each camera, and fuses the collected images into a 360-degree panoramic view for display to the driver or an intelligent decision-making system.
  • The embodiments of the present disclosure provide a surround-view stitching technical solution.
  • According to one aspect of the embodiments of the present disclosure, an image stitching method is provided, including:
  • performing stitching processing on the input images after brightness compensation to obtain a stitched image.
  • According to another aspect of the embodiments of the present disclosure, an image stitching device is provided, including:
  • a first acquisition module configured to acquire brightness compensation information of each input image in a plurality of input images to be stitched, wherein the plurality of input images are correspondingly acquired by multiple cameras;
  • a compensation module configured to perform brightness compensation on each input image based on the brightness compensation information of that input image; and
  • a stitching module configured to stitch the brightness-compensated input images to obtain a stitched image.
  • According to another aspect of the embodiments of the present disclosure, a vehicle-mounted image processing apparatus is provided, including:
  • a first storage module configured to store a stitching information table and multiple input images correspondingly acquired by multiple cameras; and
  • a computing chip configured to: obtain, from the first storage module, brightness compensation information of each input image in the plurality of input images to be stitched; for each output block, obtain from the first storage module an input image block in the input image corresponding to that output block; perform brightness compensation on the input image block based on the brightness compensation information of the input image in which the input image block is located; obtain an output image block on the output block based on the brightness-compensated input image block; write the obtained output image blocks back to the first storage module in order; and, in response to all the output image blocks of one stitched image corresponding to the stitching information table being written back to the memory, obtain the stitched image.
  • According to another aspect of the embodiments of the present disclosure, an electronic device is provided, including:
  • a processor configured to execute a computer program stored in a memory, where, when the computer program is executed, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
  • According to another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
  • Based on the embodiments above, the brightness compensation information of each input image in the multiple input images to be stitched is obtained, brightness compensation is performed on each input image based on its brightness compensation information, and stitching processing is performed on the brightness-compensated input images to obtain a stitched image.
  • The embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, implementing global brightness compensation of the images to be stitched, which can reduce the likelihood that stitching marks appear in the stitched image, enhances the visual effect of the stitched-image display, and benefits various applications based on the stitched image.
  • For example, when the embodiments of the present disclosure are applied to a vehicle, the stitched image used to display the driving environment of the vehicle helps improve the accuracy of intelligent driving control.
  • FIG. 1 is a flowchart of an embodiment of an image stitching method of the present disclosure.
  • FIG. 2 is a diagram of an example region of a stitched image corresponding to six input images in the embodiment of the present disclosure.
  • FIG. 3 is a flowchart of another embodiment of an image stitching method according to the present disclosure.
  • FIG. 4 is a flowchart of another embodiment of an image stitching method according to the present disclosure.
  • FIG. 5 is a schematic structural diagram of an embodiment of an image stitching device of the present disclosure.
  • FIG. 6 is a schematic structural diagram of another embodiment of an image stitching device of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an embodiment of an in-vehicle image processing device of the present disclosure.
  • FIG. 8 is a schematic structural diagram of another embodiment of an in-vehicle image processing device of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to the present disclosure.
  • In the present disclosure, "a plurality" may refer to two or more, and "at least one" may refer to one, two, or more, part or all.
  • The term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" in the present disclosure generally indicates an "or" relationship between the objects before and after it.
  • Embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with many other general or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments including any of these systems.
  • Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
  • The computer system/server can be implemented in a distributed cloud computing environment, where tasks are performed by remote processing devices linked through a communication network, and program modules may be located on local or remote computing system storage media including storage devices.
  • FIG. 1 is a flowchart of an embodiment of an image stitching method of the present disclosure. As shown in FIG. 1, the image stitching method in this embodiment includes:
  • The multiple input images are correspondingly acquired by multiple cameras set on different parts of a device.
  • The deployment positions and orientations of the multiple cameras enable at least two adjacent images, or every two adjacent images, among the multiple input images collected by the cameras to have overlapping areas.
  • Here, adjacent images are images collected by cameras deployed on adjacent parts of the device, or images whose positions in the stitched image are adjacent.
  • The embodiments of the present disclosure place no other restrictions on the deployment positions and orientations of the multiple cameras: as long as at least two adjacent images, or every two adjacent images, among the multiple input images collected by the cameras have overlapping areas, the embodiments of the present disclosure can be adopted to stitch the multiple input images.
  • The device provided with the multiple cameras may be a vehicle, a robot, or another device that needs to obtain stitched images.
  • In some embodiments in which the device is a vehicle, the number of cameras may be, for example, 4 to 8, depending on the length and width of the vehicle and the shooting range of the cameras.
  • For example, the multiple cameras may include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the rear of the vehicle, at least one camera disposed in the middle area of one side of the vehicle body, and at least one camera disposed in the middle area of the other side of the vehicle body; or the multiple cameras may include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the rear of the vehicle, at least two cameras respectively disposed in the front-half area and the rear-half area of one side of the vehicle body, and at least two cameras respectively disposed in the front-half area and the rear-half area of the other side of the vehicle body.
  • For example, two cameras can be set at the head, at the tail, and on each side of the vehicle, eight cameras in total, to ensure that the shooting range covers the vehicle's surroundings; for longer vehicles, one camera can be set at the head and at the tail and two cameras on each side, six cameras in total; and for vehicles with small length and width, one camera can be set at the head, at the tail, and on each side, four cameras in total.
  • In some embodiments, the multiple cameras may include at least one fisheye camera and/or at least one non-fisheye camera.
  • A fisheye camera is an extreme wide-angle lens with a focal length of 16 mm or less and a viewing angle generally exceeding 90 degrees, even close or equal to 180 degrees.
  • A fisheye camera has the advantage of a wide viewing angle; by using fisheye cameras, a wide range of the scene can be covered while deploying fewer cameras.
  • the operation 102 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a first obtaining module executed by the processor.
  • Performing brightness compensation on an image means adjusting the pixel value of each pixel in the image, so as to adjust the visual brightness of the image.
  • the operation 104 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a compensation module executed by the processor.
  • the operation 106 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a splicing module executed by the processor.
  • The embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, implementing global brightness compensation of the images to be stitched, which can eliminate brightness differences among the input images caused by differences in ambient light and exposure among the cameras.
  • This reduces the likelihood that stitching marks appear in the stitched image, enhances the visual effect of the stitched-image display, and benefits various applications based on the stitched image. For example, when the embodiments of the present disclosure are applied to a vehicle, the stitched image used to display the driving environment of the vehicle helps improve the accuracy of intelligent driving control.
  • In some embodiments, operation 102 may include: determining the brightness compensation information of each input image in the multiple input images according to the overlapping areas in the multiple input images.
  • The brightness compensation information of each input image is used to make the brightness differences between the brightness-compensated input images fall within a preset brightness tolerance range.
  • For example, the brightness compensation information of each input image may be chosen to minimize, after brightness compensation, the sum of pixel-value differences of every two input images in each overlapping area, or to make that sum smaller than a preset error value.
  • Because the brightness compensation information of the input images is determined from the overlapping areas, its accuracy is high; making the brightness differences fall within a preset tolerance range, or making the sum of pixel-value differences of every two input images in each overlapping area minimal or smaller than the preset error value, can reduce or avoid stitching marks in the overlapping areas of the stitched image caused by differences in ambient light and camera exposure, which improves the visual effect.
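As a concrete (non-normative) illustration of how per-image brightness compensation could be determined from overlap statistics, the sketch below estimates one gain per image by least squares, minimizing the difference of compensated mean brightness over each overlapping area. The overlap-mean representation, the regularization toward gain 1, and all names are assumptions, not the patent's prescribed algorithm.

```python
import numpy as np

def estimate_gains(overlap_means, n_images, reg=0.05):
    """Estimate one brightness gain per image from overlap statistics.

    overlap_means: dict {(i, j): (mean_i, mean_j)} giving the mean pixel
    value of images i and j inside their shared overlapping area.
    Minimizes sum over overlaps of (g_i*mean_i - g_j*mean_j)^2, plus a
    small term keeping each gain near 1 (avoids the trivial solution g=0).
    """
    A, b = [], []
    for (i, j), (mi, mj) in overlap_means.items():
        row = np.zeros(n_images)
        row[i], row[j] = mi, -mj        # g_i*mean_i - g_j*mean_j -> 0
        A.append(row)
        b.append(0.0)
    for i in range(n_images):           # regularization rows: g_i ~ 1
        row = np.zeros(n_images)
        row[i] = reg
        A.append(row)
        b.append(reg)
    gains, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return gains
```

With two images whose overlap means are 100 and 120, the darker image receives a gain above 1 and the brighter one a gain below 1, so the compensated overlap brightness nearly matches.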
  • In some embodiments, operation 104 above may include:
  • for each output block, obtaining an input image block in the input image corresponding to that output block.
  • When the input image block corresponding to an output block belongs to the overlapping area of adjacent input images, the input image blocks in all the input images that correspond to the output block and share the overlapping area are obtained in this operation, so as to realize superposition and stitching of the input image blocks in the overlapping area.
  • Here, the output region refers to the region of the stitched image, and an output block is a block in the output region.
  • FIG. 2 is an exemplary diagram of the regions of a stitched image corresponding to six input images in an embodiment of the present disclosure.
  • The six input images in FIG. 2 correspond to the output areas (1)-(6) of the stitched image, respectively.
  • The six input images are collected by cameras distributed around the vehicle (for example, at the front, the rear, the left-middle front, the left-middle rear, the right-middle front, and the right-middle rear).
  • In some embodiments, the output block may be square, and the side length of the output block may be an N-th power of 2; for example, the size of the output block may be 32×32 pixels, to facilitate subsequent calculation.
  • The size unit of the input block, output block, input image block, and output image block may be the pixel, for convenience of reading and processing the image data.
  • In some embodiments, obtaining the input image block in the input image corresponding to an output block may be implemented in the following manner:
  • based on position information of the input image block, which may include, for example, the size and offset address of the input image block, the position of the input image block in the input image is determined;
  • then the input image block is read from the corresponding input image.
  • In some embodiments, each channel of each input image has one piece of brightness compensation information, and the brightness compensation information of all the input images on one channel forms a group of brightness compensation information for that channel.
  • Performing brightness compensation on an input image block based on the brightness compensation information of the input image in which it is located may include: for each channel of the input image block, multiplying the pixel value of each pixel of the input image block on that channel by the brightness compensation information of the input image on that channel.
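The per-channel multiplication described above can be sketched as follows; treating the brightness compensation information as one multiplicative gain per channel and clipping to the 8-bit range are assumptions for illustration.

```python
import numpy as np

def compensate_block(block, gains):
    """Multiply each channel of an input image block by the per-channel
    brightness compensation gain of its input image, clipping to 8 bits.

    block: H x W x C uint8 array; gains: length-C sequence of floats.
    """
    out = block.astype(np.float32) * np.asarray(gains, dtype=np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```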
  • In some embodiments, the method may further include: obtaining the output image block on each output block based on the brightness-compensated input image block.
  • Accordingly, performing stitching processing on the brightness-compensated input images to obtain a stitched image may include: stitching the output image blocks to obtain the stitched image.
  • Obtaining the output image block on an output block based on the brightness-compensated input image block may include:
  • using an interpolation algorithm, such as a bilinear interpolation algorithm, to interpolate the corresponding input image block to obtain the output image block on the output block.
  • The embodiments of the present disclosure do not limit the specific expression of the interpolation algorithm.
  • For example, the coordinates of the four associated pixels in the input image block corresponding to target pixel 1 in the output block can be determined as: (x(n), y(m)), (x(n+1), y(m)), (x(n), y(m+1)), and (x(n+1), y(m+1)).
  • The pixel value of target pixel 1 in the output image can then be calculated from the pixel values at these four coordinates in the input image block using a bilinear interpolation algorithm. Interpolating according to the pixel values of the corresponding pixels makes the pixel value of the target pixel more accurate and the output image more realistic.
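The four-neighbour bilinear interpolation described above can be sketched as follows (a minimal single-channel version; function and variable names are illustrative):

```python
import math

def bilinear_sample(img, x, y):
    """Bilinearly interpolate the value at fractional coordinates (x, y)
    from the four surrounding integer-coordinate pixels.

    img: 2D array/list indexed as img[row][col], i.e. img[y][x].
    """
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # Weighted sum of the four neighbours (x0,y0), (x1,y0), (x0,y1), (x1,y1).
    return (img[y0][x0] * (1 - fx) * (1 - fy)
            + img[y0][x1] * fx * (1 - fy)
            + img[y1][x0] * (1 - fx) * fy
            + img[y1][x1] * fx * fy)
```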
  • When the input image block in the input image corresponding to the output block belongs to an overlapping area,
  • interpolating the input image block to obtain the output image block may further include: interpolating each input image block corresponding to the output block separately, and superimposing all the interpolated input image blocks corresponding to the output block to obtain the output image block.
  • Superimposing all the interpolated input image blocks corresponding to an output block may include:
  • for each pixel, obtaining the average value, weighted value, or weighted average value of its pixel values at at least two different resolutions,
  • where the at least two different resolutions include the resolution of the interpolated input image block and at least one resolution lower than it.
  • For example, if the resolution of the interpolated input image block is 32×32,
  • the at least two different resolutions here can include 32×32, 16×16, 8×8, and 4×4; that is, the pixel values of each pixel at the 32×32, 16×16, 8×8, and 4×4 resolutions are obtained.
  • The average value of a pixel is the mean of its pixel values at the 32×32, 16×16, 8×8, and 4×4 resolutions; the weighted value is the sum of the products of the pixel values at the four resolutions and corresponding weighting coefficients A, B, C, and D; and the weighted average value is the weighted sum divided by the sum of the weighting coefficients.
  • Then, weighted superposition is performed according to the average value, weighted value, or weighted average value of the pixel values of each pixel.
  • The weighted superposition refers to multiplying the average value, weighted value, or weighted average value of each pixel by a corresponding preset weighting coefficient and then superimposing the results.
  • When superimposing all the interpolated input image blocks corresponding to an output block in an overlapping area, weighted superposition according to the average value, weighted value, or weighted average value of the pixel values of each pixel can eliminate the stitching seam in the overlapping area and optimize the display effect.
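One way the described multi-resolution superposition could be realized is sketched below for a 32×32 block: coarser versions of the block are built by box-filter downsampling, upsampled back, and averaged per pixel, and the blocks covering the same output block are then weighted and summed. The box filter, nearest upsampling, and equal per-resolution weights are assumptions; the patent does not fix these choices.

```python
import numpy as np

def multires_value(block, resolutions=(32, 16, 8, 4)):
    """For each pixel of a 32x32 block, average its values at several
    resolutions (coarser versions made by box-filter downsampling, then
    nearest upsampling back to the block size)."""
    n = block.shape[0]
    acc = np.zeros_like(block, dtype=np.float64)
    for r in resolutions:
        f = n // r                                   # downsampling factor
        coarse = block.reshape(r, f, r, f).mean(axis=(1, 3))
        acc += np.kron(coarse, np.ones((f, f)))      # back to n x n
    return acc / len(resolutions)

def blend_overlap(blocks, weights):
    """Weighted superposition of the interpolated input image blocks
    covering one output block in an overlapping area."""
    out = sum(w * multires_value(b) for b, w in zip(blocks, weights))
    return out / sum(weights)
```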
  • In some embodiments, the image stitching method of the present disclosure may further include:
  • obtaining fusion transformation information based on the transformation information, at each level, from the images collected by the multiple cameras to the stitched image.
  • The transformation information at each level may include, for example, lens de-distortion information, perspective transformation information, and registration information.
  • The lens de-distortion information includes fisheye de-distortion information for input images captured by fisheye cameras and/or de-distortion information for input images captured by non-fisheye cameras.
  • Input images captured by fisheye cameras or non-fisheye cameras can be de-distorted using the lens de-distortion information.
  • The fusion transformation information may be expressed as a fusion transformation function.
  • The fisheye de-distortion information, perspective transformation information, and registration information are described below, respectively.
  • The fisheye de-distortion information is used to perform fisheye de-distortion processing on an input image.
  • The fisheye distortion can be expressed as a function, called the fisheye distortion function f1.
  • The coordinates obtained by performing a fisheye distortion operation on a pixel in the input image based on the fisheye distortion function can be expressed in terms of f1, where k is a constant related to the degree of distortion of the camera and can be determined from the angle of the camera's wide-angle lens.
  • The coordinates obtained by performing the fisheye de-distortion operation on the above pixel based on the fisheye de-distortion function can then be obtained accordingly.
  • The perspective of the stitched image is generally a bird's-eye view, a front view, or a rear view.
  • The perspective transformation information can be used to transform the perspective of the fisheye-de-distorted image to the perspective required by the stitched image.
  • The perspective transformation information can be expressed as a perspective transformation function f2, and the perspective-transformed coordinates of the above pixel in the de-distorted image can be expressed in terms of f2.
  • The coordinate mapping relationship of a pixel in the perspective-transformed image can be obtained in the following manner:
  • the registration information may be expressed as a registration function f3 corresponding to an affine transformation matrix; based on the registration function, the coordinate mapping of the same pixel from a non-reference image to a reference image may be obtained.
  • The affine transformation is a two-dimensional coordinate transformation. It is assumed that the coordinates of a pixel before the affine transformation are (x2, y2) and the coordinates after the affine transformation are (x, y).
  • The coordinate form of the affine transformation is as follows:
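The specific matrix entries of the affine transformation are given in the original equations; as a sketch, a 2D affine mapping of (x2, y2) to (x, y) can be written with a 2×3 matrix applied to homogeneous coordinates (the example matrix below is purely illustrative):

```python
import numpy as np

def affine_map(pt, M):
    """Apply a 2x3 affine matrix M to a point pt = (x2, y2), giving the
    registered coordinates (x, y) = M @ [x2, y2, 1]^T."""
    x2, y2 = pt
    v = M @ np.array([x2, y2, 1.0])
    return float(v[0]), float(v[1])
```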
  • In some embodiments, the method may further include an operation of generating the stitching information table, which may be implemented, for example, in the following manner:
  • the relevant information of each output block is recorded in the stitching information table in a separate information-table block.
  • The relevant information of an output block may include, but is not limited to: the position information of the output block (such as the size of the output block and the offset address of the output block) and the position information of the input block corresponding to the output block.
  • The offset address of the input block is (x_min, y_min), where x_max is the maximum x coordinate among the coordinates of the pixels in the input block, x_min is the minimum x coordinate, y_max is the maximum y coordinate, and y_min is the minimum y coordinate.
  • Accordingly, obtaining the input image block in the input image corresponding to an output block may include: sequentially reading one information-table block from the stitching information table, and obtaining the input image block corresponding to the recorded output block based on the relevant information of the output block recorded in the read information-table block.
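A possible in-memory shape for one information-table block, following the fields described above, is sketched below; the field names and the exact layout are illustrative assumptions, since the patent only enumerates the recorded information.

```python
from dataclasses import dataclass

@dataclass
class InfoTableBlock:
    """One entry of the stitching information table, holding the relevant
    information of a single output block (field names are illustrative)."""
    out_offset: tuple   # (x, y) offset address of the output block
    out_size: tuple     # (w, h) of the output block, e.g. (32, 32)
    camera_id: int      # which input image the input block comes from
    in_offset: tuple    # (x_min, y_min) offset address of the input block
    in_size: tuple      # (x_max - x_min, y_max - y_min)

def input_block_geometry(xs, ys):
    """Derive the input block's offset and size from the x/y coordinates of
    the input pixels it must cover (the x_min/x_max, y_min/y_max above)."""
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return (x_min, y_min), (x_max - x_min, y_max - y_min)
```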
  • The lens de-distortion information, the perspective transformation information, and the registration information can be fused into one piece of fusion transformation information, so that the correspondence between the pixel coordinates of the input image and those of the stitched image can be calculated directly: the de-distortion operation, perspective transformation operation, and registration operation of the input image are realized in one operation, which simplifies the calculation process and improves the processing speed and efficiency.
  • In addition, the coordinates of each pixel can be quantized to facilitate reading by the computing chip.
  • For example, the x and y coordinates of a pixel can each be quantized into an 8-bit integer part and a 4-bit fractional part; this size can still represent a fairly precise coordinate position.
  • For example, if the coordinates of a pixel in the input image block are (129.1234, 210.4321), the quantized coordinates can be expressed in binary as (10000001.0010, 11010010.0111).
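The 8.4 fixed-point quantization in the example above can be reproduced as follows (rounding to the nearest 1/16 and masking to 12 bits are assumptions about the exact hardware convention):

```python
def quantize_8_4(coord):
    """Quantize a coordinate into 8 integer bits and 4 fractional bits
    (fixed-point 8.4): round to the nearest 1/16 and keep 12 bits."""
    return round(coord * 16) & 0xFFF

def to_binary_string(q):
    """Format a 12-bit 8.4 fixed-point value as 'iiiiiiii.ffff'."""
    s = format(q, '012b')
    return s[:8] + '.' + s[8:]
```

For the pixel (129.1234, 210.4321) this yields exactly the binary pair quoted in the text.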
  • when the position and/or direction of a camera changes, the fusion transformation information may change, and the information in the stitching information table generated based on the fusion transformation information may also change. Therefore, in a further embodiment of the present disclosure, in response to a change in the position and/or direction of any one or more cameras in the multi-channel camera, the fusion transformation information is re-obtained and the stitching information table is re-generated.
  • the method may further include: obtaining the brightness compensation information of each captured image in the multiple captured images based on the overlapping areas of the multiple captured images acquired by the multiple cameras, and storing it in the stitching information table or in each information table block of the stitching information table.
  • the above-mentioned obtaining of the brightness compensation information of each input image in the multiple input images to be stitched may be implemented by: obtaining, from the stitching information table or the information table block, the brightness compensation information of the captured image collected by the same camera, and using it as the brightness compensation information of the corresponding input image.
  • the method may further include: when the light change in the environment where the multi-channel camera is located meets a predetermined condition, for example, when the light change is greater than a preset value, re-obtaining the brightness compensation information of each captured image in the multiple captured images; that is, re-executing the operation of obtaining the brightness compensation information of each captured image based on the overlapping areas of the multiple captured images acquired by the multiple cameras, and updating the brightness compensation information of each captured image in the stitching information table with the brightness compensation information obtained this time.
  • obtaining the brightness compensation information of each of the acquired images based on the overlapping areas of the acquired images acquired by the multiple cameras may include:
  • the brightness compensation information of each captured image in the multiple captured images is obtained.
  • Each color image has three channels of red, green, and blue (RGB).
  • brightness compensation information can be acquired separately for each channel of the acquired images, in a manner that minimizes, after brightness compensation, the sum of the differences in pixel values, in that channel, of every two acquired images in the overlapping areas. That is, in this embodiment, a set of brightness compensation information is obtained for each channel of the acquired images, namely the R channel, the G channel, and the B channel, and each set includes the brightness compensation information, in that channel, of each of the multiple acquired images. Based on this embodiment, three sets of brightness compensation information of the above-mentioned multiple acquired images, in the R channel, the G channel, and the B channel respectively, can be obtained.
  • a preset error function can be used to represent the sum of the differences in pixel values of every two captured images in the overlapping areas of the multiple captured images, and the brightness compensation information of each captured image can be obtained when the function value of the error function is the smallest.
  • the error function is a function of the brightness compensation information of the acquired images in the same overlapping area and the pixel value of at least one pixel in the overlapping area.
  • the brightness compensation information of each acquired image when the function value of the error function is the smallest can be obtained as follows: for each channel of the acquired images, obtain the brightness compensation information of each acquired image in that channel when the function value of the error function for that channel is the smallest.
  • the error function is a function of the brightness compensation information of the acquired images with the same overlapping area and the pixel value of at least one pixel in the overlapping area in the channel.
  • the error function on one channel can be expressed, in a form consistent with the weighted differences described below, as e = Σ (a_i·p_i − a_j·p_j)², summed over every two of the input images i and j that share an overlapping area, where a1, a2, a3, a4, a5, and a6 respectively indicate the brightness compensation information (also referred to as: the brightness compensation coefficients) of the six input images in the channel, and p1, p2, p3, p4, p5, and p6 respectively represent the average pixel values of the six input images in that channel (i.e., the R component, G component, or B component). When the function value of e is the smallest, the visual difference of the six input images in the channel is the smallest.
  • the embodiments of the present disclosure may also adopt other forms of the error function, and are not limited to adopting the form shown in formula (13).
  • the function value of the error function of a channel can be obtained based on the following methods:
  • the weighted difference between the pixel values of the two captured images in the overlapping area includes: the difference between the first product and the second product.
  • the first product includes: a product of the brightness compensation information of the first acquired image and a sum of the pixel values of at least one pixel point in the overlapping region of the first acquired image.
  • the second product includes: a product of the brightness compensation information of the second captured image and the sum of the pixel values of at least one pixel point in the overlapping region of the second captured image.
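As one hedged illustration of this minimization, the per-channel coefficients a_i can be found by linear least squares over the overlapping pairs, using the squared-difference form of the error function. The solver and the soft normalization keeping the mean gain at 1 (which excludes the trivial all-zero solution) are assumptions of this sketch, not the patent's prescribed method:

```python
import numpy as np

def brightness_gains(mean_vals, pairs):
    """Solve for per-image brightness compensation coefficients a_i that
    minimize the sum over overlapping image pairs (i, j) of
    (a_i * p_i - a_j * p_j)**2, where p_i is the mean pixel value of
    image i in the overlap. A soft constraint keeps the average gain at 1,
    excluding the trivial all-zero solution."""
    n = len(mean_vals)
    rows = []
    for i, j in pairs:
        row = np.zeros(n)
        row[i] = mean_vals[i]
        row[j] = -mean_vals[j]
        rows.append(row)
    rows.append(np.ones(n))          # normalization row: sum of gains = n
    rhs = np.zeros(len(rows))
    rhs[-1] = n
    gains, *_ = np.linalg.lstsq(np.array(rows), rhs, rcond=None)
    return gains
```

Running this once per channel (R, G, B) yields the three sets of compensation coefficients described above.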
  • the stitching information table can be read into the memory, and the multiple input images to be stitched, collected by the multi-channel camera in real time or according to a preset period, can also be read into the memory, so that the stitching information table and the input images can be read quickly during application.
  • the stitching information table only needs to be generated once and can then be looked up directly for image stitching; it only needs to be updated when the light changes and/or the position/direction of a camera changes. This reduces the time required for image stitching, and the advantages of low delay and large throughput improve the processing efficiency of stitched images, can meet the real-time requirements of smart car surround-view stitching, and improve the display frame rate and resolution of stitched videos.
  • the memory may be various types of memory such as DDR (Double Data Rate) memory.
  • FIG. 3 is a flowchart of another embodiment of an image stitching method according to the present disclosure. As shown in FIG. 3, the image stitching method in this embodiment includes:
  • the operation 202 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a first obtaining module executed by the processor.
  • if the input image block corresponding to the output block belongs to an overlapping region, the input image blocks in all the input images having the overlapping region that correspond to the output block are obtained.
  • the operation 204 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a second obtaining module executed by the processor.
  • the operation 206 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a compensation module executed by the processor.
  • for each channel of the output image block, the average value, weighted value, or weighted average value of the pixel values of each pixel at at least two different resolutions can also be obtained, and weighted superposition is performed according to the average value, weighted value, or weighted average value of the pixel values of each pixel point, to obtain the output image block.
  • the at least two different resolutions include: the resolution of the input image block after interpolation and at least one lower resolution that is lower than the resolution of the input image block after interpolation.
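A rough sketch of this multi-resolution weighted superposition follows. Approximating the lower resolutions with a box-filter downsample followed by nearest-neighbour upsample, and averaging equally over resolutions, are assumptions of this sketch; the disclosure does not fix a particular filter or weighting:

```python
import numpy as np

def multires_blend(blocks, weights, levels=2):
    """Weighted superposition of interpolated input blocks in which each
    pixel contributes the average of its values over `levels` resolutions.
    Lower resolutions are approximated by a box-filter downsample followed
    by nearest upsample; block sides must be divisible by 2**(levels-1)."""
    def at_lower_res(img, factor):
        h, w = img.shape
        small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
        return np.kron(small, np.ones((factor, factor)))  # back to full size

    out = np.zeros_like(blocks[0], dtype=float)
    for blk, w in zip(blocks, weights):
        pyramid = [blk.astype(float)]
        pyramid += [at_lower_res(blk.astype(float), 2 ** k) for k in range(1, levels)]
        out += w * np.mean(pyramid, axis=0)  # average over resolutions, then weight
    return out
```

In an overlap region, `weights` would typically sum to 1 across the contributing input image blocks.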
  • the operation 208 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a third obtaining module executed by the processor.
  • the operation 210 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a splicing module executed by the processor.
  • a block processing strategy is used to obtain each output image block separately.
  • a full pipeline can be used to accelerate the processing of input images with a small processing delay and a large throughput, which can meet the real-time requirements of video image stitching.
  • FIG. 4 is a flowchart of another embodiment of an image stitching method according to the present disclosure. This embodiment takes a pre-generated stitching information table as an example to further explain the image stitching method in the embodiment of the present disclosure. As shown in FIG. 4, the image stitching method in this embodiment includes:
  • the input image blocks, corresponding to the output block, in all the input images having the overlapping area are obtained from the memory and read into the computing chip.
  • the operation 302 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a second obtaining module executed by the processor.
  • the operation 304 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a compensation module executed by the processor.
  • the at least two different resolutions include: the resolution of the input image block after interpolation and at least one lower resolution that is lower than the resolution of the input image block after interpolation.
  • 314: acquire the coordinates of each pixel in the output block and the corresponding coordinates in the input image block, and interpolate the input image block to obtain an output image block.
  • the operations 306-316 may be executed by the processor by calling corresponding instructions stored in the memory, or may be executed by a third acquisition module executed by the processor.
  • a stitched image is obtained based on the stitching of all the output image blocks in the memory.
  • the operation 318 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a splicing module executed by the processor.
  • the computing chip may be, for example, a Field Programmable Gate Array (FPGA).
  • an information table block can be sequentially read from the stitching information table in the memory and first stored in the cache in the FPGA, and operations 304-314 are then performed accordingly on the data buffered in the FPGA.
  • a full pipeline can be used to accelerate the processing of images inside the FPGA.
  • the processing delay is small and the throughput is large, which can meet the real-time requirements of video image stitching.
  • the amount of data stored in the stitching information table is also large, and the cache in the FPGA is small. Reading the information table blocks and corresponding input image blocks from the memory to the cache and then processing them improves the parallel processing efficiency of the images.
  • the processing efficiency and the cache size of the FPGA can be considered to determine the size of the output block. In one of the alternative examples, the size of the output block is 32x32 pixels.
  • Line buffering refers to a first-in, first-out (FIFO) technique used to improve processing efficiency when processing images line by line. If the traditional line buffering method were used, a large number of input image lines would have to be read, because one line of the output image corresponds to many lines of the input image, and a large number of the pixels read would not be used, which inevitably results in low utilization of memory bandwidth and low processing efficiency.
  • the embodiment of the present disclosure proposes a block processing method.
  • a region of the stitched image is divided into blocks, and the corresponding input image and the stitching information table are also divided into blocks.
  • when image stitching is performed by the FPGA, the input image and the information table in the memory are divided into blocks for processing, which can reduce the amount of data the FPGA needs to buffer and improve the image stitching processing efficiency.
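The block division itself is straightforward to sketch; the helper below is illustrative, with the 32x32 default matching the example block size mentioned above and edge blocks clipped to the image boundary:

```python
def divide_into_blocks(width, height, block=32):
    """Enumerate output blocks as (x, y, w, h) tuples in raster order,
    using the 32x32 example size; edge blocks are clipped to the image."""
    return [(x, y, min(block, width - x), min(block, height - y))
            for y in range(0, height, block)
            for x in range(0, width, block)]
```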
  • the method may further include:
  • any of the image stitching methods provided by the embodiments of the present disclosure may be executed by any appropriate device having data processing capabilities, including but not limited to: a terminal device and a server.
  • any of the image stitching methods provided in the embodiments of the present disclosure may be executed by a processor.
  • the processor executes any of the image stitching methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. Details are not repeated below.
  • the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiment are performed; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 5 is a schematic structural diagram of an embodiment of an image stitching device of the present disclosure.
  • the image stitching device of this embodiment may be used to implement the foregoing image stitching method embodiments of the present disclosure.
  • the image stitching device of this embodiment includes a first acquisition module, a compensation module, and a stitching module. among them:
  • the first obtaining module is configured to obtain brightness compensation information of each input image in the multiple input images to be stitched. Among them, multiple input images are correspondingly acquired by multiple cameras.
  • multiple input images are correspondingly acquired by multiple cameras set on different parts of the device.
  • the deployment position and direction of the multiple cameras can make at least two adjacent images of the multiple input images collected by the multiple cameras have overlapping areas, or every two adjacent images have overlapping areas.
  • the device provided with the multiple cameras can be a vehicle, a robot, or another device that needs to obtain stitched images, such as another type of vehicle.
  • when the device provided with the multi-channel camera is a vehicle, the number of cameras in the multi-channel camera may be, for example, 4 to 8, depending on the length and width of the vehicle and the shooting range of the cameras.
  • the above-mentioned multi-channel camera may include: at least one camera disposed at the head position of the vehicle, at least one camera disposed at the rear position of the vehicle, at least one camera disposed in a middle area of one side of the vehicle body, and at least one camera disposed in a middle area of the other side of the vehicle body; or, the multi-channel camera includes: at least one camera disposed at the head position of the vehicle, at least one camera disposed at the rear position of the vehicle, at least two cameras respectively disposed in the front half region and the rear half region of one side of the vehicle body, and at least two cameras respectively disposed in the front half region and the rear half region of the other side of the vehicle body.
  • the multi-channel camera may include: at least one fish-eye camera, and / or, at least one non-fish-eye camera.
  • the compensation module is configured to perform brightness compensation on the input image based on the brightness compensation information of each input image.
  • a stitching module is used to stitch the input image after brightness compensation to obtain a stitched image.
  • the embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, implementing global brightness compensation for the images to be stitched. This can eliminate the differences in brightness among the multiple input images to be stitched that are caused by differences in light and exposure in the environments of the different cameras, and avoid visible stitching marks in the stitched image, which enhances the visual effect of the stitched image display and benefits various applications based on the stitched image. For example, when the embodiment of the present disclosure is applied to a vehicle, the stitched images used to display the driving environment of the vehicle help improve the accuracy of intelligent driving control.
  • the first obtaining module is configured to determine brightness compensation information of each input image in the plurality of input images according to an overlapping area in the plurality of input images.
  • the brightness compensation information of each input image is used to make the brightness difference between the input images after the brightness compensation fall within a preset brightness tolerance range.
  • the brightness compensation information of each input image is used to minimize the sum of pixel value differences of every two input images in each overlapping area after the brightness compensation, or less than a preset error value.
  • FIG. 6 is a schematic structural diagram of another embodiment of an image stitching device of the present disclosure. As shown in FIG. 6, compared with the embodiment shown in FIG. 5, this embodiment further includes a second obtaining module, configured to obtain, for each output block respectively, the input image blocks in the input image corresponding to the output block. Accordingly, in this embodiment, the compensation module is configured to perform brightness compensation on the input image block based on the brightness compensation information of the input image where the input image block is located.
  • when an input image block in an input image corresponding to an output block belongs to an overlapping region of adjacent input images, the second acquisition module is configured to acquire the input image blocks in all the input images having the overlapping region that correspond to the output block.
  • the second acquisition module is configured to: acquire the position information of the input image block in the input image corresponding to the coordinate information of the output block; and acquire the input image block from the corresponding input image based on the position information of the input image block.
  • the compensation module is configured to, for each channel of the input image block, multiply the pixel value of each pixel in the input image block by the brightness compensation information of the input image block for that channel.
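The per-channel multiplication performed by the compensation module can be sketched as follows; using numpy broadcasting is an implementation choice of this sketch, not something mandated by the disclosure:

```python
import numpy as np

def compensate_block(block, gains):
    """Multiply each channel of an (H, W, 3) RGB input image block by its
    per-channel brightness compensation coefficient (aR, aG, aB), then
    clip back to the 8-bit pixel range."""
    out = block.astype(np.float64) * np.asarray(gains)  # broadcasts over channels
    return np.clip(out, 0, 255).astype(np.uint8)
```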
  • the image stitching device of the present disclosure may further include a third obtaining module configured to obtain an output image block on the output block based on the input image block after the luminance compensation.
  • the stitching module is configured to stitch each output image block to obtain a stitched image.
  • the third acquisition module is configured to interpolate the input image block based on the coordinates of each pixel in the output block and the coordinates in the corresponding input image block to obtain an output image block on the output block.
  • the third acquisition module is configured to, based respectively on the coordinates of each pixel in the output block and the coordinates in each corresponding input image block, interpolate each input image block corresponding to the output block, and superimpose all the interpolated input image blocks corresponding to the output block to obtain the output image block.
  • when the third acquisition module superimposes all the interpolated input image blocks corresponding to the output block, it is configured to: for each channel of each interpolated input image block, obtain the average value, weighted value, or weighted average value of the pixel values of each pixel at at least two different resolutions, wherein the at least two different resolutions include the resolution of the interpolated input image block and at least one lower resolution that is lower than the resolution of the interpolated input image block; and, for each channel of all the interpolated input image blocks corresponding to the output block, perform weighted superposition according to the average value, weighted value, or weighted average value of the pixel values of each pixel.
  • the image stitching apparatus of the present disclosure may further include a fourth acquisition module, configured to obtain, based on the fusion transformation information of the multiple captured images correspondingly acquired by the multiple cameras with respect to the stitched image, the coordinates of the pixel points in the input block of the captured image corresponding to the coordinates of each pixel in the output block.
  • a fifth acquisition module is configured to acquire position information of the input block and overlap attribute information used to indicate whether the input block belongs to an overlap region of any two captured images.
  • a generating module is configured to record the relevant information of each output block through an information table block in the stitching information table in accordance with the order of the output block; a storage module is used to store the stitching information table.
  • a second acquisition module is configured to sequentially read one information table block from the stitching information table, and obtain the input image block corresponding to the recorded output block based on the relevant information of the output block recorded in the read information table block.
  • the related information of the output block may include, but is not limited to: the position information of the output block, the overlapping attribute information of the input block corresponding to the output block, the identification of the input image to which the input block corresponding to the output block belongs, the coordinates of the pixel points in the input block corresponding to the coordinates of each pixel point in the output block, and the position information of the input block;
  • the image stitching device of the present disclosure may further include: a sixth acquisition module, configured to fuse the transformation information at each level of the multiple captured images, correspondingly acquired by the multiple cameras, with respect to the stitched image, to obtain the fusion transformation information.
  • the transformation information at each level may include, but is not limited to, lens de-distortion information, perspective transformation information, and registration information.
  • the lens de-distortion information includes fish-eye distortion information for an input image captured by a fish-eye camera, and / or de-distortion information for an input image captured by a non-fish-eye camera.
  • the image stitching device of the present application may further include: a control module, configured to, when the position and/or direction of any one or more cameras in the multi-channel camera changes, instruct the fourth acquisition module to obtain, based on the fusion transformation information of the multiple captured images correspondingly acquired by the multiple cameras with respect to the stitched image, the coordinates of the pixel points in the input block of the captured image corresponding to the coordinates of the pixel points in the output block; instruct the fifth acquisition module to obtain the position information of the input block and the overlapping attribute information used to indicate whether the input block belongs to the overlapping area of any two captured images; and instruct the generation module to record the relevant information of each output block through an information table block in the stitching information table according to the order of the output blocks.
  • the image stitching device of the present disclosure may further include: a reading module configured to read the stitching information table into the memory after recording the relevant information of all the output blocks in the stitching information table. Medium; and read multiple input images to be stitched collected by multiple cameras into memory.
  • the second obtaining module is configured to sequentially read one information table block from the stitching information table in the memory into the computing chip, and, based on the relevant information of the output block recorded in the read information table block, obtain the input image block corresponding to the recorded output block from the memory and read it into the computing chip; the computing chip includes the compensation module and the stitching module.
  • the stitching module is used to sequentially write the obtained output image blocks back to the memory; when all output image blocks based on a stitching image corresponding to the stitching information table are written back to the memory, a stitched image is obtained.
  • the image stitching device of the present disclosure may further include: a seventh acquisition module, configured to obtain the brightness compensation information of each captured image in the multiple captured images based on the overlapping areas of the multiple captured images acquired by the multiple cameras, and store it in the stitching information table or in each information table block of the stitching information table.
  • the first obtaining module is configured to obtain the brightness compensation information of the collected image collected by the same camera from the stitching information table or the information table block, respectively, as the brightness compensation information of the corresponding input image.
  • the control module may be further configured to, when detecting that the light change meets a predetermined condition, instruct the seventh acquisition module to re-acquire, based on the overlapping regions of the multiple acquired images acquired by the multiple cameras, the brightness compensation information of each of the multiple acquired images.
  • the seventh acquisition module is configured to acquire each of the plurality of acquired images based on a manner of minimizing a sum of differences in pixel values of every two acquired images in the overlapping area of the plurality of acquired images after the brightness compensation. Collect the brightness compensation information of the image.
  • the seventh acquisition module is configured to, for each channel of the acquired images, obtain the brightness compensation information of each acquired image in that channel in a manner that minimizes, after brightness compensation, the sum of the differences in pixel values, in that channel, of every two acquired images in the overlapping areas of the multiple acquired images.
  • the seventh acquisition module obtains, for one channel of the acquired images, the sum of the pixel value differences of every two acquired images in the overlapping areas of the multiple acquired images based on the following methods: for that channel, obtaining the sum of the absolute values of the weighted differences of the pixel values, in the overlapping area, of every two acquired images having the same overlapping area, or the sum of the squared values of the weighted differences of the pixel values, in the overlapping area, of every two acquired images having the same overlapping area.
  • the weighted difference between the pixel values of the two acquired images in the overlapping area includes: the difference between the first product and the second product; the first product includes: the brightness compensation information of the first acquired image overlaps with the first acquired image A product of the sum of the pixel values of at least one pixel point in the region, and the second product includes a second product of the brightness compensation information of the second captured image and the sum of the pixel values of at least one pixel point in the overlapping region of the second captured image.
  • the image stitching device of the present disclosure may further include: a display module for displaying the stitched image; and / or an intelligent driving module for performing intelligent driving control based on the stitched image.
  • FIG. 7 is a schematic structural diagram of an embodiment of an in-vehicle image processing device of the present disclosure.
  • the vehicle-mounted image processing apparatus of this embodiment may be used to implement the foregoing image stitching method embodiments of the present disclosure.
  • the vehicle-mounted image processing apparatus of this embodiment includes a first storage module and a computing chip. among them:
  • the first storage module is configured to store a stitching information table and multiple input images respectively acquired by corresponding cameras.
  • a computing chip, configured to: obtain, from the first storage module, the brightness compensation information of each input image in the multiple input images to be stitched; for each output block, obtain from the first storage module the input image block in the input image corresponding to the output block; perform brightness compensation on the input image block based on the brightness compensation information of the input image where the input image block is located; obtain the output image block on the output block based on the brightness-compensated input image block and sequentially write the obtained output image blocks back to the first storage module; and, in response to all the output image blocks of a stitched image corresponding to the stitching information table being written back to the first storage module, obtain the stitched image.
  • the stitching information table includes at least one information table block, and the information table block includes brightness compensation information of multiple input images and related information of each output block.
  • the related information of the output block includes: the position information of the output block, the overlapping attribute information of the input block corresponding to the output block, the identifier of the input image to which the input block corresponding to the output block belongs, the coordinates of the pixel points in the input block corresponding to the coordinates of each pixel point in the output block, and the position information of the input block.
  • the above-mentioned first memory module may include: a volatile memory module; the computing chip may include: a field programmable gate array FPGA.
  • the first storage module may be further configured to store the first application unit and the second application unit.
  • the first application unit is configured to: obtain, based on the fusion transformation information of the multiple captured images correspondingly acquired by the multiple cameras with respect to the stitched image, the coordinates of the pixel points in the input block of the captured image corresponding to the coordinates of the pixel points in the output block; obtain the position information of the input block and the overlapping attribute information used to indicate whether the input block belongs to the overlapping area of any two acquired images; and record the relevant information of each output block through an information table block in the stitching information table according to the order of the output blocks.
  • the second application unit is configured to obtain the brightness compensation information of each of the acquired images in the plurality of acquired images based on the overlapping areas of the acquired images acquired by the multiple cameras and store the information in the information table blocks of the stitching information table.
  • FIG. 8 is a schematic structural diagram of another embodiment of the vehicle-mounted image processing apparatus of the present disclosure. As shown in FIG. 8, compared with the embodiment shown in FIG. 7, the vehicle-mounted image processing apparatus of this embodiment may further include any one or more of the following modules:
  • a non-volatile memory module, used to store the operation support information of the computing chip;
  • an input interface, connecting the multiple cameras and the first storage module, used to write the multiple input images acquired by the multiple cameras into the first storage module;
  • a first output interface, connecting the first storage module and the display screen, used to output the stitched image in the first storage module to the display screen for display;
  • a second output interface, connecting the first storage module and the intelligent driving module, used to output the stitched image in the first storage module to the intelligent driving module, so that the intelligent driving module performs intelligent driving control based on the stitched image.
  • another electronic device provided by an embodiment of the present disclosure includes:
  • the processor is configured to execute a computer program stored in the memory, and when the computer program is executed, implement the image stitching method of any one of the foregoing embodiments of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an application embodiment of an electronic device according to the present disclosure.
  • the electronic device includes one or more processors, a communication unit, and the like.
  • the one or more processors are, for example, one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs).
  • the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage portion into a random access memory (RAM) .
  • the communication unit may include, but is not limited to, a network card.
  • the network card may include, but is not limited to, an IB (Infiniband) network card.
  • the processor may communicate with the read-only memory and/or the random access memory to execute executable instructions, is connected to the communication unit through a bus, and communicates with other target devices via the communication unit, thereby completing operations corresponding to any of the image stitching methods provided in the embodiments of the present disclosure, for example: obtaining brightness compensation information of each of multiple input images to be stitched;
  • the multiple input images are correspondingly acquired by multiple cameras disposed at different parts of the device; brightness compensation is performed on each input image based on its brightness compensation information; the brightness-compensated input images are stitched to obtain a stitched image.
  • various programs and data required for the operation of the device can be stored in the RAM.
  • the CPU, ROM, and RAM are connected to each other through a bus.
  • ROM is an optional module.
  • the RAM stores executable instructions, or executable instructions are written into the ROM at runtime; the executable instructions cause the processor to perform operations corresponding to any of the image stitching methods described above in the present disclosure.
  • Input / output (I / O) interfaces are also connected to the bus.
  • the communication unit can be integrated, or can be arranged as multiple sub-modules (for example, multiple IB network cards) on the bus link.
  • the following components are connected to the I/O interface: an input part including a keyboard, a mouse, and the like; an output part including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage part including a hard disk; and a communication part including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as required. Removable media, such as magnetic disks, optical discs, magneto-optical discs, and semiconductor memories, are installed on the drive as needed, so that a computer program read from them is installed into the storage section as needed.
  • FIG. 9 is only an optional implementation manner.
  • the number and types of the components in FIG. 9 may be selected, deleted, added, or replaced according to actual needs.
  • different functional components may also be implemented through separate or integrated arrangements.
  • the GPU and CPU can be arranged separately, or the GPU can be integrated on the CPU.
  • the communication unit can be arranged separately or integrated on the CPU or GPU, and so on.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program contains program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the image stitching method provided by any embodiment of the present disclosure.
  • the computer program may be downloaded and installed from a network through a communication section, and / or installed from a removable medium.
  • the computer program is executed by the CPU, the above functions defined in the image stitching method of the embodiment of the present disclosure are executed.
  • an embodiment of the present disclosure also provides a computer program including computer instructions.
  • the computer instructions are run in a processor of the device, the image stitching method of any of the foregoing embodiments of the present disclosure is implemented.
  • an embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the image stitching method of any one of the foregoing embodiments of the present disclosure is implemented.
  • the embodiments of the present disclosure can be used in a smart car driving scenario.
  • the embodiments of the present disclosure can be used to perform surround-view video stitching, meeting the requirements on stitching quality, real-time performance, and frame rate;
  • a stitched image can be displayed to the driver when the driver's line of sight is blocked, for example when entering a parking garage, on a crowded road, or when driving on a narrow road;
  • pedestrian detection and target detection algorithms can be run to automatically stop the vehicle or avoid a pedestrian or target in an emergency.
  • the methods and apparatuses and devices of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order described above unless specifically stated otherwise.
  • the present disclosure may also be implemented as programs recorded in a recording medium, which programs include machine-readable instructions for implementing the method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing a method according to the present disclosure.


Abstract

Embodiments of the present disclosure disclose an image stitching method and apparatus, a vehicle-mounted image processing apparatus, an electronic device, and a storage medium. The image stitching method includes: obtaining brightness compensation information of each of multiple input images to be stitched, where the multiple input images are correspondingly acquired by multiple cameras; performing brightness compensation on each input image based on its brightness compensation information; and stitching the brightness-compensated input images to obtain a stitched image. The embodiments of the present application can eliminate stitching seams in the stitched image caused by different exposures and lighting differences among the cameras, enhance the visual effect of the displayed stitched image, and benefit various applications based on the stitched image.

Description

Image stitching method and apparatus, vehicle-mounted image processing apparatus, electronic device, and storage medium
The present disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on August 29, 2018, with application number CN201810998634.9 and the invention title "Image stitching method and apparatus, vehicle-mounted image processing apparatus, electronic device, and storage medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to image processing technology, and in particular to an image stitching method and apparatus, a vehicle-mounted image processing apparatus, an electronic device, and a storage medium.
Background
As an important component of an Advanced Driver Assistance System (ADAS), a surround-view stitching system can display the situation around a vehicle to the driver or an intelligent decision system in real time. An existing surround-view stitching system generally installs one camera at each of multiple positions around the vehicle body, captures images around the vehicle body through the cameras, and fuses the captured images into a 360-degree panorama displayed to the driver or the intelligent decision system.
Summary
Embodiments of the present disclosure provide a surround-view stitching technical solution.
According to one aspect of the embodiments of the present disclosure, an image stitching method is provided, including:
obtaining brightness compensation information of each of multiple input images to be stitched, where the multiple input images are correspondingly acquired by multiple cameras disposed at different parts of a device;
performing brightness compensation on each input image based on its brightness compensation information;
stitching the brightness-compensated input images to obtain a stitched image.
According to another aspect of the embodiments of the present disclosure, an image stitching apparatus is provided, including:
a first obtaining module, configured to obtain brightness compensation information of each of multiple input images to be stitched, where the multiple input images are correspondingly acquired by multiple cameras;
a compensation module, configured to perform brightness compensation on each input image based on its brightness compensation information;
a stitching module, configured to stitch the brightness-compensated input images to obtain a stitched image.
According to yet another aspect of the embodiments of the present disclosure, a vehicle-mounted image processing apparatus is provided, including:
a first storage module, configured to store a stitching information table and multiple input images correspondingly acquired by multiple cameras;
a computing chip, configured to: obtain from the first storage module brightness compensation information of each of the multiple input images to be stitched; for each output block, obtain from the first storage module the input image block in the input image corresponding to the output block; perform brightness compensation on the input image block based on the brightness compensation information of the input image to which it belongs; obtain the output image block of the output block from the brightness-compensated input image block and write the obtained output image blocks back to the first storage module in sequence; and, in response to all output image blocks of a stitched image corresponding to the stitching information table being written back to the memory, obtain the stitched image.
According to still another aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a memory, configured to store a computer program;
a processor, configured to execute the computer program stored in the memory; when the computer program is executed, the method of any of the above embodiments of the present disclosure is implemented.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the method of any of the above embodiments of the present disclosure is implemented.
Based on the image stitching method and apparatus, vehicle-mounted image processing apparatus, electronic device, and storage medium provided by the above embodiments of the present disclosure, when multiple input images correspondingly acquired by multiple cameras are stitched, the brightness compensation information of each of the multiple input images to be stitched is obtained, brightness compensation is performed on each input image based on its brightness compensation information, and the brightness-compensated input images are stitched to obtain a stitched image. The embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, realizing global brightness compensation of the images to be stitched; this can eliminate stitching seams in the stitched image caused by brightness differences among the input images due to different lighting conditions and exposures of the cameras, enhances the visual effect of the displayed stitched image, and benefits various applications based on the stitched image. For example, when the embodiments of the present disclosure are applied to a vehicle, the resulting stitched image showing the driving environment of the vehicle helps improve the accuracy of intelligent driving control.
The technical solutions of the present disclosure are further described in detail below through the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which form a part of the specification, describe embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
With reference to the accompanying drawings, the present disclosure can be understood more clearly from the following detailed description, in which:
FIG. 1 is a flowchart of one embodiment of the image stitching method of the present disclosure.
FIG. 2 is an example diagram of the regions of a stitched image corresponding to six input images in an embodiment of the present disclosure.
FIG. 3 is a flowchart of another embodiment of the image stitching method of the present disclosure.
FIG. 4 is a flowchart of yet another embodiment of the image stitching method of the present disclosure.
FIG. 5 is a schematic structural diagram of one embodiment of the image stitching apparatus of the present disclosure.
FIG. 6 is a schematic structural diagram of another embodiment of the image stitching apparatus of the present disclosure.
FIG. 7 is a schematic structural diagram of one embodiment of the vehicle-mounted image processing apparatus of the present disclosure.
FIG. 8 is a schematic structural diagram of another embodiment of the vehicle-mounted image processing apparatus of the present disclosure.
FIG. 9 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
It should also be understood that, in the embodiments of the present disclosure, "multiple" may refer to two or more, and "at least one" may refer to one, two or more, some, or all.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules; they neither carry any specific technical meaning nor indicate a necessary logical order among them.
It should also be understood that any component, datum, or structure mentioned in the embodiments of the present disclosure can generally be understood as one or more, unless explicitly limited or the context suggests otherwise.
It should also be understood that the description of the embodiments of the present disclosure emphasizes the differences among the embodiments; for their identical or similar aspects, the embodiments can refer to one another, and for brevity these are not repeated one by one.
Meanwhile, it should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is in fact merely illustrative and in no way serves as any limitation on the present disclosure or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant fields may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
In addition, the term "and/or" in the present disclosure merely describes an association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" in the present disclosure generally indicates an "or" relationship between the associated objects.
The embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, target programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server can be implemented in a distributed cloud computing environment, where tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing-system storage media including storage devices.
FIG. 1 is a flowchart of one embodiment of the image stitching method of the present disclosure. As shown in FIG. 1, the image stitching method of this embodiment includes:
102: Obtain brightness compensation information of each of multiple input images to be stitched.
The multiple input images are correspondingly acquired by multiple cameras disposed at different parts of a device. The deployment positions and orientations of the cameras may be such that, among the multiple input images they acquire, at least two adjacent images have an overlapping area, or every two adjacent images have an overlapping area; for example, any two adjacent images have an overlapping area. Adjacent images are images captured by cameras deployed at adjacent parts of the device, or images whose positions in the stitched image are adjacent.
In the embodiments of the present disclosure, the deployment positions and orientations of the multiple cameras are not limited; as long as at least two adjacent images, or every two adjacent images, among the multiple input images acquired by the cameras have an overlapping area, the embodiments of the present disclosure can be used to stitch the multiple input images.
In some of these implementations, the device on which the multiple cameras are disposed may be a vehicle, a robot, or another device that needs to obtain stitched images, such as another means of transport. When the device is a vehicle, depending on the length and width of the vehicle and the shooting range of the cameras, the number of cameras may be 4 to 8.
Accordingly, in some of these implementations, the multiple cameras may include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the tail of the vehicle, at least one camera disposed in the middle area of one side of the vehicle body, and at least one camera disposed in the middle area of the other side of the vehicle body; or the multiple cameras include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the tail of the vehicle, at least two cameras respectively disposed in the front-half and rear-half areas of one side of the vehicle body, and at least two cameras respectively disposed in the front-half and rear-half areas of the other side of the vehicle body.
For example, in practical applications, for a vehicle with both a large length and a large width, two cameras may be disposed at each of the head, the tail, and each side of the vehicle, eight cameras in total, to ensure that the shooting range covers the surroundings of the vehicle; for a long vehicle, one camera may be disposed at each of the head and the tail and two cameras on each side, six cameras in total; for a vehicle with both a small length and a small width, one camera may be disposed at each of the head, the tail, and each side, four cameras in total.
In some of these implementations, the multiple cameras may include: at least one fisheye camera, and/or at least one non-fisheye camera.
A fisheye camera uses a lens with a focal length of 16 mm or shorter and a viewing angle that usually exceeds 90° and may approach or equal 180°; it is an extreme wide-angle lens. A fisheye camera has the advantage of a wide viewing angle, so scenes over a fairly wide range can be captured by deploying fewer cameras.
In an optional example, operation 102 may be performed by a processor invoking corresponding instructions stored in a memory, or by a first obtaining module run by the processor.
104: Perform brightness compensation on each input image based on its brightness compensation information.
In the embodiments of the present disclosure, performing brightness compensation on an image means adjusting the pixel values of the pixels in the image to adjust the visual effect of the image in terms of brightness.
In an optional example, operation 104 may be performed by a processor invoking corresponding instructions stored in a memory, or by a compensation module run by the processor.
106: Stitch the brightness-compensated input images to obtain a stitched image.
In an optional example, operation 106 may be performed by a processor invoking corresponding instructions stored in a memory, or by a stitching module run by the processor.
Based on the above embodiment, when multiple input images correspondingly acquired by multiple cameras are stitched, the brightness compensation information of each of the input images to be stitched is obtained, brightness compensation is performed on each input image based on its brightness compensation information, and the brightness-compensated input images are stitched to obtain a stitched image. The embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, realizing global brightness compensation of the images to be stitched; this can eliminate stitching seams in the stitched image caused by brightness differences among the input images due to different lighting conditions and exposures of the cameras, enhances the visual effect of the displayed stitched image, and benefits various applications based on the stitched image. For example, when the embodiments of the present disclosure are applied to a vehicle, the resulting stitched image showing the driving environment of the vehicle helps improve the accuracy of intelligent driving control.
In some of these implementations, operation 102 may include: determining the brightness compensation information of each of the multiple input images according to the overlapping areas among the multiple input images.
In some of these implementations, the brightness compensation information of each input image is used to make the brightness differences among the brightness-compensated input images fall within a preset brightness tolerance range.
Alternatively, in some of these implementations, the brightness compensation information of each input image is used to make the sum of the pixel-value differences of every two input images in each overlapping area, after brightness compensation, minimal or smaller than a preset error value.
Since the subject captured in an overlapping area is the same, brightness comparisons there are meaningful. In the embodiments of the present disclosure, determining the brightness compensation information of the input images according to the overlapping areas therefore has high accuracy; making the brightness differences among the compensated input images fall within a preset brightness tolerance range, or making the sum of the pixel-value differences of every two input images in each overlapping area minimal or smaller than a preset error value, can reduce or avoid stitching seams in the overlapping areas of the stitched image caused by differences in ambient light and camera exposure, improving the visual effect.
In some of these implementations, operation 104 may include:
for each output block in the output region, obtaining the input image block in the input image corresponding to the output block; if the input image block corresponding to an output block belongs to the overlapping area of adjacent input images, obtaining in this operation the input image blocks in all input images that share the overlapping area corresponding to that output block, so that the input image blocks of the overlapping area can be superposed and stitched;
performing brightness compensation on each input image block based on the brightness compensation information of the input image to which it belongs.
In the embodiments of the present disclosure, the output region refers to the output region of the stitched image, and an output block is one block in that region. FIG. 2 is an example diagram of the regions of a stitched image corresponding to six input images in an embodiment of the present disclosure. The six input images in FIG. 2 correspond to output regions (1)-(6) of the stitched image and are respectively captured by cameras around the vehicle (for example distributed at the front, the rear, the left front-middle, the left rear-middle, the right front-middle, and the right rear-middle of the vehicle).
In one optional example, the output blocks may be square, and the side length of an output block may be a power of 2; for example, in FIG. 2 the size of an output block is 32x32, which facilitates subsequent computation.
In the embodiments of the present disclosure, the sizes of input blocks, output blocks, input image blocks, and output image blocks may be measured in pixels, to facilitate reading and processing the image data.
In some optional examples, obtaining the input image block in the input image corresponding to an output block may be implemented as follows:
obtaining the position information of the input image block in the input image corresponding to the coordinate information of the output block, where the position information may include, for example, the size and offset address of the input image block, from which the position of the input image block in the input image can be determined;
obtaining the input image block from the corresponding input image based on the position information of the input image block.
Since an image has three channels, red, green, and blue (RGB), in some implementations of the present disclosure each channel of each input image has its own brightness compensation information; on each channel, the brightness compensation information of the multiple input images to be stitched forms one group of brightness compensation information for that channel. Accordingly, in these implementations, performing brightness compensation on an input image block based on the brightness compensation information of the input image to which it belongs may include: for each channel of the input image block, multiplying the pixel value, in that channel, of each pixel of the input image block by the brightness compensation information of the input image in that channel.
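The per-channel multiplication described above can be sketched in plain Python as follows (a minimal sketch; the function and variable names are illustrative, not from the disclosure):

```python
def compensate_block(block, gains):
    """Apply per-channel brightness compensation to an image block.

    block: nested list, block[row][col] = (r, g, b) pixel values.
    gains: (gain_r, gain_g, gain_b) brightness compensation
           coefficients of the input image the block belongs to.
    """
    out = []
    for row in block:
        out_row = []
        for pixel in row:
            # Multiply each channel value by that channel's gain and
            # clamp to the valid 8-bit range [0, 255].
            out_row.append(tuple(
                min(255, max(0, round(v * g)))
                for v, g in zip(pixel, gains)))
        out.append(out_row)
    return out
```

For example, a gain triple of (1.1, 1.0, 0.5) brightens the R channel, leaves G unchanged, and darkens B.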
In addition, in another embodiment of the present disclosure, after performing brightness compensation on an input image block based on the brightness compensation information of the input image to which it belongs, the method may further include: obtaining the output image block of the output block from the brightness-compensated input image block. Accordingly, in this embodiment, stitching the brightness-compensated input images to obtain a stitched image may include: stitching the output image blocks to obtain the stitched image.
In some of these implementations, obtaining the output image block of the output block from the brightness-compensated input image block may include:
interpolating the corresponding input image block through an interpolation algorithm (for example a bilinear interpolation algorithm), based on the coordinates of the pixels in the output block and the corresponding coordinates in the input image block, to obtain the output image block of the output block. The embodiments of the present disclosure do not limit the specific form of the interpolation algorithm.
For example, from the coordinates of the pixels in the output block and the corresponding coordinates in the input image block, the coordinates of the four associated pixels in the input image block corresponding to target pixel 1 in the output block can be determined as x(n)y(m), x(n+1)y(m), x(n)y(m+1), and x(n+1)y(m+1). The pixel value of target pixel 1 in the output image can then be computed from the pixel values at these four coordinates in the input image block using the bilinear interpolation algorithm. Interpolating from the pixel values of the corresponding pixels makes the pixel value of the target pixel more accurate and the output image more realistic.
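The four-neighbor bilinear interpolation in the example above can be sketched as follows (single-channel sketch; the fractional coordinate (x, y) must have its 2x2 neighborhood inside the block):

```python
def bilinear(img, x, y):
    """Bilinearly interpolate the value at fractional coordinates (x, y)
    from the four associated pixels x(n)y(m), x(n+1)y(m),
    x(n)y(m+1), x(n+1)y(m+1).

    img: nested list img[row][col] of single-channel pixel values.
    """
    n, m = int(x), int(y)          # top-left associated pixel
    fx, fy = x - n, y - m          # fractional offsets within the cell
    p00 = img[m][n]
    p10 = img[m][n + 1]
    p01 = img[m + 1][n]
    p11 = img[m + 1][n + 1]
    # Interpolate horizontally on both rows, then vertically.
    top = p00 * (1 - fx) + p10 * fx
    bottom = p01 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bottom * fy
```

For instance, the value at the center of a 2x2 cell is the average of its four corner pixels.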
When the input image block in the input image corresponding to an output block belongs to an overlapping area, interpolating the input image block to obtain the output image block may further include: interpolating each input image block corresponding to the output block separately, and superposing all the interpolated input image blocks corresponding to the output block to obtain the output image block.
In some optional examples, superposing all the interpolated input image blocks corresponding to the output block may include:
for each channel of each interpolated input image block, obtaining the average value, weighted value, or weighted average value of the pixel value of each pixel at at least two different resolutions, where the at least two different resolutions include the resolution of the interpolated input image block and at least one lower resolution; for example, if the resolution of the interpolated input image block is 32x32, the at least two different resolutions may include 32x32, 16x16, 8x8, and 4x4, i.e., the average value, weighted value, or weighted average value of each pixel's value at the 32x32, 16x16, 8x8, and 4x4 resolutions is obtained. The average value of a pixel's values at the 32x32, 16x16, 8x8, and 4x4 resolutions is the average of the sum of that pixel's values at those resolutions; assuming the weighting coefficients for a pixel's values at the 32x32, 16x16, 8x8, and 4x4 resolutions are A, B, C, and D, the weighted value is the sum of the products of the pixel's values at those resolutions with the corresponding coefficients A, B, C, and D, and the weighted average value is the average of that sum of products;
for each channel of all the interpolated input image blocks corresponding to the output block, performing weighted superposition according to the average value, weighted value, or weighted average value of each pixel's value, where weighted superposition means multiplying each pixel's average value, weighted value, or weighted average value by the corresponding preset weighting coefficient and then summing.
Based on the above embodiment, for an overlapping area, when all the interpolated input image blocks corresponding to an output block are superposed, the superposition can be weighted according to the average value, weighted value, or weighted average value of each pixel's value, thereby eliminating stitching seams in the overlapping area and optimizing the display effect.
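The multi-resolution superposition described above can be roughly illustrated as follows (a simplified sketch with two resolution levels, plain averaging, and equal blending weights; the function names and the weights are assumptions, not from the disclosure, which uses up to four resolution levels such as 32x32 down to 4x4):

```python
def multires_value(block, x, y, levels=2):
    """Average of a pixel's value at the block's own resolution and at
    successively halved resolutions, where the value at a lower
    resolution is the mean of the down-sampled region covering the
    pixel (2x2 at the first halving, and so on)."""
    vals = []
    step = 1
    for _ in range(levels):
        x0, y0 = (x // step) * step, (y // step) * step
        region = [block[j][i]
                  for j in range(y0, y0 + step)
                  for i in range(x0, x0 + step)]
        vals.append(sum(region) / len(region))
        step *= 2
    return sum(vals) / len(vals)

def blend_overlap(block_a, block_b, w_a=0.5, w_b=0.5):
    """Weighted superposition of two interpolated input blocks that
    cover the same overlapping output block."""
    n = len(block_a)
    return [[w_a * multires_value(block_a, x, y) +
             w_b * multires_value(block_b, x, y)
             for x in range(n)] for y in range(n)]
```

Averaging across resolutions low-pass filters each contribution before blending, which is what smooths the seam.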
In another embodiment of the image stitching method of the present disclosure, the method may further include:
obtaining fused transformation information based on the transformation information at each stage from the multiple captured images of the multiple cameras to the stitched image, where the stage-by-stage transformation information may include, for example: lens de-distortion information, perspective transformation information, and registration information.
The lens de-distortion information includes: fisheye de-distortion information for input images captured by fisheye cameras, and/or de-distortion information for input images captured by non-fisheye cameras.
Since distortion may exist in input images captured by either fisheye or non-fisheye cameras, the lens de-distortion information can be used to de-distort input images captured by various fisheye or non-fisheye cameras.
In some of these optional manners, the fused transformation information may be expressed as a fused transformation function.
The fisheye de-distortion information, perspective transformation information, and registration information are introduced below:
1) Fisheye de-distortion information:
The fisheye de-distortion information is used to perform a fisheye de-distortion operation on an input image. It may be expressed as a function, called the fisheye de-distortion function; the coordinates obtained after performing the fisheye de-distortion operation on a pixel in the input image based on the fisheye de-distortion function may be expressed as:
p(x1, y1) = f1(x0, y0)   Formula (1)
where f1 is the fisheye de-distortion function. Performing the fisheye de-distortion operation pixel by pixel on the input image according to Formula (1) yields the fisheye-de-distorted image.
Assume the coordinates of a pixel in the input image before the fisheye de-distortion operation are (x0, y0); the radius r is expressed as follows:
Figure PCTCN2019098546-appb-000001
First, the reverse magnification function M is obtained through the following Formula (3):
Figure PCTCN2019098546-appb-000002
where,
Figure PCTCN2019098546-appb-000003
where k is a constant related to the degree of distortion of the camera and can be determined based on the angle of the camera's wide-angle lens.
The coordinates obtained after performing the fisheye de-distortion operation on the above pixel based on the fisheye de-distortion function may be:
Figure PCTCN2019098546-appb-000004
2) Perspective transformation information:
The viewing angle of the stitched image is generally a bird's-eye view, a front view, or a rear view. Through the perspective transformation information, the fisheye-de-distorted image can be transformed to the viewing angle required by the stitched image. The perspective transformation information may be expressed as a perspective transformation function; the coordinates of the above pixel in the fisheye-de-distorted image after perspective transformation may be expressed as:
p(x2, y2) = f2(x1, y1)   Formula (6)
where f2 is the perspective transformation function. Likewise, mapping the fisheye-de-distorted image pixel by pixel according to the transformed coordinates yields the corresponding perspective-transformed image. In the embodiments of the present disclosure, the coordinate mapping of a pixel in the perspective-transformed image can be obtained as follows:
Assume the coordinates of the above pixel in the image before the perspective transformation are (x1, y1) and the three-dimensional coordinates after the transformation are (x2, y2, z2); then
Figure PCTCN2019098546-appb-000005
Figure PCTCN2019098546-appb-000006
Assume the coordinates of the above pixel in the stitched image are (x, y); then:
Figure PCTCN2019098546-appb-000007
The system of equations shown in Formula (9) above has 8 unknowns: a11, a12, a13, a21, a22, a23, a31, a32 (with a33 normalizable to 1, and x, y being the mapped coordinates). The values of the 8 unknowns can be obtained from 4 pairs of mappings of the coordinates of the same pixel from the image before the perspective transformation to the image after it.
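Solving for the 8 unknowns from 4 point correspondences can be sketched as follows (a hypothetical helper, not from the disclosure; it sets a33 = 1 and solves the resulting 8x8 linear system with plain Gaussian elimination):

```python
def solve_homography(src, dst):
    """Estimate a11..a32 (a33 = 1) of a projective mapping from
    4 point correspondences (x, y) -> (u, v).

    Each correspondence yields two linear equations derived from
    u = (a11 x + a12 y + a13) / (a31 x + a32 y + 1), and likewise for v.
    Returns the 3x3 matrix as a nested list.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    # Gaussian elimination with partial pivoting on the augmented matrix.
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (M[r][n] - sum(M[r][c] * a[c]
                              for c in range(r + 1, n))) / M[r][r]
    return [a[0:3], a[3:6], [a[6], a[7], 1.0]]
```

The 4 correspondences must be in general position (no three collinear), otherwise the system is singular.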
3) Registration information:
In the image stitching process, the perspective-transformed images that have overlapping areas need to be registered pairwise in position. When multiple input images are stitched, the perspective-transformed image corresponding to any one of the input images can be selected as the reference image, and the perspective-transformed images with overlapping areas are registered pairwise; afterwards, an image already registered to the reference image is selected in turn as the new reference image. When registering two images with an overlapping area, a preset feature extraction algorithm, for example the scale-invariant feature transform (SIFT) algorithm, can be used to extract feature points in the overlapping area of the two images; a preset matching algorithm, for example the random sample consensus (RANSAC) algorithm, is used to pair the extracted feature points of the two images (there are generally multiple pairs of feature points), and then the affine transformation matrix from the non-reference image to the reference image is computed from the coordinates of the paired points:
Figure PCTCN2019098546-appb-000008
In some embodiments of the present disclosure, the registration information may be expressed as a registration function; based on this function, the coordinate mapping of the same pixel from the non-reference image to the reference image can be obtained:
p(x, y) = f3(x2, y2)   Formula (10)
where f3 is the registration function corresponding to the affine transformation matrix. The affine transformation here is a two-dimensional coordinate transformation; assume the coordinates of a pixel before the affine transformation are (x2, y2) and the coordinates after the affine transformation are (x, y); the coordinate form of the affine transformation is expressed as follows:
Figure PCTCN2019098546-appb-000009
Figure PCTCN2019098546-appb-000010
Since the above fisheye de-distortion, perspective transformation, and registration (affine transformation) are all linear transformations, the embodiments of the present disclosure can fuse these three operations together, i.e., obtain the fused transformation function f4 of the three coordinate transformations. The coordinates of the above pixel after the fused transformation can then be expressed as: p(x, y) = f4(x0, y0). Based on this fused transformation function, the coordinates in the original input image corresponding to a pixel in the stitched image can be obtained.
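The fusion of the three transforms into a single f4 can be illustrated by simple function composition (the three toy stand-ins below are illustrative only; the real f1, f2, f3 come from the calibration steps described above):

```python
def compose(*fns):
    """Fuse a chain of coordinate transforms into one mapping,
    as f4 = f3 . f2 . f1 (de-distortion, then perspective
    transform, then registration)."""
    def fused(x, y):
        for f in fns:
            x, y = f(x, y)
        return x, y
    return fused

# Toy linear stand-ins for f1, f2, f3:
f1 = lambda x, y: (x + 1, y)        # "de-distortion"
f2 = lambda x, y: (2 * x, 2 * y)    # "perspective transform"
f3 = lambda x, y: (x, y + 3)        # "registration"
f4 = compose(f1, f2, f3)
```

Precomputing f4 once is what lets a stitched-image pixel be mapped back to the original input image in a single lookup.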
Yet another embodiment of the image stitching method of the present disclosure may further include an operation of generating the stitching information table, which may be implemented, for example, as follows:
based on the fused transformation information from the multiple captured images of the multiple cameras to the stitched image, obtaining the coordinates of the pixels in the input blocks of the captured images corresponding to the coordinates of the pixels in the output blocks;
obtaining the position information of each input block (for example its size and offset address) and the overlap attribute information indicating whether the input block belongs to the overlapping area of any two captured images;
in the order of the output blocks, recording the related information of each output block in a separate information-table block of the stitching information table. In some of these implementations, the related information of an output block may include, but is not limited to: the position information of the output block (for example the size and offset address of the output block), the overlap attribute information of the input block corresponding to the output block, the identifier of the input image to which that input block belongs, the coordinates of the pixels in the input block corresponding to the coordinates of the pixels in the output block, and the position information of the input block (for example the size and offset address of the input block).
The size of an input block is the difference between the maximum and minimum values of the pixel coordinates in the input block; its width w and height h can be expressed as w = x_max - x_min and h = y_max - y_min, and the offset address of the input block is (x_min, y_min), where x_max and x_min are the maximum and minimum x coordinates among the pixels in the input block, and y_max and y_min are the maximum and minimum y coordinates.
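The width/height/offset computation above can be sketched as follows (helper name is illustrative):

```python
def input_block_info(coords):
    """Compute an input block's size and offset address from the input
    pixel coordinates mapped from one output block: width and height
    are the max-min coordinate differences, and the offset address
    is (x_min, y_min)."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return w, h, (min(xs), min(ys))
```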
Accordingly, in this embodiment, obtaining the input image block in the input image corresponding to an output block may include: reading one information-table block in sequence from the stitching information table, and obtaining, based on the related information of the output block recorded in the read information-table block, the input image block corresponding to the recorded output block.
Based on the above embodiment, the lens de-distortion information, perspective transformation information, and registration information can be fused into one piece of fused transformation information, from which the correspondence between pixel coordinates of the input images and the stitched image can be computed directly; thus the de-distortion, perspective transformation, and registration of the input images are achieved in a single operation, simplifying computation and improving processing speed and efficiency.
In some of these implementations, the pixel coordinates may be quantized to facilitate reading by the computing chip; for example, quantizing the x and y coordinates of a pixel into an 8-bit integer part and a 4-bit fractional part both reduces the size of the coordinate representation and still expresses a fairly precise coordinate position. For example, if the coordinates of a pixel in an input image block are (129.1234, 210.4321), the quantized coordinates can be expressed as (10000001.0010, 11010010.0111).
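The 8-bit-integer / 4-bit-fraction quantization above can be sketched as follows (illustrative; the actual on-chip bit layout is not specified here):

```python
def quantize(v, int_bits=8, frac_bits=4):
    """Quantize a coordinate into an integer part plus a fractional
    part, returning the binary string notation used in the example
    above (e.g. '10000001.0010' for 129.1234)."""
    scale = 1 << frac_bits
    fixed = round(v * scale)                 # fixed-point integer
    int_part = fixed >> frac_bits
    frac_part = fixed & (scale - 1)
    return f"{int_part:0{int_bits}b}.{frac_part:0{frac_bits}b}"
```

With 4 fractional bits the quantization step is 1/16 pixel, which is why the representation stays compact yet fairly precise.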
When the position and/or orientation of any one or more of the multiple cameras changes, the fused transformation information may change, and the information in the stitching information table generated from it may change as well. Therefore, in a further embodiment of the present disclosure, in response to a change in the position and/or orientation of any one or more of the multiple cameras, the fused transformation information is re-obtained and the stitching information table is regenerated; that is, the following operations are performed again: obtaining the fused transformation information from the stage-by-stage transformation information from the multiple captured images to the stitched image; obtaining, based on the fused transformation information, the coordinates of the pixels in the input blocks of the captured images corresponding to the coordinates of the pixels in the output blocks; obtaining the position information of the input blocks and the overlap attribute information indicating whether an input block belongs to the overlapping area of any two captured images; and recording, in the order of the output blocks, the related information of each output block in a separate information-table block of the stitching information table.
In addition, yet another embodiment of the image stitching method of the present disclosure may further include: obtaining the brightness compensation information of each of the multiple captured images based on the overlapping areas of the captured images acquired by the multiple cameras, and storing it in the stitching information table or in the information-table blocks of the stitching information table.
Accordingly, in this embodiment, obtaining the brightness compensation information of each of the multiple input images to be stitched may be implemented as follows: obtaining, from the stitching information table or from its information-table blocks, the brightness compensation information of the captured image acquired by the same camera as the brightness compensation information of the corresponding input image.
In a further embodiment of the present disclosure, the method may further include: when the change of light in the environment of the multiple cameras satisfies a predetermined condition, for example when the change of light is greater than a preset value, re-obtaining the brightness compensation information of each of the multiple captured images, i.e., performing again the operation of obtaining the brightness compensation information of each captured image based on the overlapping areas of the captured images, and updating the brightness compensation information of each captured image in the stitching information table with the newly obtained values.
In some of these implementations, obtaining the brightness compensation information of each of the multiple captured images based on the overlapping areas of the captured images may include:
obtaining the brightness compensation information of each captured image in a manner that minimizes, after brightness compensation, the sum of the pixel-value differences of every two captured images in the overlapping areas of the multiple captured images.
Each color image has three channels, red, green, and blue (RGB). In some of these implementations, for each channel of the captured images, the brightness compensation information of each captured image in that channel can be obtained in a manner that minimizes, after brightness compensation, the sum of the pixel-value differences in that channel of every two captured images in the overlapping areas. That is, in this embodiment, one group of brightness compensation information is obtained for each channel of the captured images, for example the R, G, and B channels; each group includes the brightness compensation information of each of the multiple captured images in that channel. Based on this embodiment, three groups of brightness compensation information of the multiple captured images in the R, G, and B channels can be obtained.
For example, in one optional example, the sum of the pixel-value differences of every two captured images in the overlapping areas can be expressed by a preset error function, and the brightness compensation information of each captured image can be obtained as the values that minimize the function value of this error function. The error function is a function of the brightness compensation information of the captured images sharing the same overlapping area and the pixel values of at least one pixel in the overlapping area.
In some optional examples, the brightness compensation information of each captured image that minimizes the error function can be obtained as follows: for each channel of the captured images, obtaining the brightness compensation information of each captured image in that channel that minimizes the function value of the error function. In this embodiment, the error function is a function of the brightness compensation information of the captured images having the same overlapping area and the pixel values, in that channel, of at least one pixel in the overlapping area.
For example, in one optional example, for the six input images to be stitched shown in FIG. 2, the error function on one channel can be expressed as:
e(i) = (a1*p1 - a2*p2)^2 + (a1*p1 - a3*p3)^2 + (a2*p2 - a4*p4)^2 + (a3*p3 - a5*p5)^2 + (a4*p4 - a6*p6)^2 + (a5*p5 - a6*p6)^2   Formula (13)
where a1, a2, a3, a4, a5, a6 respectively denote the brightness compensation information (which may also be called brightness compensation coefficients) of the six input images in that channel, and p1, p2, p3, p4, p5, p6 respectively denote the average pixel values (i.e., R component, G component, or B component) of the six input images in that channel. When the function value of e(i) is minimal, the visual difference of the six input images in that channel is minimal. In addition, the embodiments of the present disclosure may also use error functions of other forms, not limited to the form shown in Formula (13).
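A minimal sketch of minimizing an error of the form of Formula (13) is given below (gradient descent with one gain pinned to 1.0; the disclosure does not specify the optimizer, and pinning one gain is an assumption needed to exclude the trivial all-zero solution):

```python
def fit_gains(pairs, means, fixed=0, steps=20000, lr=1e-6):
    """Minimize sum over overlapping pairs (i, j) of
    (a_i * p_i - a_j * p_j)^2 by simple gradient descent.

    pairs: list of (i, j) index pairs of images sharing an overlap.
    means: per-image average pixel value p_i in the overlap (one channel).
    fixed: index of the gain held at 1.0 as the brightness reference.
    """
    a = [1.0] * len(means)
    for _ in range(steps):
        grad = [0.0] * len(means)
        for i, j in pairs:
            d = a[i] * means[i] - a[j] * means[j]
            grad[i] += 2 * d * means[i]
            grad[j] -= 2 * d * means[j]
        for k in range(len(a)):
            if k != fixed:
                a[k] -= lr * grad[k]
    return a
```

For two images with overlap averages 100 and 50 and the first gain pinned, the second gain converges toward 2.0, which equalizes the compensated overlap brightness.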
The function value of the error function for one channel can be obtained as follows:
for one channel of the captured images, obtaining the sum of the absolute values of the weighted differences of the pixel values, in the overlapping area, of each pair of captured images having the same overlapping area; or the sum of the squares of those weighted differences.
The weighted difference of the pixel values of two captured images in an overlapping area includes the difference between a first product and a second product. The first product is the product of the brightness compensation information of the first captured image and the sum of the pixel values of at least one pixel in the overlapping area of the first captured image. The second product is the product of the brightness compensation information of the second captured image and the sum of the pixel values of at least one pixel in the overlapping area of the second captured image.
Based on the above embodiments of the present disclosure, after the related information of all output blocks has been recorded in the stitching information table, when image stitching is performed based on the stitching information table, the table can be read into the memory, and the multiple input images to be stitched, acquired by the multiple cameras in real time or at a preset period, can be read into the memory, so that the stitching information table and the input images can be read during application.
Since the stitching information table only needs to be generated once and can then be looked up directly for image stitching, it needs to be updated only when the light changes and/or the position/orientation of a camera changes; this reduces the time required for image stitching, offers low latency and high throughput, improves the processing efficiency of stitched images, can meet the real-time requirements of surround-view stitching for smart vehicles, and increases the display frame rate and resolution of the stitched video.
In one possible implementation, the memory may be any of various types of memory such as DDR (Double Data Rate) memory.
FIG. 3 is a flowchart of another embodiment of the image stitching method of the present disclosure. As shown in FIG. 3, the image stitching method of this embodiment includes:
202: Determine the brightness compensation information of each of the multiple input images to be stitched according to the overlapping areas among the multiple input images.
In an optional example, operation 202 may be performed by a processor invoking corresponding instructions stored in a memory, or by a first obtaining module run by the processor.
204: For each output block in the region corresponding to the stitched image, obtain the input image block in the input image corresponding to the output block.
If the input image block corresponding to an output block belongs to an overlapping area, the input image blocks in all input images sharing the overlapping area corresponding to that output block are obtained.
In an optional example, operation 204 may be performed by a processor invoking corresponding instructions stored in a memory, or by a second obtaining module run by the processor.
206: Perform brightness compensation on the input image block based on the brightness compensation information of the input image to which it belongs.
In an optional example, operation 206 may be performed by a processor invoking corresponding instructions stored in a memory, or by a compensation module run by the processor.
208: Obtain the output image block of the output block from the brightness-compensated input image block.
If the input image block in the input image corresponding to the output block belongs to an overlapping area, it is also possible, for each channel of the output image block, to obtain the average value, weighted value, or weighted average value of each pixel's value at at least two different resolutions, and to perform weighted superposition according to the average value, weighted value, or weighted average value of each pixel's value to obtain the output image block, where the at least two different resolutions include the resolution of the interpolated input image block and at least one lower resolution.
In an optional example, operation 208 may be performed by a processor invoking corresponding instructions stored in a memory, or by a third obtaining module run by the processor.
210: Stitch all output image blocks in the region corresponding to the stitched image to obtain the stitched image.
In an optional example, operation 210 may be performed by a processor invoking corresponding instructions stored in a memory, or by a stitching module run by the processor.
Based on this embodiment, a block-processing strategy is used to obtain the output image blocks separately, and the input images can be processed with full-pipeline acceleration; the processing latency is small and the throughput is large, which can meet the real-time requirements of video image stitching.
FIG. 4 is a flowchart of yet another embodiment of the image stitching method of the present disclosure. Taking a pre-generated stitching information table as an example, this embodiment further describes the image stitching method of the embodiments of the present disclosure. As shown in FIG. 4, the image stitching method of this embodiment includes:
302: Read one information-table block in sequence from the stitching information table in the memory into the computing chip; based on the related information of the output block recorded in the read information-table block, obtain from the memory the input image block corresponding to the recorded output block and read it into the computing chip.
Based on the related information of the output block recorded in the read information-table block, if the input image block in the input image corresponding to the output block belongs to an overlapping area, the input image blocks in all input images sharing the overlapping area corresponding to the output block are obtained from the memory and read into the computing chip.
In an optional example, operation 302 may be performed by a processor invoking corresponding instructions stored in a memory, or by a second obtaining module run by the processor.
304: For each channel of each input image block read into the computing chip, perform brightness compensation on the pixels of the input image block with the brightness compensation information of the input image in that channel, i.e., multiply the pixel value of each pixel in that channel.
In an optional example, operation 304 may be performed by a processor invoking corresponding instructions stored in a memory, or by a compensation module run by the processor.
306: Determine, according to the related information of the output block recorded in the information-table block read into the computing chip, whether the input image block in the input image corresponding to the output block belongs to an overlapping area.
If the input image block in the input image corresponding to the output block belongs to an overlapping area, perform operation 308; otherwise, perform operation 314.
308: For each input image block corresponding to the output block, obtain the coordinates of the pixels in the output block and the corresponding coordinates in the input image block, and interpolate the input image block.
310: For each channel of each interpolated input image block, obtain the average value, weighted value, or weighted average value of each pixel's value at at least two different resolutions.
The at least two different resolutions include the resolution of the interpolated input image block and at least one lower resolution lower than the resolution of the interpolated input image block.
312: For each channel of all the interpolated input image blocks corresponding to the output block, perform weighted superposition according to the average value, weighted value, or weighted average value of each pixel's value to obtain the output image block.
Then perform operation 316.
314: Obtain the coordinates of the pixels in the output block and the corresponding coordinates in the input image block, and interpolate the input image block to obtain the output image block.
316: Write the obtained output image blocks back to the memory in sequence.
In an optional example, operations 306-316 may be performed by a processor invoking corresponding instructions stored in a memory, or by a third obtaining module run by the processor.
318: In response to all output image blocks of a stitched-image region corresponding to the stitching information table being written back to the memory, obtain the stitched image by stitching all output image blocks in the memory.
In an optional example, operation 318 may be performed by a processor invoking corresponding instructions stored in a memory, or by a stitching module run by the processor.
In some of these implementations, the computing chip may be, for example, a field-programmable gate array (FPGA). When the computing chip is an FPGA, in operation 302 an information-table block can be read in sequence from the stitching information table in the memory and first stored in a cache in the FPGA, and in operations 304-314 the cached data in the FPGA are processed accordingly.
Based on the above embodiment, full-pipeline accelerated image processing can be used inside the FPGA; the processing latency is small and the throughput is large, meeting the real-time requirements of video image stitching.
Since the input images captured by the multiple cameras deployed on a vehicle are large and captured in real time, the amount of data stored in the stitching information table is also large, while the cache in the FPGA is small; having the FPGA read information-table blocks and the corresponding input image blocks from the memory into the cache according to a block-reading strategy and then process them improves the parallel processing efficiency of the images.
Because a small output-block region leads to low memory bandwidth utilization, while the internal cache capacity of the FPGA is limited so the output-block region cannot be too large, in the embodiments of the present disclosure the size of the output blocks can be determined by balancing efficiency against the FPGA cache size; in one optional example, the size of an output block is 32x32 pixels.
Since the coordinates of the pixels in the stitched image correspond to the coordinates of pixels in the original input images in a locally discrete way, one row of the output image does not lie in one row of the same input image captured by a camera. Line buffering is a first-in-first-out (FIFO) technique used to improve processing efficiency when processing an image row by row; if the traditional line-buffering approach were used, a large number of input image rows would have to be read in, because one row of the output image corresponds to many rows of the input images and a large fraction of those pixels are never used, inevitably leading to low memory bandwidth utilization and low processing efficiency. The embodiments of the present disclosure propose block-based processing: the region of the stitched image is first divided into blocks, and the corresponding input images and stitching information table are also divided into blocks. During image stitching, the FPGA progressively reads the input image blocks and information-table blocks from the memory and processes them, which saves FPGA cache capacity and improves the efficiency of the stitching process.
In addition, based on the above embodiments of the present disclosure, after the stitched image is obtained, the method may further include:
displaying the stitched image, or performing collision warning and/or driving control based on the stitched image.
Any of the image stitching methods provided by the embodiments of the present disclosure may be performed by any appropriate device with data processing capability, including but not limited to a terminal device and a server; or by a processor, which performs any of the image stitching methods mentioned in the embodiments of the present disclosure by invoking corresponding instructions stored in a memory. This will not be repeated below.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk, or an optical disc.
FIG. 5 is a schematic structural diagram of one embodiment of the image stitching apparatus of the present disclosure. The image stitching apparatus of this embodiment can be used to implement the above image stitching method embodiments of the present disclosure. As shown in FIG. 5, the image stitching apparatus of this embodiment includes a first obtaining module, a compensation module, and a stitching module, where:
the first obtaining module is configured to obtain brightness compensation information of each of multiple input images to be stitched, the multiple input images being correspondingly acquired by multiple cameras.
The multiple input images are correspondingly acquired by multiple cameras disposed at different parts of a device. The deployment positions and orientations of the cameras may be such that, among the multiple input images they acquire, at least two adjacent images have an overlapping area, or every two adjacent images have an overlapping area.
In some of these implementations, the device on which the multiple cameras are disposed may be a vehicle, a robot, or another device that needs to obtain stitched images, such as another means of transport. When the device is a vehicle, depending on the length and width of the vehicle and the shooting range of the cameras, the number of cameras may be 4 to 8.
Accordingly, in some of these implementations, the multiple cameras may include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the tail of the vehicle, at least one camera disposed in the middle area of one side of the vehicle body, and at least one camera disposed in the middle area of the other side of the vehicle body; or the multiple cameras include: at least one camera disposed at the head of the vehicle, at least one camera disposed at the tail of the vehicle, at least two cameras respectively disposed in the front-half and rear-half areas of one side of the vehicle body, and at least two cameras respectively disposed in the front-half and rear-half areas of the other side of the vehicle body.
In some of these implementations, the multiple cameras may include: at least one fisheye camera, and/or at least one non-fisheye camera.
The compensation module is configured to perform brightness compensation on each input image based on its brightness compensation information.
The stitching module is configured to stitch the brightness-compensated input images to obtain a stitched image.
Based on the above embodiment, when multiple input images correspondingly acquired by multiple cameras are stitched, the brightness compensation information of each input image is obtained, brightness compensation is performed on each input image based on its brightness compensation information, and the brightness-compensated input images are stitched to obtain a stitched image. The embodiments of the present disclosure perform brightness compensation on the multiple input images to be stitched, realizing global brightness compensation; this can eliminate stitching seams caused by brightness differences among the input images due to different lighting conditions and exposures of the cameras, enhances the visual effect of the displayed stitched image, and benefits various applications based on the stitched image; for example, when applied to a vehicle, the resulting stitched image showing the driving environment helps improve the accuracy of intelligent driving control.
In some of these implementations, the first obtaining module is configured to determine the brightness compensation information of each of the multiple input images according to the overlapping areas among the multiple input images.
The brightness compensation information of each input image is used to make the brightness differences among the brightness-compensated input images fall within a preset brightness tolerance range; or, the brightness compensation information of each input image is used to make the sum of the pixel-value differences of every two input images in each overlapping area, after brightness compensation, minimal or smaller than a preset error value.
FIG. 6 is a schematic structural diagram of another embodiment of the image stitching apparatus of the present disclosure. As shown in FIG. 6, compared with the embodiment shown in FIG. 5, this embodiment further includes: a second obtaining module, configured to obtain, for each output block, the input image block in the input image corresponding to the output block. Accordingly, in this embodiment the compensation module is configured to perform brightness compensation on the input image block based on the brightness compensation information of the input image to which it belongs.
In some of these implementations, when the input image block in the input image corresponding to an output block belongs to the overlapping area of adjacent input images, the second obtaining module is configured to obtain the input image blocks in all input images sharing the overlapping area corresponding to that output block.
In some of these implementations, the second obtaining module is configured to: obtain the position information of the input image block in the input image corresponding to the coordinate information of the output block; and obtain the input image block from the corresponding input image based on that position information.
In some of these implementations, the compensation module is configured to, for each channel of the input image block, multiply the pixel value, in that channel, of each pixel of the input image block by the brightness compensation information of the input image in that channel.
In addition, referring again to FIG. 6, yet another embodiment of the image stitching apparatus of the present disclosure may further include: a third obtaining module, configured to obtain the output image block of the output block from the brightness-compensated input image block. Accordingly, in this embodiment the stitching module is configured to stitch the output image blocks to obtain the stitched image.
In some of these implementations, the third obtaining module is configured to interpolate the input image block based on the coordinates of the pixels in the output block and the corresponding coordinates in the input image block, to obtain the output image block of the output block.
In some of these implementations, when the input image block corresponding to an output block belongs to the overlapping area of adjacent input images, the third obtaining module is configured to interpolate each input image block corresponding to the output block based on the coordinates of the pixels in the output block and the corresponding coordinates in each input image block, and to superpose all the interpolated input image blocks corresponding to the output block to obtain the output image block.
In one optional example, when superposing all the interpolated input image blocks corresponding to an output block, the third obtaining module is configured to: for each channel of each interpolated input image block, obtain the average value, weighted value, or weighted average value of each pixel's value at at least two different resolutions, where the at least two different resolutions include the resolution of the interpolated input image block and at least one lower resolution; and, for each channel of all the interpolated input image blocks corresponding to the output block, perform weighted superposition according to the average value, weighted value, or weighted average value of each pixel's value.
In addition, referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present disclosure may further include: a fourth obtaining module, configured to obtain, based on the fused transformation information from the multiple captured images of the multiple cameras to the stitched image, the coordinates of the pixels in the input blocks of the captured images corresponding to the coordinates of the pixels in the output blocks; a fifth obtaining module, configured to obtain the position information of the input blocks and the overlap attribute information indicating whether an input block belongs to the overlapping area of any two captured images; a generating module, configured to record, in the order of the output blocks, the related information of each output block in a separate information-table block of the stitching information table; and a storage module, configured to store the stitching information table. Accordingly, in this embodiment the second obtaining module is configured to read one information-table block in sequence from the stitching information table and, based on the related information of the output block recorded in the read information-table block, obtain the input image block corresponding to the recorded output block.
The related information of an output block may include, but is not limited to: the position information of the output block, the overlap attribute information of the input block corresponding to the output block, the identifier of the input image to which that input block belongs, the coordinates of the pixels in the input block corresponding to the coordinates of the pixels in the output block, and the position information of the input block.
In addition, referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present disclosure may further include: a sixth obtaining module, configured to obtain the fused transformation information based on the transformation information at each stage from the multiple captured images of the multiple cameras to the stitched image, where the stage-by-stage transformation information may include, but is not limited to: lens de-distortion information, perspective transformation information, and registration information.
The lens de-distortion information includes: fisheye de-distortion information for input images captured by fisheye cameras, and/or de-distortion information for input images captured by non-fisheye cameras.
Referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present application may further include: a control module, configured to, when the position and/or orientation of any one or more of the multiple cameras changes, instruct the fourth obtaining module to obtain, based on the fused transformation information from the multiple captured images to the stitched image, the coordinates of the pixels in the input blocks corresponding to the pixels in the output blocks; instruct the fifth obtaining module to obtain the position information of the input blocks and the overlap attribute information indicating whether an input block belongs to the overlapping area of any two captured images; and instruct the generating module to record, in the order of the output blocks, the related information of each output block in a separate information-table block of the stitching information table.
Referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present disclosure may further include: a reading module, configured to, after the related information of all output blocks has been recorded in the stitching information table, read the stitching information table into the memory, and read the multiple input images to be stitched acquired by the multiple cameras into the memory. Accordingly, in this embodiment the second obtaining module is configured to read one information-table block in sequence from the stitching information table in the memory into the computing chip and, based on the related information of the output block recorded in the read information-table block, obtain the input image block corresponding to the recorded output block from the memory and read it into the computing chip; the computing chip includes the compensation module and the stitching module. The stitching module is configured to write the obtained output image blocks back to the memory in sequence and, when all output image blocks of a stitched image corresponding to the stitching information table have been written back to the memory, obtain the stitched image.
Referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present disclosure may further include: a seventh obtaining module, configured to obtain, based on the overlapping areas of the multiple captured images acquired by the multiple cameras, the brightness compensation information of each captured image and store it in the stitching information table or in the information-table blocks of the stitching information table. Accordingly, in this embodiment the first obtaining module is configured to obtain, from the stitching information table or from its information-table blocks, the brightness compensation information of the captured image acquired by the same camera as the brightness compensation information of the corresponding input image.
In addition, in a further embodiment, the control module may also be configured to, upon detecting that the change of light satisfies a predetermined condition, instruct the seventh obtaining module to perform the operation of obtaining the brightness compensation information of each captured image based on the overlapping areas of the multiple captured images, and to update the brightness compensation information of each captured image in the stitching information table with the newly obtained values.
In some of these implementations, the seventh obtaining module is configured to obtain the brightness compensation information of each captured image in a manner that minimizes, after brightness compensation, the sum of the pixel-value differences of every two captured images in the overlapping areas of the multiple captured images.
In some of these implementations, the seventh obtaining module is configured to, for each channel of the captured images, obtain the brightness compensation information of each captured image in that channel in a manner that minimizes, after brightness compensation, the sum of the pixel-value differences in that channel of every two captured images in the overlapping areas.
In some of these implementations, the seventh obtaining module obtains, for one channel of the captured images, the sum of the pixel-value differences in that channel of every two captured images in the overlapping areas as follows: for one channel, obtaining the sum of the absolute values, or the sum of the squares, of the weighted differences of the pixel values in the overlapping area of each pair of captured images sharing the same overlapping area. The weighted difference of the pixel values of two captured images in an overlapping area includes the difference between a first product and a second product; the first product is the product of the brightness compensation information of the first captured image and the sum of the pixel values of at least one pixel in the overlapping area of the first captured image, and the second product is the product of the brightness compensation information of the second captured image and the sum of the pixel values of at least one pixel in the overlapping area of the second captured image.
Referring again to FIG. 6, a further embodiment of the image stitching apparatus of the present disclosure may further include: a display module, configured to display the stitched image; and/or an intelligent driving module, configured to perform intelligent driving control based on the stitched image.
FIG. 7 is a schematic structural diagram of one embodiment of the vehicle-mounted image processing apparatus of the present disclosure. The vehicle-mounted image processing apparatus of this embodiment can be used to implement the above image stitching method embodiments of the present disclosure. As shown in FIG. 7, the vehicle-mounted image processing apparatus of this embodiment includes a first storage module and a computing chip, where:
the first storage module is configured to store the stitching information table and the multiple input images correspondingly acquired by the multiple cameras;
the computing chip is configured to: obtain from the first storage module the brightness compensation information of each of the multiple input images to be stitched; for each output block, obtain from the first storage module the input image block in the input image corresponding to the output block; perform brightness compensation on the input image block based on the brightness compensation information of the input image to which it belongs; obtain the output image block of the output block from the brightness-compensated input image block and write the obtained output image blocks back to the first storage module in sequence; and, in response to all output image blocks of a stitched image corresponding to the stitching information table being written back to the memory, obtain the stitched image.
In some of these implementations, the stitching information table includes at least one information-table block, which includes the brightness compensation information of the multiple input images and the related information of each output block; the related information of an output block includes: the position information of the output block, the overlap attribute information of the input block corresponding to the output block, the identifier of the input image to which that input block belongs, the coordinates of the pixels in the input block corresponding to the coordinates of the pixels in the output block, and the position information of the input block.
In some of these implementations, the first storage module may include: a volatile memory module; the computing chip may include: a field-programmable gate array (FPGA).
In some of these implementations, the first storage module may further be configured to store a first application unit and a second application unit. The first application unit is configured to: obtain, based on the fused transformation information from the multiple captured images of the multiple cameras to the stitched image, the coordinates of the pixels in the input blocks of the captured images corresponding to the coordinates of the pixels in the output blocks; obtain the position information of the input blocks and the overlap attribute information indicating whether an input block belongs to the overlapping area of any two captured images; and record, in the order of the output blocks, the related information of each output block in a separate information-table block of the stitching information table. The second application unit is configured to obtain, based on the overlapping areas of the multiple captured images acquired by the multiple cameras, the brightness compensation information of each captured image and store it in the information-table blocks of the stitching information table.
FIG. 8 is a schematic structural diagram of another embodiment of the vehicle-mounted image processing apparatus of the present disclosure. As shown in FIG. 8, compared with the embodiment shown in FIG. 7, the vehicle-mounted image processing apparatus of this embodiment may further include any one or more of the following modules:
a non-volatile memory module, configured to store the operation support information of the computing chip;
an input interface, connecting the multiple cameras and the first storage module, configured to write the multiple input images acquired by the multiple cameras into the first storage module;
a first output interface, connecting the first storage module and a display screen, configured to output the stitched image in the first storage module to the display screen for display;
a second output interface, connecting the first storage module and an intelligent driving module, configured to output the stitched image in the first storage module to the intelligent driving module, so that the intelligent driving module performs intelligent driving control based on the stitched image.
In addition, another electronic device provided by an embodiment of the present disclosure includes:
a memory, configured to store a computer program;
a processor, configured to execute the computer program stored in the memory; when the computer program is executed, the image stitching method of any of the above embodiments of the present disclosure is implemented.
FIG. 9 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. Referring to FIG. 9, it shows a schematic structural diagram of an electronic device suitable for implementing a terminal device or server of an embodiment of the present disclosure. As shown in FIG. 9, the electronic device includes one or more processors, a communication unit, and the like; the one or more processors are, for example, one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs). The processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage portion into a random access memory (RAM). The communication unit may include, but is not limited to, a network card, which may include, but is not limited to, an IB (Infiniband) network card. The processor may communicate with the read-only memory and/or the random access memory to execute the executable instructions, is connected to the communication unit through a bus, and communicates with other target devices via the communication unit, thereby completing operations corresponding to any of the image stitching methods provided by the embodiments of the present disclosure, for example: obtaining brightness compensation information of each of multiple input images to be stitched, where the multiple input images are correspondingly acquired by multiple cameras disposed at different parts of a device; performing brightness compensation on each input image based on its brightness compensation information; and stitching the brightness-compensated input images to obtain a stitched image.
In addition, the RAM may store various programs and data required for the operation of the apparatus. The CPU, the ROM, and the RAM are connected to one another through a bus. Where a RAM is present, the ROM is an optional module. The RAM stores executable instructions, or executable instructions are written into the ROM at runtime; the executable instructions cause the processor to perform the operations corresponding to any of the above image stitching methods of the present disclosure. An input/output (I/O) interface is also connected to the bus. The communication unit may be integrated, or may be arranged as multiple sub-modules (for example, multiple IB network cards) on the bus link.
The following components are connected to the I/O interface: an input part including a keyboard, a mouse, and the like; an output part including a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage part including a hard disk; and a communication part including a network interface card such as a LAN card or a modem. The communication part performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed. Removable media, such as magnetic disks, optical discs, magneto-optical discs, and semiconductor memories, are installed on the drive as needed, so that a computer program read from them is installed into the storage part as needed.
It should be noted that the architecture shown in FIG. 9 is only one optional implementation; in specific practice, the number and types of the components in FIG. 9 may be selected, deleted, added, or replaced according to actual needs; different functional components may also be arranged separately or integrated, for example the GPU and the CPU may be arranged separately or the GPU may be integrated on the CPU, and the communication unit may be arranged separately or integrated on the CPU or GPU, and so on. These alternative implementations all fall within the protection scope of the present disclosure.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program contains program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the image stitching method provided by any embodiment of the present disclosure. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part, and/or installed from a removable medium. When the computer program is executed by the CPU, the above functions defined in the image stitching method of the embodiments of the present disclosure are performed.
In addition, an embodiment of the present disclosure also provides a computer program including computer instructions; when the computer instructions are run in a processor of a device, the image stitching method of any of the above embodiments of the present disclosure is implemented.
In addition, an embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image stitching method of any of the above embodiments of the present disclosure is implemented.
The embodiments of the present disclosure can be used in the following scenarios:
The embodiments of the present disclosure can be used in smart-vehicle driving scenarios. In driver-assistance scenarios, the embodiments of the present disclosure can be used to perform surround-view video stitching, meeting the requirements on stitching quality, real-time performance, and frame rate.
When the driver needs to see the real-time situation around the vehicle, including blind spots, the embodiments of the present disclosure can display the stitched image to the driver when the driver's line of sight is blocked, for example when backing into a parking space, in crowded traffic, or when driving on a narrow road.
As part of a smart vehicle, the embodiments provide information for driving decisions. A smart vehicle or autonomous driving system needs to perceive the situation around the vehicle to react in real time. Using the embodiments of the present disclosure, pedestrian detection and target detection algorithms can be run to automatically stop the vehicle or avoid a pedestrian or target in an emergency.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments can refer to one another. Since the system embodiments basically correspond to the method embodiments, their description is relatively simple; for related parts, refer to the description of the method embodiments.
The methods, apparatuses, and devices of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only; the steps of the methods of the present disclosure are not limited to the order described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure; thus the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
The description of the present disclosure is given for the sake of illustration and description; it is not exhaustive and does not limit the present disclosure to the disclosed form. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described to better explain the principles and practical applications of the present disclosure, and to enable those of ordinary skill in the art to understand the present disclosure and design various embodiments with various modifications suited to particular uses.

Claims (62)

  1. 一种图像拼接方法,其特征在于,包括:
    获取待拼接的多张输入图像中各输入图像的亮度补偿信息;其中,所述多张输入图像分别由设置在设备的不同部位上的多路摄像头对应采集得到;
    分别基于各输入图像的亮度补偿信息对输入图像进行亮度补偿;
    对亮度补偿后的输入图像进行拼接处理,得到拼接图像。
  2. 根据权利要求1所述的方法,其特征在于,所述多张输入图像中至少两张相邻图像具有重叠区域。
  3. 根据权利要求1所述的方法,其特征在于,所述多张输入图像中每二张相邻图像均具有重叠区域。
  4. 根据权利要求1-3任一所述的方法,其特征在于,所述设备包括:车辆或机器人;和/或,所述多路摄像头的数量包括:4-8个。
  5. 根据权利要求4所述的方法,其特征在于,所述多路摄像头包括:至少一个设置在所述车辆的头部位置的摄像头,至少一个设置在所述车辆的尾部位置的摄像头,至少一个设置在所述车辆的车身一侧中部区域内的摄像头,和至少一个设置在所述车辆的车身另一侧中部区域内的摄像头;或者,
    所述多路摄像头包括:至少一个设置在所述车辆的头部位置的摄像头,至少一个设置在所述车辆的尾部位置的摄像头,至少两个分别设置在所述车辆的车身一侧前半部区域和后半部区域内的摄像头,和至少两个分别设置在所述车辆的车身另一侧前半部区域和后半部区域内的摄像头。
  6. 根据权利要求1-5任一所述的方法,其特征在于,所述多路摄像头包括:至少一鱼眼摄像头,和/或,至少一非鱼眼摄像头。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述获取待拼接的多张输入图像中各输入图像的亮度补偿信息,包括:
    根据所述多张输入图像中的重叠区域确定所述多张输入图像中各输入图像的亮度补偿信息。
  8. 根据权利要求7所述的方法,其特征在于,所述各输入图像的亮度补偿信息用于使经过亮度补偿后的各输入图像之间的亮度差异落入预先设定的亮度容差范围内。
  9. 根据权利要求7所述的方法,其特征在于,所述各输入图像的亮度补偿信息用于使经过亮度补偿后,各重叠区域中每二张输入图像的像素值差异之和最小或者小于预设误差值。
  10. 根据权利要求1-9任一所述的方法,其特征在于,所述分别基于各输入图像的亮度补偿信息对输入图像进行亮度补偿,包括:
    分别针对各输出分块,获取所述输出分块对应的输入图像中的输入图像块;
    基于所述输入图像块所在输入图像的亮度补偿信息对所述输入图像块进行亮度补偿。
  11. 根据权利要求10所述的方法,其特征在于,所述输出分块对应的输入图像块属于相邻输入图像的重叠区域时,所述获取所述输出分块对应的输入图像中的输入图像块,包括:
    获取所述输出分块对应的具有重叠区域的所有输入图像中的输入图像块。
  12. 根据权利要求10或11所述的方法,其特征在于,所述获取所述输出分块对应的输入图像中的输入图像块,包括:
    获取所述输出分块的坐标信息对应的输入图像中输入图像块的位置信息;
    基于所述输入图像块的位置信息,从所述对应的输入图像中获取所述输入图像块。
  13. 根据权利要求10-12任一所述的方法,其特征在于,所述基于所述输入图像块所在输入图像的亮度补偿信息对所述输入图像块进行亮度补偿,包括:
    分别针对所述输入图像块的每个通道,以所述输入图像在所述通道的亮度补偿信息对所述输入图像块中各像素在所述通道的像素值进行乘法计算处理。
  14. 根据权利要求10-13任一所述的方法，其特征在于，所述基于所述输入图像块所在输入图像的亮度补偿信息对所述输入图像块进行亮度补偿之后，还包括：基于亮度补偿后的输入图像块获取所述输出分块上的输出图像块；
    所述对亮度补偿后的输入图像进行拼接处理,得到拼接图像,包括:对各输出图像块进行拼接,得到所述拼接图像。
  15. 根据权利要求14所述的方法，其特征在于，所述基于亮度补偿后的输入图像块获取所述输出分块上的输出图像块，包括：
    基于所述输出分块中各像素点的坐标及对应的输入图像块中的坐标,对所述输入图像块进行插值, 得到所述输出分块上的输出图像块。
  16. 根据权利要求15所述的方法,其特征在于,所述输出分块对应的输入图像块属于相邻输入图像的重叠区域时,所述对所述输入图像块进行插值,得到所述输出图像块,包括:
    分别对所述输出分块对应的每个所述输入图像块进行插值,并对所述输出分块对应的所有插值后的输入图像块进行叠加,得到所述输出图像块。
  17. 根据权利要求16所述的方法,其特征在于,所述对所述输出分块对应的所有插值后的输入图像块进行叠加,包括:
    分别针对每个所述插值后的输入图像块的每个通道,获取每个像素点在至少两个不同的分辨率下像素值的平均值、或者加权值、或者加权平均值;其中,所述至少两个不同的分辨率包括:所述插值后的输入图像块的分辨率和至少一个低于所述插值后的输入图像块的分辨率的较低分辨率;
    分别针对所述输出分块对应的所有所述插值后的输入图像块的每个通道,按照每个像素点的所述像素值的平均值、或者加权值、或者加权平均值进行加权叠加。
  18. 根据权利要求14-17任一所述的方法,其特征在于,还包括:
    基于所述多路摄像头对应采集的多张采集图像到拼接图像的融合变换信息,获取输出分块中各像素点的坐标对应于采集图像的输入分块中像素点的坐标;
    获取所述输入分块的位置信息、用于表示所述输入分块是否属于任意两张采集图像的重叠区域的重叠属性信息;
    按照输出分块的顺序,在拼接信息表中分别通过一个信息表分块记录每个输出分块的相关信息;
    所述获取所述输出分块对应的输入图像中的输入图像块,包括:从所述拼接信息表中依序读取一个信息表分块,基于读取的信息表分块记录的输出分块的相关信息,获取所述记录的输出分块对应的输入图像块。
  19. 根据权利要求18所述的方法,其特征在于,所述输出分块的相关信息包括:输出分块的位置信息、输出分块对应的输入分块的重叠属性信息、输出分块对应的输入分块所属输入图像的标识、输出分块中各像素点的坐标对应的输入分块中像素点的坐标、输入分块的位置信息。
  20. 根据权利要求18或19所述的方法,其特征在于,还包括:
    基于多路摄像头对应采集的多张采集图像到拼接图像的各级变换信息,获取融合变换信息,所述各级变换信息包括:镜头去畸变信息、视角变换信息、配准信息。
  21. 根据权利要求18-20任一所述的方法,其特征在于,还包括:
    响应于所述多路摄像头中任意一个或多个摄像头的位置和/或方向发生变化，重新执行所述基于多路摄像头对应采集的多张采集图像到拼接图像的融合变换信息，获取输出分块中各像素点的坐标对应于采集图像的输入分块中像素点的坐标的操作、所述获取所述输入分块的位置信息、用于表示所述输入分块是否属于任意两张采集图像的重叠区域的重叠属性信息的操作、和所述按照输出分块的顺序，在拼接信息表中分别通过一个信息表分块记录每个输出分块的相关信息的操作。
  22. 根据权利要求18-21任一所述的方法,其特征在于,还包括:
    在拼接信息表中记录所有输出分块的相关信息之后,将所述拼接信息表读入内存中;
    将所述多路摄像头采集的所述待拼接的多张输入图像读入所述内存中;
    所述从所述拼接信息表中依序读取一个信息表分块,基于读取的信息表分块记录的输出分块的相关信息,获取所述记录的输出分块对应的输入图像块,包括:从所述内存中的所述拼接信息表中依序读取一个信息表分块并读入计算芯片中,基于读取的信息表分块记录的输出分块的相关信息,从所述内存中获取所述记录的输出分块对应的输入图像块并读入所述计算芯片中;
    所述对各输出图像块进行拼接,得到所述拼接图像,包括:
    将获取到的输出图像块依序写回所述内存;
    响应于基于所述拼接信息表对应的一个拼接图像的所有输出图像块写回所述内存中,得到所述拼接图像。
  23. 根据权利要求18-22任一所述的方法,其特征在于,还包括:
    基于多路摄像头采集得到的多张采集图像的重叠区域,获取所述多张采集图像中各采集图像的亮度补偿信息并存储在所述拼接信息表中、或者所述拼接信息表的各所述信息表分块中;
    所述获取待拼接的多张输入图像中各输入图像的亮度补偿信息,包括:
    分别从所述拼接信息表中或者所述信息表分块中获取同一摄像头采集的采集图像的亮度补偿信息作为相应输入图像的亮度补偿信息。
  24. 根据权利要求23所述的方法,其特征在于,还包括:
    响应于检测到光线变化满足预定条件，重新执行所述基于多路摄像头采集得到的多张采集图像的重叠区域，获取所述多张采集图像中各采集图像的亮度补偿信息的操作，并以本次获取的各采集图像的亮度补偿信息对所述拼接信息表中各采集图像的亮度补偿信息进行更新。
  25. 根据权利要求23或24所述的方法,其特征在于,所述基于多路摄像头采集得到的多张采集图像的重叠区域,获取所述多张采集图像中各采集图像的亮度补偿信息,包括:
    基于亮度补偿后,所述多张采集图像的重叠区域中每二张采集图像的像素值差异之和最小化的方式,获取所述多张采集图像中各采集图像的亮度补偿信息。
  26. 根据权利要求25所述的方法,其特征在于,所述基于亮度补偿后,所述多张采集图像的重叠区域中每二张采集图像的像素值差异之和最小化的方式,获取所述多张采集图像中各采集图像的亮度补偿信息,包括:
    分别针对采集图像的每个通道,基于亮度补偿后,所述多张采集图像的重叠区域中每二张采集图像在所述通道的像素值差异之和最小化的方式,获取所述多张采集图像中各采集图像在所述通道的亮度补偿信息。
  27. 根据权利要求26所述的方法,其特征在于,基于以下方式针对采集图像的一个通道,获取多张采集图像的重叠区域中每二张采集图像在所述通道的像素值差异之和:
    分别针对采集图像的一个通道,获取各具有同一重叠区域的两张采集图像在重叠区域中像素值的加权差值的绝对值之和,或者,各具有同一重叠区域的两张采集图像在重叠区域中像素值的加权差值的平方值之和;
    其中，所述两张采集图像在重叠区域中像素值的加权差值包括：第一乘积与第二乘积之间的差值；所述第一乘积包括：第一采集图像的亮度补偿信息与所述第一采集图像所述重叠区域中至少一个像素点的像素值之和的乘积，所述第二乘积包括：第二采集图像的亮度补偿信息与所述第二采集图像所述重叠区域中所述至少一个像素点的像素值之和的乘积。
  28. 根据权利要求1-27任一所述的方法,其特征在于,还包括:
    显示所述拼接图像和/或基于所述拼接图像进行智能驾驶控制。
  29. 一种图像拼接装置,其特征在于,包括:
    第一获取模块，用于获取待拼接的多张输入图像中各输入图像的亮度补偿信息；其中，所述多张输入图像分别由设置在设备的不同部位上的多路摄像头对应采集得到；
    补偿模块,用于分别基于各输入图像的亮度补偿信息对输入图像进行亮度补偿;
    拼接模块,用于对亮度补偿后的输入图像进行拼接处理,得到拼接图像。
  30. 根据权利要求29所述的装置,其特征在于,所述多张输入图像中至少两张相邻图像具有重叠区域;或者,所述多张输入图像中每二张相邻图像均具有重叠区域。
  31. 根据权利要求29或30所述的装置,其特征在于,所述设备包括:车辆或机器人;和/或,
    所述多路摄像头的数量包括:4-8个。
  32. 根据权利要求31所述的装置,其特征在于,所述多路摄像头包括:至少一个设置在所述车辆的头部位置的摄像头,至少一个设置在所述车辆的尾部位置的摄像头,至少一个设置在所述车辆的车身一侧中部区域内的摄像头,和至少一个设置在所述车辆的车身另一侧中部区域内的摄像头;或者,
    所述多路摄像头包括:至少一个设置在所述车辆的头部位置的摄像头,至少一个设置在所述车辆的尾部位置的摄像头,至少两个分别设置在所述车辆的车身一侧前半部区域和后半部区域内的摄像头,和至少两个分别设置在所述车辆的车身另一侧前半部区域和后半部区域内的摄像头。
  33. 根据权利要求29-32任一所述的装置,其特征在于,所述多路摄像头包括:至少一鱼眼摄像头,和/或,至少一非鱼眼摄像头。
  34. 根据权利要求29-33任一所述的装置，其特征在于，所述第一获取模块，用于根据所述多张输入图像中的重叠区域确定所述多张输入图像中各输入图像的亮度补偿信息。
  35. 根据权利要求34所述的装置,其特征在于,所述各输入图像的亮度补偿信息用于使经过亮度补偿后的各输入图像之间的亮度差异落入预先设定的亮度容差范围内。
  36. 根据权利要求34所述的装置，其特征在于，所述各输入图像的亮度补偿信息用于使经过亮度补偿后，各重叠区域中每二张输入图像的像素值差异之和最小或者小于预设误差值。
  37. 根据权利要求29-36任一所述的装置,其特征在于,还包括:
    第二获取模块,用于分别针对各输出分块,获取所述输出分块对应的输入图像中的输入图像块;
    所述补偿模块,用于基于所述输入图像块所在输入图像的亮度补偿信息对所述输入图像块进行亮度补偿。
  38. 根据权利要求37所述的装置，其特征在于，所述输出分块对应的输入图像中的输入图像块属于相邻输入图像的重叠区域时，所述第二获取模块用于获取所述输出分块对应的具有重叠区域的所有输入图像中的输入图像块。
  39. 根据权利要求37或38所述的装置,其特征在于,所述第二获取模块用于:
    获取所述输出分块的坐标信息对应的输入图像中输入图像块的位置信息;
    基于所述输入图像块的位置信息,从所述对应的输入图像中获取所述输入图像块。
  40. 根据权利要求37-39任一所述的装置,其特征在于,所述补偿模块,用于分别针对所述输入图像块的每个通道,以所述输入图像在所述通道的亮度补偿信息对所述输入图像块中各像素在所述通道的像素值进行乘法计算处理。
  41. 根据权利要求37-40任一所述的装置,其特征在于,还包括:
    第三获取模块,用于基于亮度补偿后的输入图像块获取所述输出分块上的输出图像块;
    所述拼接模块,用于对各输出图像块进行拼接,得到所述拼接图像。
  42. 根据权利要求41所述的装置，其特征在于，所述第三获取模块，用于基于所述输出分块中各像素点的坐标及对应的输入图像块中的坐标，对所述输入图像块进行插值，得到所述输出分块上的输出图像块。
  43. 根据权利要求42所述的装置,其特征在于,所述输出分块对应的输入图像块属于相邻输入图像的重叠区域时,所述第三获取模块,用于分别基于所述输出分块中各像素点的坐标及对应的每个输入图像块中的坐标,对所述输出分块对应的每个所述输入图像块进行插值,并对所述输出分块对应的所有插值后的输入图像块进行叠加,得到所述输出图像块。
  44. 根据权利要求43所述的装置,其特征在于,所述第三获取模块对所述输出分块对应的所有插值后的输入图像块进行叠加时,用于:分别针对每个所述插值后的输入图像块的每个通道,获取每个像素点在至少两个不同的分辨率下像素值的平均值、或者加权值、或者加权平均值;其中,所述至少两个不同的分辨率包括:所述插值后的输入图像块的分辨率和至少一个低于所述插值后的输入图像块的分辨率的较低分辨率;分别针对所述输出分块对应的所有所述插值后的输入图像块的每个通道,按照每个像素点的所述像素值的平均值、或者加权值、或者加权平均值进行加权叠加。
  45. 根据权利要求41-44任一所述的装置,其特征在于,还包括:
    第四获取模块,用于基于所述多路摄像头对应采集的多张采集图像到拼接图像的融合变换信息,获取输出分块中各像素点的坐标对应于采集图像的输入分块中像素点的坐标;
    第五获取模块,用于获取所述输入分块的位置信息、用于表示所述输入分块是否属于任意两张采集图像的重叠区域的重叠属性信息;
    生成模块,用于按照输出分块的顺序,在拼接信息表中分别通过一个信息表分块记录每个输出分块的相关信息;
    存储模块,用于存储所述拼接信息表;
    所述第二获取模块,用于从所述拼接信息表中依序读取一个信息表分块,基于读取的信息表分块记录的输出分块的相关信息,获取所述记录的输出分块对应的输入图像块。
  46. 根据权利要求45所述的装置,其特征在于,所述输出分块的相关信息包括:输出分块的位置信息、输出分块对应的输入分块的重叠属性信息、输出分块对应的输入分块所属输入图像的标识、输出分块中各像素点的坐标对应的输入分块中像素点的坐标、输入分块的位置信息。
  47. 根据权利要求45或46所述的装置,其特征在于,还包括:
    第六获取模块,用于基于多路摄像头对应采集的多张采集图像到拼接图像的各级变换信息,获取融合变换信息,所述各级变换信息包括:镜头去畸变信息、视角变换信息、配准信息。
  48. 根据权利要求45-47任一所述的装置,其特征在于,还包括:
    控制模块，用于在所述多路摄像头中任意一个或多个摄像头的位置和/或方向发生变化时，指示所述第四获取模块基于所述多路摄像头对应采集的多张采集图像到拼接图像的融合变换信息，获取输出分块中各像素点的坐标对应于采集图像的输入分块中像素点的坐标；指示所述第五获取模块获取所述输入分块的位置信息、用于表示所述输入分块是否属于任意两张采集图像的重叠区域的重叠属性信息；指示所述生成模块按照输出分块的顺序，在拼接信息表中分别通过一个信息表分块记录每个输出分块的相关信息。
  49. 根据权利要求45-48任一所述的装置,其特征在于,还包括:
    读取模块,用于在拼接信息表中记录所有输出分块的相关信息之后,将所述拼接信息表读入内存中;以及将所述多路摄像头采集的所述待拼接的多张输入图像读入所述内存中;
    所述第二获取模块,用于从所述内存中的所述拼接信息表中依序读取一个信息表分块并读入计算芯片中,基于读取的信息表分块记录的输出分块的相关信息,从所述内存中获取所述记录的输出分块对应的输入图像块并读入所述计算芯片中;所述计算芯片包括所述补偿模块和所述拼接模块;
    所述拼接模块,用于将获取到的输出图像块依序写回所述内存;在基于所述拼接信息表对应的一个拼接图像的所有输出图像块写回所述内存中时,得到所述拼接图像。
  50. 根据权利要求45-49任一所述的装置,其特征在于,还包括:
    第七获取模块,用于基于多路摄像头采集得到的多张采集图像的重叠区域,获取所述多张采集图像中各采集图像的亮度补偿信息并存储在所述拼接信息表中、或者所述拼接信息表的各所述信息表分块中;
    所述第一获取模块,用于分别从所述拼接信息表中或者所述信息表分块中获取同一摄像头采集的采集图像的亮度补偿信息作为相应输入图像的亮度补偿信息。
  51. 根据权利要求50所述的装置,其特征在于,还包括:
    控制模块，用于在检测到光线变化满足预定条件时，指示所述第七获取模块执行基于多路摄像头采集得到的多张采集图像的重叠区域，获取所述多张采集图像中各采集图像的亮度补偿信息的操作，并以本次获取的各采集图像的亮度补偿信息对所述拼接信息表中各采集图像的亮度补偿信息进行更新。
  52. 根据权利要求50或51所述的装置,其特征在于,所述第七获取模块,用于基于亮度补偿后,所述多张采集图像的重叠区域中每二张采集图像的像素值差异之和最小化的方式,获取所述多张采集图像中各采集图像的亮度补偿信息。
  53. 根据权利要求52所述的装置,其特征在于,所述第七获取模块,用于分别针对采集图像的每个通道,基于亮度补偿后,所述多张采集图像的重叠区域中每二张采集图像在所述通道的像素值差异之和最小化的方式,获取所述多张采集图像中各采集图像在所述通道的亮度补偿信息。
  54. 根据权利要求53所述的装置,其特征在于,所述第七获取模块基于以下方式针对采集图像的一个通道,获取多张采集图像的重叠区域中每二张采集图像在所述通道的像素值差异之和:
    分别针对采集图像的一个通道,获取各具有同一重叠区域的两张采集图像在重叠区域中像素值的加权差值的绝对值之和,或者,各具有同一重叠区域的两张采集图像在重叠区域中像素值的加权差值的平方值之和;
    其中，所述两张采集图像在重叠区域中像素值的加权差值包括：第一乘积与第二乘积之间的差值；所述第一乘积包括：第一采集图像的亮度补偿信息与所述第一采集图像所述重叠区域中至少一个像素点的像素值之和的乘积，所述第二乘积包括：第二采集图像的亮度补偿信息与所述第二采集图像所述重叠区域中所述至少一个像素点的像素值之和的乘积。
  55. 根据权利要求29-54任一所述的装置,其特征在于,还包括:
    显示模块,用于显示所述拼接图像;和/或,
    智能驾驶模块,用于基于所述拼接图像进行智能驾驶控制。
  56. 一种车载图像处理装置,其特征在于,包括:
    第一存储模块,用于存储拼接信息表和分别由多路摄像头对应采集得到的多张输入图像;
    计算芯片，用于从所述第一存储模块获取待拼接的多张输入图像中各输入图像的亮度补偿信息；分别针对各输出分块，从所述第一存储模块获取所述输出分块对应的输入图像中的输入图像块；基于所述输入图像块所在输入图像的亮度补偿信息对所述输入图像块进行亮度补偿，基于亮度补偿后的输入图像块获取所述输出分块上的输出图像块并将获取到的输出图像块依序写回所述第一存储模块；响应于基于所述拼接信息表对应的一个拼接图像的所有输出图像块写回所述第一存储模块中，得到拼接图像。
  57. 根据权利要求56所述的装置,其特征在于,所述拼接信息表包括至少一个信息表分块,所述信息表分块包括所述多张输入图像的亮度补偿信息和每个输出分块的相关信息,所述输出分块的相关信息包括:输出分块的位置信息、输出分块对应的输入分块的重叠属性信息、输出分块对应的输入分块所属输入图像的标识、输出分块中各像素点的坐标对应的输入分块中像素点的坐标、输入分块的位置信息。
  58. 根据权利要求56或57所述的装置,其特征在于,所述第一存储模块包括:易失性存储模块;
    所述计算芯片包括:现场可编程门阵列FPGA。
  59. 根据权利要求56-58任一所述的装置,其特征在于,所述第一存储模块,还用于存储第一应用单元和第二应用单元;
    所述第一应用单元,用于基于所述多路摄像头对应采集的多张采集图像到拼接图像的融合变换信息,获取输出分块中各像素点的坐标对应于采集图像的输入分块中像素点的坐标;获取所述输入分块的位置信息、用于表示所述输入分块是否属于任意两张采集图像的重叠区域的重叠属性信息;按照输出分块的顺序,在拼接信息表中分别通过一个信息表分块记录每个输出分块的相关信息;
    所述第二应用单元,用于基于多路摄像头采集得到的多张采集图像的重叠区域,获取所述多张采集图像中各采集图像的亮度补偿信息并存储在所述拼接信息表的各所述信息表分块中。
  60. 根据权利要求56-59任一所述的装置,其特征在于,还包括以下任意一个或多个模块:
    非易失性存储模块,用于存储所述计算芯片的运行支持信息;
    输入接口,用于连接所述多路摄像头和所述第一存储模块,用于将所述多路摄像头采集得到的多张输入图像写入所述第一存储模块中;
    第一输出接口,用于连接所述第一存储模块和显示屏,用于将所述第一存储模块中的拼接图像输出给所述显示屏显示;
    第二输出接口,用于连接所述第一存储模块和智能驾驶模块,用于将所述第一存储模块中的拼接图像输出给所述智能驾驶模块,以便所述智能驾驶模块基于所述拼接图像进行智能驾驶控制。
  61. 一种电子设备,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,且所述计算机程序被执行时,实现上述权利要求1-28任一所述的方法。
  62. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该计算机程序被处理器执行时,实现上述权利要求1-28任一所述的方法。
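权利要求25至27所述的亮度补偿信息求解方式（使重叠区域中每二张采集图像像素值加权差的平方和最小）可按如下最小二乘示意实现。注意：为避免补偿系数全为零的平凡解，示例中额外加入了使系数趋近于1的正则项；该正则项以及 solve_gains 等名称均为本文假设的补充，并非权利要求明确给出：

```python
# 示意：对单个通道，最小化 sum_pairs (a_i*S_ij - a_j*S_ji)^2 + lam*sum_k (a_k-1)^2，
# 其中 S_ij 为图 i 在与图 j 的重叠区域内像素值之和（对应权利要求27的加权差平方和形式），
# a_k 为图 k 在该通道的亮度补偿系数。正则项 lam*(a_k-1)^2 为本文补充的假设。

def solve_gains(n_images, overlaps, lam=1000.0, lr=1e-5, iters=4000):
    """overlaps: [(i, j, S_ij, S_ji)] 列表；用梯度下降求各图的补偿系数。"""
    a = [1.0] * n_images
    for _ in range(iters):
        # 正则项梯度：把系数拉向 1，避免全零平凡解
        grad = [2.0 * lam * (a_k - 1.0) for a_k in a]
        for i, j, sij, sji in overlaps:
            d = a[i] * sij - a[j] * sji   # 补偿后的加权差
            grad[i] += 2.0 * d * sij
            grad[j] -= 2.0 * d * sji
        a = [a_k - lr * g for a_k, g in zip(a, grad)]
    return a

# 两路摄像头：图0在重叠区偏亮（像素和200），图1偏暗（像素和100）
gains = solve_gains(2, [(0, 1, 200.0, 100.0)])
print([round(g, 3) for g in gains])  # 图0系数压暗、图1系数提亮，使重叠区差异缩小
```

对 RGB 图像，按权利要求26对每个通道分别执行一次上述求解即可；实际系统中求得的系数可写入拼接信息表，在光线变化满足预定条件时重新求解并更新。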
PCT/CN2019/098546 2018-08-29 2019-07-31 图像拼接方法和装置、车载图像处理装置、电子设备、存储介质 WO2020042858A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202101462WA SG11202101462WA (en) 2018-08-29 2019-07-31 Image stitching method and device, on-board image processing device, electronic apparatus, and storage medium
JP2021507821A JP7164706B2 (ja) 2018-08-29 2019-07-31 画像繋ぎ合わせ方法及び装置、車載画像処理装置、電子機器、記憶媒体
US17/172,267 US20210174471A1 (en) 2018-08-29 2021-02-10 Image Stitching Method, Electronic Apparatus, and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810998634.9 2018-08-29
CN201810998634.9A CN110874817B (zh) 2018-08-29 2018-08-29 图像拼接方法和装置、车载图像处理装置、设备、介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/172,267 Continuation US20210174471A1 (en) 2018-08-29 2021-02-10 Image Stitching Method, Electronic Apparatus, and Storage Medium

Publications (1)

Publication Number Publication Date
WO2020042858A1 true WO2020042858A1 (zh) 2020-03-05

Family

ID=69644982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098546 WO2020042858A1 (zh) 2018-08-29 2019-07-31 图像拼接方法和装置、车载图像处理装置、电子设备、存储介质

Country Status (5)

Country Link
US (1) US20210174471A1 (zh)
JP (1) JP7164706B2 (zh)
CN (1) CN110874817B (zh)
SG (1) SG11202101462WA (zh)
WO (1) WO2020042858A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240582A (zh) * 2021-04-13 2021-08-10 浙江大华技术股份有限公司 一种图像拼接方法及装置
CN116490894A (zh) * 2020-12-31 2023-07-25 西门子股份公司 一种图像拼接方法、装置和计算机可读介质

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL271518B2 (en) * 2019-12-17 2023-04-01 Elta Systems Ltd Radiometric corrections in the Mozika image
CN111862623A (zh) * 2020-07-27 2020-10-30 上海福赛特智能科技有限公司 一种车辆侧面图拼接装置和方法
US11978181B1 (en) 2020-12-11 2024-05-07 Nvidia Corporation Training a neural network using luminance
US11637998B1 (en) * 2020-12-11 2023-04-25 Nvidia Corporation Determination of luminance values using image signal processing pipeline
CN112714282A (zh) * 2020-12-22 2021-04-27 北京百度网讯科技有限公司 远程控制中的图像处理方法、装置、设备和程序产品
CN112668442B (zh) * 2020-12-23 2022-01-25 南京市计量监督检测院 一种基于智能图像处理的数据采集与联网方法
CN112738469A (zh) * 2020-12-25 2021-04-30 浙江合众新能源汽车有限公司 图像处理方法、设备、系统和计算机可读介质
CN112785504B (zh) * 2021-02-23 2022-12-23 深圳市来科计算机科技有限公司 一种昼夜图像融合的方法
CN113344834B (zh) * 2021-06-02 2022-06-03 深圳兆日科技股份有限公司 图像拼接方法、装置及计算机可读存储介质
CN113658058B (zh) * 2021-07-22 2024-07-02 武汉极目智能技术有限公司 一种车载环视系统中的亮度均衡方法及系统
CN113781302B (zh) * 2021-08-25 2022-05-17 北京三快在线科技有限公司 多路图像拼接方法、系统、可读存储介质、及无人车
EP4177823A1 (en) * 2021-11-03 2023-05-10 Axis AB Producing an output image of a scene from a plurality of source images captured by different cameras
CN115460354B (zh) * 2021-11-22 2024-07-26 北京罗克维尔斯科技有限公司 图像亮度处理方法、装置、电子设备、车辆和存储介质
CN114387163A (zh) * 2021-12-10 2022-04-22 爱芯元智半导体(上海)有限公司 图像处理方法和装置
CN114897684A (zh) * 2022-04-25 2022-08-12 深圳信路通智能技术有限公司 车辆图像的拼接方法、装置、计算机设备和存储介质
CN115278068A (zh) * 2022-07-20 2022-11-01 重庆长安汽车股份有限公司 车载360全景影像系统的弱光增强方法及装置
CN115343013B (zh) * 2022-10-18 2023-01-20 湖南第一师范学院 空腔模型的压力测量方法及相关设备
CN116579927B (zh) * 2023-07-14 2023-09-19 北京心联光电科技有限公司 一种图像拼接方法、装置、设备及存储介质
CN117911287B (zh) * 2024-03-20 2024-08-02 中国科学院西安光学精密机械研究所 一种大幅壁画图像的交互式拼接修复方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045546A (zh) * 2010-12-15 2011-05-04 广州致远电子有限公司 一种全景泊车辅助系统
CN103810686A (zh) * 2014-02-27 2014-05-21 苏州大学 无缝拼接全景辅助驾驶系统及方法
CN106683047A (zh) * 2016-11-16 2017-05-17 深圳百科信息技术有限公司 一种全景图像的光照补偿方法和系统
CN106713755A (zh) * 2016-12-29 2017-05-24 北京疯景科技有限公司 全景图像的处理方法及装置
US20170232896A1 (en) * 2015-06-17 2017-08-17 Geo Semiconductor Inc. Vehicle vision system
CN107330872A (zh) * 2017-06-29 2017-11-07 无锡维森智能传感技术有限公司 用于车载环视系统的亮度均衡方法和装置
US20180035047A1 (en) * 2016-07-29 2018-02-01 Multimedia Image Solution Limited Method for stitching together images taken through fisheye lens in order to produce 360-degree spherical panorama

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6802614B2 (en) * 2001-11-28 2004-10-12 Robert C. Haldiman System, method and apparatus for ambient video projection
US20040151376A1 (en) * 2003-02-05 2004-08-05 Konica Minolta Holdings, Inc. Image processing method, image processing apparatus and image processing program
JP2009258057A (ja) * 2008-04-21 2009-11-05 Hamamatsu Photonics Kk 放射線像変換パネル
CN101409790B (zh) * 2008-11-24 2010-12-29 浙江大学 一种高效的多投影仪拼接融合方法
WO2010147293A1 (ko) * 2009-06-15 2010-12-23 엘지전자 주식회사 디스플레이 장치
CN101980080B (zh) * 2010-09-19 2012-05-23 华为终端有限公司 共光心摄像机、图像处理方法及装置
JP5585494B2 (ja) * 2011-02-28 2014-09-10 富士通株式会社 画像処理装置、画像処理プログラム及び画像処理方法
JP5935432B2 (ja) * 2012-03-22 2016-06-15 株式会社リコー 画像処理装置、画像処理方法及び撮像装置
US9142012B2 (en) * 2012-05-31 2015-09-22 Apple Inc. Systems and methods for chroma noise reduction
JP6084434B2 (ja) * 2012-10-31 2017-02-22 クラリオン株式会社 画像処理システム及び画像処理方法
CN104091316A (zh) * 2013-04-01 2014-10-08 德尔福电子(苏州)有限公司 一种车辆鸟瞰辅助系统图像数据处理方法
CN105072365B (zh) * 2015-07-29 2018-04-13 深圳华侨城文化旅游科技股份有限公司 一种金属幕投影下增强图像效果的方法及系统
US10033928B1 (en) * 2015-10-29 2018-07-24 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
CN105516614B (zh) * 2015-11-27 2019-02-05 联想(北京)有限公司 信息处理方法及电子设备
CN106994936A (zh) * 2016-01-22 2017-08-01 广州求远电子科技有限公司 一种3d全景泊车辅助系统
CN107333051B (zh) * 2016-04-28 2019-06-21 杭州海康威视数字技术股份有限公司 一种室内全景视频生成方法及装置
CN105957015B (zh) * 2016-06-15 2019-07-12 武汉理工大学 一种螺纹桶内壁图像360度全景拼接方法及系统
US10290111B2 (en) * 2016-07-26 2019-05-14 Qualcomm Incorporated Systems and methods for compositing images
CN106709868A (zh) * 2016-12-14 2017-05-24 云南电网有限责任公司电力科学研究院 一种图像拼接方法及装置
CN106875339B (zh) * 2017-02-22 2020-03-27 长沙全度影像科技有限公司 一种基于长条形标定板的鱼眼图像拼接方法
CN107424179A (zh) * 2017-04-18 2017-12-01 微鲸科技有限公司 一种图像均衡方法及装置
CN108228696B (zh) * 2017-08-31 2021-03-23 深圳市商汤科技有限公司 人脸图像检索方法和系统、拍摄装置、计算机存储介质
CN108205704B (zh) * 2017-09-27 2021-10-29 深圳市商汤科技有限公司 一种神经网络芯片
CN108234975A (zh) * 2017-12-29 2018-06-29 花花猫显示科技有限公司 基于摄像机的拼接墙颜色均匀性和一致性控制方法



Also Published As

Publication number Publication date
JP2021533507A (ja) 2021-12-02
JP7164706B2 (ja) 2022-11-01
CN110874817B (zh) 2022-02-01
US20210174471A1 (en) 2021-06-10
CN110874817A (zh) 2020-03-10
SG11202101462WA (en) 2021-03-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19853448; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021507821; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.06.2021))
122 Ep: pct application non-entry in european phase (Ref document number: 19853448; Country of ref document: EP; Kind code of ref document: A1)