WO2018209603A1 - Image processing method, image processing device and storage medium - Google Patents

Image processing method, image processing device and storage medium Download PDF

Info

Publication number
WO2018209603A1
WO2018209603A1 PCT/CN2017/084736 CN2017084736W
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
hdr
target space
depth
Prior art date
Application number
PCT/CN2017/084736
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to PCT/CN2017/084736 priority Critical patent/WO2018209603A1/zh
Priority to CN201780034126.2A priority patent/CN109314776B/zh
Publication of WO2018209603A1 publication Critical patent/WO2018209603A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, and a storage medium.
  • At present, depth information is generally obtained by directly capturing a related image, such as a color image, with a camera or similar device, and then computing a depth image from the image data of that related image using a correlation algorithm; the depth image includes the depth information of any point on the image.
  • As depth information is applied ever more widely, it is often necessary to obtain the depth information of moving objects in real time. However, an image captured of a moving object is prone to blur, and failure to expose correctly and promptly while the light is changing causes loss of image data in highlight or low-light areas, so the depth information calculated from the captured image is inaccurate.
  • the technical problem to be solved by the present invention is to provide an image processing method, an image processing device, and a storage medium, which can improve the calculation accuracy of depth information.
  • The first technical solution adopted by the present invention is to provide an image processing method, including: acquiring a plurality of high dynamic range (HDR) images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space; and calculating a depth image of the target space using the image data of the plurality of HDR images.
  • The second technical solution adopted by the present invention is to provide an image processing apparatus including a processor and a memory connected to each other; the memory is used to store computer instructions and data; and the processor executes the computer instructions to: acquire a plurality of high dynamic range (HDR) images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space; and calculate a depth image of the target space using the image data of the plurality of HDR images.
  • The third technical solution adopted by the present invention is to provide a non-volatile storage medium storing computer instructions executable by a processor, the computer instructions being used for the image processing method of the first technical solution.
  • The fourth technical solution adopted by the present invention is to provide an image processing method, including: acquiring a plurality of images of different exposure times, wherein the plurality of images are all captured of the same target space at the same viewpoint; calculating, using the depth information of an already-calculated depth image of the target space, the depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and deblurring at least one of the plurality of images according to that depth information; performing pixel point matching on the deblurred images; and synthesizing the plurality of images according to the pixel point matching result to obtain a high dynamic range (HDR) image.
  • The fifth technical solution adopted by the present invention is to provide an image processing apparatus including a processor and a memory connected to each other; the memory is used to store computer instructions and data; and the processor executes the computer instructions to: acquire a plurality of images of different exposure times, wherein the plurality of images are all captured of the same target space at the same viewpoint; calculate, using the depth information of an already-calculated depth image of the target space, the depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and deblur at least one of the plurality of images according to that depth information; perform pixel point matching on the deblurred images; and synthesize the plurality of images according to the pixel point matching result to obtain a high dynamic range (HDR) image.
  • The sixth technical solution adopted by the present invention is to provide a non-volatile storage medium storing computer instructions executable by a processor, the computer instructions being used to perform the image processing method of the fourth technical solution.
  • With the above solutions, the HDR images obtained by processing the captured images are used to calculate the depth image, which can improve the accuracy of the calculated depth information of the target space.
  • In addition, when synthesizing an HDR image, the captured images can first be deblurred, which improves the sharpness of the HDR image and the accuracy of the HDR image data; after deblurring, image data rendered unusable by lighting problems is avoided, improving the adaptability of image acquisition to complex lighting. Moreover, deblurring removes the effect of motion on image acquisition, so HDR image synthesis and depth calculation that adapt to a moving state are realized.
  • FIG. 1 is a flow chart of an embodiment of an image processing method of the present invention;
  • FIG. 2 is a schematic diagram of image acquisition in an application scenario shown in FIG. 1;
  • FIG. 3 is a flow chart of step S12 shown in FIG. 1 in another embodiment;
  • FIG. 4 is a schematic diagram of image matching in an application scenario shown in FIG. 3;
  • FIG. 5 is a flow chart of another embodiment of the image processing method of the present invention;
  • FIG. 6 is a flow chart of step S52 shown in FIG. 5 in still another embodiment;
  • FIG. 7 is a schematic structural diagram of an embodiment of an image processing apparatus of the present invention.
  • The solution proposed by the invention can be applied to scenes in which the image collector moves relative to the photographed target space; for example, the image collector is mounted in a moving or reversing vehicle, and images of the surrounding environment captured by the image collector in real time (for example, once every set interval) are used to calculate the depth information of the current surroundings, that is, the distance between the vehicle and surrounding objects.
  • In another example, the shooting target is a vehicle on the road and the image collector is fixed at the roadside; images of passing vehicles are captured by the image collector in real time to calculate each vehicle's current depth information, that is, the distance between the vehicle and the image collector.
  • FIG. 1 is a flow chart of an embodiment of an image processing method according to the present invention.
  • In this embodiment, the method is used to calculate a depth image of a target space, is executed by an image processing device, and includes the following steps:
  • S11: Acquire a plurality of HDR images from multiple viewpoints.
  • The plurality of HDR (High-Dynamic Range) images are all obtained by capturing the same target space, that is, the HDR images of the multiple viewpoints have a certain overlap. In this embodiment, the multiple viewpoints capture images simultaneously, and the captured target spaces partially overlap. In other embodiments, the HDR images of the multiple viewpoints may also be HDR images captured and synthesized by the same viewpoint at different times and different positions for the same target space.
  • Specifically, the HDR image of each viewpoint can be synthesized from a plurality of images captured by that viewpoint at different exposure times.
  • For example, S11 specifically includes: performing pixel point matching on the plurality of images of different exposure times of each viewpoint; and synthesizing, according to the pixel point matching result, the plurality of images of each viewpoint into the HDR image of that viewpoint. Taking a first image and a second image captured at two different exposure times for each viewpoint as an example, the matched pixel points in the first image and the second image of each viewpoint are found, and the HDR image data of the viewpoint is calculated from the matched pixel points and the image data in the first image and the second image.
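  • For illustration only, the following is a minimal Python sketch of the kind of set synthesis algorithm referred to above. It assumes a linear camera response and already matched, pixel-aligned uint8 exposures; the mid-tone weighting function is a common choice rather than one prescribed by this disclosure.

```python
import numpy as np

def synthesize_hdr(img1, img2, t1, t2):
    """Fuse two matched exposures of one viewpoint into an HDR radiance map."""
    acc = np.zeros(img1.shape, dtype=np.float64)
    wsum = np.zeros(img1.shape, dtype=np.float64)
    for img, t in ((img1, t1), (img2, t2)):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(img / 127.5 - 1.0)  # trust mid-tones, distrust clipped pixels
        acc += w * (img / t)                 # per-pixel radiance estimate I / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)      # weighted average radiance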
  • As shown in FIG. 2, the image processing apparatus first uses the image collectors 21 and 22 placed at viewpoints A and B to capture the set road direction in real time, obtaining the first image a1 and second image a2 at viewpoint A and the first image b1 and second image b2 at viewpoint B. The first image a1 and the first image b1 are captured at the first exposure time t1, and the second image a2 and the second image b2 are captured at the second exposure time t2. The first exposure time differs from the second exposure time; in this embodiment, the first exposure time is greater than the second exposure time by Δt.
  • The image processing device performs pixel point matching on the two images captured at each viewpoint, that is, the pixels corresponding to the same spatial points in the first image a1 and the second image a2 of viewpoint A are matched, and the pixels corresponding to the same spatial points in the first image b1 and the second image b2 of viewpoint B are matched.
  • After this matching, the image data of each pixel in the HDR image can be calculated using a set synthesis algorithm and the image data of the matched pixel pairs in the two images a1 and a2 of viewpoint A, yielding one frame of HDR image for viewpoint A; similarly, the image data of each pixel in the HDR image can be calculated using the set synthesis algorithm and the image data of the matched pixel pairs in the two images b1 and b2 of viewpoint B, yielding one frame of HDR image for viewpoint B.
  • It can be understood that the pixel point matching and HDR image synthesis in the above process are existing technical solutions; for example, the pixel point matching may use a grayscale-based template matching algorithm, a feature-based matching algorithm, or the like. They are not the inventive point of the present invention and are not limited herein.
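  • As a concrete example of a grayscale-based template matching algorithm, the sketch below matches one pixel between two exposures by normalized cross-correlation over a local search window; the window and search sizes are illustrative parameters, not values fixed by this disclosure.

```python
import numpy as np

def match_pixel(src, dst, y, x, win=5, search=10):
    """Return the best match in dst for pixel (y, x) of src, plus its score."""
    h, w = src.shape
    assert win <= y < h - win and win <= x < w - win, "pixel too close to border"

    def patch(img, py, px):
        return img[py - win:py + win + 1, px - win:px + win + 1].astype(np.float64)

    def normalize(p):
        return (p - p.mean()) / (p.std() + 1e-9)

    tpl = normalize(patch(src, y, x))
    best_score, best_pos = -np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = patch(dst, y + dy, x + dx)
            if cand.shape != tpl.shape:
                continue  # candidate window fell off the image border; skip it
            score = float((tpl * normalize(cand)).mean())  # NCC in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (y + dy, x + dx)
    return best_pos, best_score  # the score can serve as a matching degree
```

The NCC score doubles as the "matching degree" used later when selecting robust pixel points.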
  • Further, the image processing apparatus may also perform the steps of the image processing method embodiment on HDR image synthesis shown in FIG. 5 below to obtain the HDR image of each viewpoint.
  • S12: Calculate the depth image of the target space using the image data of the plurality of HDR images.
  • In this embodiment, the depth image of the target space at the acquisition time is calculated from the image data of the multiple viewpoints; the image data is specifically visual data such as the RGB values, gray levels, and brightness of a color image.
  • For example, the image processing apparatus synthesizes the HDR image of each corresponding viewpoint from the images captured at multiple viewpoints, and calculates the depth image of the target space through a set algorithm and the image data of the HDR images of the different viewpoints.
  • the depth image includes depth information for any pixel point thereon.
  • For the manner of calculating the relevant depth information using the image data of different viewpoints, refer to existing depth calculation methods.
  • Since an HDR image provides more dynamic range and image detail than an ordinary image and better reflects the visual effect of the real environment, this embodiment does not calculate the depth image directly from the captured images; instead, the depth image is calculated from the HDR images obtained by processing the captured images, which improves the accuracy of the depth calculation for the target space.
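  • For a two-viewpoint setup such as FIG. 2, the depth of matched pixels can be recovered with the classic pinhole stereo relation; the sketch below assumes rectified views with a known focal length and baseline, which this disclosure does not prescribe.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Depth image from a per-pixel disparity map between two viewpoint
    HDR images: Z = f * B / d (rectified pinhole stereo assumption)."""
    d = np.maximum(disparity.astype(np.float64), 1e-6)  # avoid divide-by-zero
    return focal_px * baseline_m / d
```

For example, with a 1000 px focal length, a 0.5 m baseline, and a 10 px disparity, the depth is 1000 * 0.5 / 10 = 50 m.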
  • Referring to FIG. 3, in another embodiment step S12 includes the following sub-steps:
  • S121: For each viewpoint, acquire the pixel points whose matching degree exceeds a preset value among the multiple images used to synthesize the HDR image, as the robust pixel points of the HDR image of the corresponding viewpoint. The HDR image of each viewpoint is synthesized from the multiple images captured by that viewpoint.
  • A robust pixel point is a pixel with higher robustness, that is, a pixel with a higher matching degree among the multiple images captured at each viewpoint.
  • When pixel point matching is performed on the multiple images captured at each viewpoint, a matching degree can be obtained for the multiple pixels corresponding to each spatial point.
  • In the viewpoint A and B images obtained as shown in FIG. 2, it is calculated that the pixel e1 of the first image a1 of viewpoint A and the pixel e2 of the second image a2 have a matching relationship, and the matching degree of e1 and e2 is 70%;
  • it is calculated that the pixel f1 of the first image a1 of viewpoint A and the pixel f2 of the second image a2 have a matching relationship, and the matching degree of f1 and f2 is 30%. In this way, the matching degrees of all matched pixel pairs in the first image a1 and the second image a2 of viewpoint A are obtained, and similarly the matching degrees of all matched pixel pairs in the first image b1 and the second image b2 of viewpoint B.
  • The matching degree of the matched pixel pairs in the two images of each viewpoint is compared with a preset value (for example, 60%), pixel pairs whose matching degree exceeds the preset value (such as e1 and e2) are obtained, and the pixel e corresponding to such a pair in the HDR image of the viewpoint is determined as a robust pixel point.
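  • A minimal sketch of the robust-pixel selection just described; the 0.6 threshold mirrors the 60% example above, and the `matches` structure is a hypothetical representation of matched pixel pairs and their matching degrees.

```python
def select_robust_pixels(matches, preset_degree=0.6):
    """matches maps an HDR-image pixel (y, x) to the matching degree of the
    exposure pair it was synthesized from, e.g. {(12, 40): 0.70, (55, 8): 0.30}.
    Pixels whose degree exceeds the preset value become robust pixel points."""
    return {pixel for pixel, degree in matches.items() if degree > preset_degree}
```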
  • S122: Perform matching of the robust pixel points between the HDR images of the multiple viewpoints.
  • S123: Determine the matching relationship of the other pixel points in the HDR images of the multiple viewpoints according to the matching relationship of the robust pixel points and the positional relationship between the robust pixel points and the other pixel points in the HDR image of the corresponding viewpoint.
  • S124: Calculate, according to the matching relationship of the pixel points between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  • In this embodiment, since the images of the multiple viewpoints are captured from the same target space, the viewpoint images at least partially overlap, and the robust pixel points are the pixels with higher robustness in the corresponding viewpoints, that is, the more clearly captured pixels. Therefore, by comparing the robust pixel points of different viewpoint images, the matching relationship of the other pixels around them can be determined quickly without comparing entire images, reducing the computation and time of the depth calculation. Accordingly, when calculating the depth information of the target space corresponding to the multiple viewpoint images, the matching relationship between the robust pixel points in the HDR images of the multiple viewpoints is first calculated from the image data of the robust pixel points of each viewpoint, and the matching relationship between the other pixel points in the HDR images of the multiple viewpoints is then calculated quickly.
  • For example, as shown in FIG. 4, the robust pixel points h_A, j_A, k_A of the HDR image of viewpoint A match the robust pixel points h_B, j_B, k_B of the HDR image of viewpoint B one by one, so it can be determined that the pixels inside the triangular regions 41 and 42, respectively formed by these three matched robust pixel points in the HDR images of viewpoints A and B, have a matching relationship. Using the image data of the pixels in the triangular region of each viewpoint's HDR image, the matching relationship between each of the other pixels in the triangular regions of the multiple viewpoints can be quickly calculated; for example, the pixel g_A in the triangular region 41 of viewpoint A matches g_B in the triangular region 42 of viewpoint B.
  • According to the above method, the matching relationship between every matchable pixel in the HDR images of the multiple viewpoints can be obtained; then, from the image data of each group of matched pixels (h_A-h_B, j_A-j_B, k_A-k_B, g_A-g_B shown in FIG. 4) in the corresponding HDR images, the depth information of the spatial point corresponding to that group at the acquisition time is calculated, and the depth image corresponding to the target space is obtained.
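  • The triangular-region constraint of FIG. 4 can be implemented with a barycentric point-in-triangle test, as in the sketch below; only pixels passing the test need to be searched for correspondences, which is what bounds the matching cost.

```python
import numpy as np

def in_triangle(p, a, b, c):
    """True if pixel p lies inside the triangle with vertices a, b, c
    (e.g. the matched robust pixels h, j, k of one viewpoint)."""
    a, b, c, p = (np.asarray(v, dtype=np.float64) for v in (a, b, c, p))
    m = np.column_stack((b - a, c - a))
    try:
        u, v = np.linalg.solve(m, p - a)  # barycentric coordinates of p
    except np.linalg.LinAlgError:
        return False  # degenerate (collinear) triangle
    return u >= 0.0 and v >= 0.0 and (u + v) <= 1.0
```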
  • In this embodiment of the invention, when calculating depth information and synthesizing the depth image, the matching relationship of the high-matching-degree robust pixel points determined while synthesizing each viewpoint's HDR image can be used to determine the matching relationship between the other pixel points related to the robust pixel positions in the HDR images of different viewpoints, so the entire HDR image need not be used for pixel point matching, which reduces the amount of calculation.
  • Moreover, the robust pixel points are the more clearly captured pixels of each viewpoint, so matching the pixels of the HDR images of different viewpoints in the above manner ensures the accuracy of pixel point matching.
  • FIG. 5 is a flow chart of another embodiment of the image processing method of the present invention.
  • In this embodiment, the method is used for HDR image synthesis and is executed by an image processing apparatus. The method is described by taking, as an example, each viewpoint synthesizing an HDR image from two images of different exposure times; it can be understood that a method of synthesizing an HDR image from three or more images of different exposure times follows in the same way. In this embodiment, the method includes the following steps:
  • S51: Acquire a first image of a first exposure time and a second image of a second exposure time.
  • The first image and the second image are both captured of the same target space at the same viewpoint. It can be understood that capturing the same target space does not simply mean that the target spaces captured in the first image and the second image are completely identical; here it should be understood as the image collector being fixed at one viewpoint with an unchanged shooting angle while capturing the target space.
  • When the image collector has no relative motion with respect to the target space, the target spaces captured in the first image and the second image are identical; but when the image collector and the target space move relative to each other, as shown in FIG. 2, the first image and the second image are two frames captured at successive times of a target space in relative motion, and the two frames contain different light data of the target space because of their different exposure times.
  • The first exposure time and the second exposure time are two different exposure durations, so the light intensity captured at corresponding spatial points differs between the first image and the second image. Specifically, the exposure start points of the first image and the second image may be set to be the same or different.
  • For example, the image collector obtains one frame as the first image through the first exposure time T1 starting at time T0, and then obtains another frame as the second image through the second exposure time T3 starting at time T2.
  • S52: Determine whether at least one of the first image and the second image satisfies a deblurring condition; if so, perform step S53, otherwise perform step S54.
  • In different applications, it is difficult for the image collector and the target space to be absolutely stationary; for example, there is hand shake when a person operates the image collector, or the image collector is mounted in a moving vehicle to capture the environment ahead or around it. The images captured by the image collector are therefore prone to blurred portions, especially images with long exposure times. Alternatively, because the light intensity of the capture environment varies, the captured image may contain overexposed or underexposed areas, which may also be regarded as the above blurred portions. Therefore, in this embodiment, when the image processing device obtains the first image and the second image captured at a viewpoint, it may perform a deblurring determination on at least one of them and then deblur accordingly.
  • The image requiring the deblurring determination (hereinafter referred to as the set image) can be chosen according to the actual application, as the first image, the second image, or both. In an embodiment, the image processing apparatus selects only the image with the longer exposure time of the two for the deblurring determination.
  • It can be understood that, in another implementation, the image processing apparatus may directly execute S53 without performing the determination described in S52; this can be set according to actual needs.
  • In this embodiment, the deblurring condition may be set to relate to the blur coefficients of the pixels in the set image. Correspondingly, referring to FIG. 6, S52 specifically includes the following sub-steps:
  • S521: Calculate the depth information of at least one of the first image and the second image using the depth information of the already-calculated depth image of the target space, and determine the blur coefficient of each pixel of the at least one image according to that depth information.
  • It should be noted that, in this embodiment, images are captured and the depth image of the target space is calculated once every set interval. The previously calculated depth information of the target space's depth image can be computed from the previously captured images, that is, from the HDR images of the multiple viewpoints obtained from images captured before the first image, yielding the depth image of the target space at the previous acquisition time; for details, refer to the depth calculation embodiments above. The depth image includes the depth information of any pixel on it.
  • The image processing device selects at least one of the first image and the second image as the set image for the deblurring determination; the selection may follow the user's original settings or a preset selection algorithm. The image processing device then calculates the depth information of the set image from the depth information of the previously captured depth images: for example, it obtains, from the previously captured depth images, the depth information of the previous depth image and the depth change between the previous depth image and the depth image before it; from that depth change it calculates the relative motion between the current acquisition viewpoint and the target space (for example, the speed, position, angle, and distance changes of the viewpoint relative to the target space, computed from the depth difference between the two earlier depth images and their acquisition interval); and from the depth information of the previous depth image and that relative motion it calculates the depth information of the current set image (for example, by computing the product of the viewpoint's uniform velocity relative to the target space and the image acquisition interval, and subtracting that product from the depth value of each pixel of the previous depth image to obtain the depth value of the corresponding pixel of the current set image). The image processing apparatus then calculates the blur coefficient of the set image from the depth information of the current set image.
  • In one embodiment, the image processing device may directly preset a first relationship between the depth information of the previous depth image, the depth change between the previous depth image and the depth image before it, and the blur coefficient; the image processing device can thus obtain the depth information of the previous depth image of the target space and the depth change between the previous depth image and the depth image before it, and calculate the blur coefficient of each pixel of the set image according to the first relationship.
  • In another embodiment, the image processing device may preset a second relationship between the depth information of the previous depth image, the depth change between the previous depth image and the depth image before it, and the current depth information, as well as a third relationship between the current depth information and the blur coefficient; the image processing device can thus obtain the depth information of the previous depth image of the target space and the depth change between the previous depth image and the depth image before it, calculate the depth information of each pixel of the set image according to the second relationship, and then calculate the blur coefficient of each pixel of the set image according to the third relationship. The first, second, and third relationships may be existing related algorithms and are not limited herein.
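  • The first, second, and third relationships are left open above; the sketch below shows one plausible instantiation, assuming per-pixel constant relative velocity between depth frames. The blur-coefficient formula is a toy third relationship, not anything prescribed by this disclosure.

```python
import numpy as np

def predict_set_image_depth(prev_depth, prev_prev_depth, depth_dt, capture_dt):
    """Second relationship (assumed form): extrapolate the set image's depth
    from the two most recent depth images under constant relative velocity."""
    velocity = (prev_depth - prev_prev_depth) / depth_dt  # per-pixel depth change rate
    return prev_depth + velocity * capture_dt

def blur_coefficient(depth, prev_depth, prev_prev_depth, depth_dt, exposure):
    """Third relationship (assumed form): blur grows with relative speed and
    exposure time, and falls with distance to the scene point."""
    speed = np.abs(prev_depth - prev_prev_depth) / depth_dt
    return speed * exposure / np.maximum(depth, 1e-6)
```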
  • For example, as shown in FIG. 2, at viewpoints A and B a first image of the first exposure time and a second image of the second exposure time are alternately captured forward every set interval, so at each viewpoint the odd-numbered frames are first images and the even-numbered frames are second images. Each HDR image is synthesized from two adjacent frames of a viewpoint: for viewpoint A, the first HDR image is synthesized from its first and second frames, the second HDR image from its second and third frames, and the third HDR image from its third and fourth frames; viewpoint B likewise synthesizes its first, second, and third HDR images.
  • The image processing device calculates, from the HDR images of the multiple viewpoints, the depth image of the target space corresponding to those HDR images; for example, it calculates the depth image of the first HDR image from the first HDR images of viewpoints A and B. Then, in the course of synthesizing the second HDR image, when the deblurring determination is performed on the second and/or third frame, the depth information of those frames is calculated according to the predetermined second relationship from the depth information in the depth image previously calculated from the first HDR images of viewpoints A and B and from the motion information of the viewpoint; the image processing device then determines, according to the predetermined third relationship, the blur coefficient of each pixel of the second and/or third frame used to synthesize the second HDR image.
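  • The sliding pairing of adjacent frames into HDR inputs described above can be expressed compactly; a trivial sketch:

```python
def hdr_frame_pairs(frames):
    """Group an alternating long/short exposure stream into the adjacent
    pairs used for HDR synthesis: (f1, f2), (f2, f3), (f3, f4), ...
    Odd-numbered frames are first images, even-numbered frames second images."""
    return list(zip(frames, frames[1:]))
```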
  • S522: Find the pixels in the at least one image whose blur coefficient is greater than a set value.
  • S523: When the pixels in the at least one image whose blur coefficient is greater than the set value satisfy a set pixel point condition, determine that the at least one image satisfies the deblurring condition; otherwise it does not.
  • The image processing device traverses the blur coefficient of each pixel of the set image to search for pixels whose blur coefficient is greater than the set value. The deblurring condition can also be set according to the actual application, and the way the image processing device judges the searched pixels corresponds to that condition, which is not limited herein. For example, if the deblurring condition is that the concentration of pixels whose blur coefficient exceeds the set value is greater than a set ratio, the image processing apparatus counts the concentration of the searched pixels and, when the concentration is greater than the set ratio, determines that the image satisfies the deblurring condition. Both the set value and the set ratio can be adjusted according to actual needs.
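  • A minimal sketch of the S522/S523 check; approximating "concentration" as the fraction of flagged pixels inside their bounding box is an assumption, and both thresholds are placeholders to be tuned.

```python
import numpy as np

def satisfies_deblurring_condition(blur_coeff, set_value=0.5, set_ratio=0.2):
    """S522: flag pixels whose blur coefficient exceeds the set value.
    S523: the image qualifies when the flagged pixels' concentration
    (here: their density within their bounding box) exceeds the set ratio."""
    mask = blur_coeff > set_value
    if not mask.any():
        return False
    ys, xs = np.nonzero(mask)
    box_area = (ys.ptp() + 1) * (xs.ptp() + 1)  # bounding box of flagged pixels
    return mask.sum() / box_area > set_ratio
```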
  • S53: Calculate the depth information of at least one of the first image and the second image according to the depth information in the already-calculated depth image of the target space, and deblur the at least one image according to its depth information.
  • Specifically, S53 may include: using the depth information of the already-calculated depth image to perform, on the at least one image (that is, the set image described above), deblurring processing matched to that depth information.
  • Existing conventional deblurring methods are all based on the assumption that every pixel in the image has the same depth; however, when objects at different distances exist in the target space, this assumption tends to cause large deblurring errors.
  • Therefore, the image processing apparatus can determine the depth information of the currently captured set image (the first image and/or the second image) from the depth information of the already-calculated depth image (for example, by directly using the previously calculated depth image's depth information as the depth information of the current set image, or, as described above, by calculating it from the previously calculated depth information of the previous depth image and the relative motion between the acquisition viewpoint and the target space). When a preset deblurring algorithm, such as one based on a point spread function, is applied, the determined depth information of each pixel of the set image is taken into account, so that the corresponding pixels of the set image are deblurred differently according to their different depth information.
  • For the depth information of the previously captured images, refer to the description under S52 above.
  • Continuing the example of S523, the image processing apparatus may deblur only the areas of the set image where the concentration of the searched pixels exceeds the set ratio.
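  • To make the depth-matched deblurring concrete, the sketch below deconvolves each depth layer with its own point spread function instead of assuming one global blur. Here `psf_for_depth` is a hypothetical caller-supplied mapping from depth to PSF, and Wiener deconvolution is just one choice of the preset deblurring algorithm mentioned above.

```python
import numpy as np

def wiener_deblur(img, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

def depth_matched_deblur(img, depth, psf_for_depth, n_layers=4):
    """Split the set image into depth layers and deblur each layer with the
    PSF of its representative depth, so nearer and farther objects receive
    different deblurring, matching the depth information."""
    out = img.astype(np.float64).copy()
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_layers + 1))
    for i in range(n_layers):
        mask = (depth >= edges[i]) & (depth <= edges[i + 1])
        if mask.any():
            layer = wiener_deblur(img, psf_for_depth(0.5 * (edges[i] + edges[i + 1])))
            out[mask] = layer[mask]
    return out
```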
  • S54: Perform pixel point matching on the first image and the second image. The first image and the second image may be pixel-matched according to an existing algorithm to obtain the robust pixel points on the first image and the second image.
  • S55: Synthesize an HDR image from the matched first image and second image. For example, according to a set synthesis algorithm, the image processing device calculates the image data of each pixel in the HDR image from the image data of the matched pixel pairs in the first image and the second image, obtaining one frame of HDR image for the viewpoint.
  • In this embodiment, the image processing device deblurs the captured images before synthesizing the HDR image, which improves the sharpness of the HDR image and the accuracy of the HDR image data; after deblurring, image data of the captured images rendered unusable by lighting problems is avoided, which in turn improves the adaptability of image acquisition to complex lighting.
  • Further, when depth calculation is performed using the HDR image, the accuracy of the depth calculation can be further improved.
  • Moreover, deblurring removes the effect of motion on image acquisition, so HDR image synthesis and depth calculation that adapt to a moving state are realized.
  • The image processing apparatus of the above method can be applied to an in-vehicle system: an image collector is mounted on a vehicle to capture images of the vehicle's surroundings, and the image processing apparatus in the in-vehicle system obtains the images captured by the collector and executes the method of the present invention to obtain depth information of the surroundings or the corresponding HDR images. Since each frame carries a large amount of image data, the big-data advantage of images enables the in-vehicle system to obtain more effective data.
  • FIG. 7 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present invention.
  • The image processing apparatus can perform the steps of the above methods; for related content, refer to the detailed descriptions in the above methods, which are not repeated here.
  • the image processing device 70 includes a processor 71 and a memory 72 connected to the processor 71.
  • The memory 72 is used to store computer instructions and data; the computer instructions are executed by the processor 71.
  • The processor 71 executes the computer instructions and can be used to perform at least one of the following first and second aspects.
  • First aspect:
  • The processor 71 acquires a plurality of HDR images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space;
  • a depth image of the target space is calculated using image data of the plurality of HDR images.
  • Optionally, the processor 71 acquiring the plurality of high dynamic range (HDR) images of the multiple viewpoints includes: performing pixel point matching on the plurality of images of different exposure times of each viewpoint; and synthesizing, according to the pixel point matching result, the plurality of images of each viewpoint into the HDR image of that viewpoint.
  • Further, before the pixel point matching of the plurality of images of different exposure times of each viewpoint, the processor 71 is further configured to: perform, according to the depth information in the already-calculated depth image of the target space, deblurring processing matched to the depth information on at least one of the plurality of images of at least one viewpoint.
  • Further, before that deblurring processing, the processor 71 is further configured to: calculate depth information of the at least one image according to the depth information of the already-calculated depth image of the target space, and determine the blur coefficient of each pixel of the at least one image according to the depth information of the at least one image; find the pixels in the image whose blur coefficient is greater than a set value; and, when the pixels whose blur coefficient is greater than the set value satisfy a set pixel point condition, perform, according to the depth information in the already-calculated depth image of the target space, the deblurring processing matched to the depth information on at least one of the plurality of images of the at least one viewpoint.
  • Optionally, the processor 71 calculating the depth image of the target space using the image data of the plurality of HDR images includes: acquiring the pixels of the plurality of images of each viewpoint whose matching degree exceeds a preset value, as the robust pixel points of the corresponding viewpoint; matching the robust pixel points between the HDR images of the multiple viewpoints; determining, according to the matching relationship of the robust pixel points and the positional relationship between the robust pixel points and the other pixels in the HDR image of the corresponding viewpoint, the matching relationship of the other pixels in the HDR images of the multiple viewpoints; and further calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  • Second aspect:
  • the processor 71 is configured to acquire a plurality of images of different exposure times, wherein the plurality of images are all collected in the same target space at the same viewpoint;
  • Optionally, before the deblurring of at least one of the plurality of images, the processor 71 is further configured to: determine whether at least one of the plurality of images satisfies a deblurring condition; and, if the deblurring condition is satisfied, then perform deblurring processing on at least one of the plurality of images.
  • Further, the processor 71 determining whether at least one of the plurality of images satisfies the deblurring condition includes: calculating, according to the depth information of the already-calculated depth image of the target space, the depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and determining, according to that depth information, the blur coefficient of each pixel of the at least one image; finding the pixels in the at least one image whose blur coefficient is greater than a set value; and, when the pixels in the at least one image whose blur coefficient is greater than the set value satisfy a set pixel point condition, determining that the at least one image satisfies the deblurring condition.
  • the processor 71 is further configured to calculate a depth image of the target space according to image data of the HDR images of the plurality of viewpoints acquired by the foregoing steps.
  • Further, the processor 71 calculating the depth image of the target space according to the image data of the HDR images of the multiple viewpoints acquired by the above steps may include: acquiring, for each viewpoint, the pixels of the plurality of images used to synthesize the HDR image whose matching degree exceeds a preset value, as the robust pixel points of the corresponding viewpoint; matching the robust pixel points between the HDR images of the multiple viewpoints acquired by the above steps; determining by calculation, according to the matching relationship of the robust pixel points and the positional relationship between the robust pixel points and the other pixels in the HDR image of the corresponding viewpoint, the matching relationship of the other pixels in the HDR images of the multiple viewpoints; and calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  • Optionally, in combination with the first and/or second aspect, the image processing apparatus 70 further includes an image collector 73 for capturing images, for example, capturing multiple frames at different times of a target space in relative motion with it, and sending the images to the memory 72; the processor 71 is also used to obtain the first image and the second image from the memory 72. In an embodiment in which the image processing device is used to calculate depth images, the image collector 73 may include a first image collector and a second image collector disposed at different viewpoints, each capturing one frame of the same target space every set interval.
  • The present invention also provides a non-volatile storage medium storing computer instructions executable by a processor, the computer instructions being used to perform the above method embodiments; specifically, it may be the memory 72 described above.
  • With the above solutions, the image processing device does not calculate depth information directly from the captured images but from the HDR images obtained by processing them, which improves the accuracy of the depth calculation for the target space.
  • Further, during depth calculation, the matching relationship of the high-matching-degree robust pixel points determined in the synthesized HDR images can be used to determine the matching relationship between the other pixels related to the robust pixel positions in the HDR images of different viewpoints, without using the entire HDR image for pixel matching, which reduces the amount of depth computation.
  • In addition, when synthesizing an HDR image, the captured images can first be deblurred, which improves the sharpness of the HDR image and the accuracy of the HDR image data; after deblurring, image data rendered unusable by lighting problems is avoided, improving the adaptability of image acquisition to complex lighting. Moreover, deblurring removes the effect of motion on image acquisition, so HDR image synthesis and depth calculation that adapt to a moving state are realized.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device implementations described above are merely illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the methods of the various embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an image processing method, an image processing device, and a storage medium. The depth calculation method includes: acquiring a plurality of HDR images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space; and calculating a depth image of the target space using the image data of the plurality of HDR images. In this way, the accuracy of the depth information in the calculated depth image can be improved.


Claims (26)

  1. An image processing method, comprising:
    acquiring a plurality of high dynamic range (HDR) images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space;
    calculating a depth image of the target space using image data of the plurality of HDR images.
  2. The method according to claim 1, wherein the acquiring a plurality of HDR images from multiple viewpoints comprises:
    performing pixel point matching on a plurality of images of different exposure times of each viewpoint;
    synthesizing, according to the pixel point matching result, the plurality of images of each viewpoint into the HDR image of the viewpoint.
  3. The method according to claim 2, further comprising, before the performing pixel point matching on the plurality of images of different exposure times of each viewpoint:
    performing, according to depth information in an already-calculated depth image of the target space, deblurring processing matched to the depth information on at least one of the plurality of images of at least one viewpoint.
  4. The method according to claim 3, further comprising, before the performing, according to the depth information in the already-calculated depth image of the target space, the deblurring processing matched to the depth information on at least one of the plurality of images of at least one viewpoint:
    calculating depth information of the at least one image according to the depth information of the already-calculated depth image of the target space, and determining a blur coefficient of each pixel of the at least one image according to the depth information of the at least one image;
    finding pixels in the at least one image whose blur coefficient is greater than a set value;
    when the pixels whose blur coefficient is greater than the set value satisfy a set pixel point condition, performing, according to the depth information in the already-calculated depth image of the target space, the deblurring processing matched to the depth information on at least one of the plurality of images of the at least one viewpoint.
  5. The method according to claim 2, wherein the calculating a depth image of the target space using the image data of the plurality of HDR images comprises:
    acquiring pixels of the plurality of images of each viewpoint whose matching degree exceeds a preset value, as robust pixel points of the corresponding viewpoint;
    matching the robust pixel points between the HDR images of the multiple viewpoints;
    determining, according to a matching relationship of the robust pixel points and a positional relationship between the robust pixel points and other pixels in the HDR image of the corresponding viewpoint, a matching relationship of the other pixels in the HDR images of the multiple viewpoints;
    further calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  6. The method according to claim 2, wherein the plurality of images are multiple frames captured by an image collector at different times of a target space in relative motion with the image collector.
  7. An image processing device, comprising a processor and a memory connected to each other;
    the memory is configured to store computer instructions and data;
    the processor executes the computer instructions to:
    acquire a plurality of high dynamic range (HDR) images from multiple viewpoints, wherein the plurality of HDR images are all obtained by capturing the same target space;
    calculate a depth image of the target space using image data of the plurality of HDR images.
  8. The image processing device according to claim 7, wherein the processor acquiring the plurality of HDR images from multiple viewpoints comprises:
    performing pixel point matching on a plurality of images of different exposure times of each viewpoint;
    synthesizing, according to the pixel point matching result, the plurality of images of each viewpoint into the HDR image of the viewpoint.
  9. The image processing device according to claim 8, wherein before the performing pixel point matching on the plurality of images of different exposure times of each viewpoint, the processor is further configured to:
    perform, according to depth information in an already-calculated depth image of the target space, deblurring processing matched to the depth information on at least one of the plurality of images of at least one viewpoint.
  10. The image processing device according to claim 9, wherein before the performing, according to the depth information in the already-calculated depth image of the target space, the deblurring processing matched to the depth information on at least one of the plurality of images of at least one viewpoint, the processor is further configured to:
    calculate depth information of the at least one image according to the depth information of the already-calculated depth image of the target space, and determine a blur coefficient of each pixel of the at least one image according to the depth information of the at least one image;
    find pixels in the image whose blur coefficient is greater than a set value;
    when the pixels whose blur coefficient is greater than the set value satisfy a set pixel point condition, perform, according to the depth information in the already-calculated depth image of the target space, the deblurring processing matched to the depth information on at least one of the plurality of images of the at least one viewpoint.
  11. The image processing device according to claim 8, wherein the processor calculating the depth image of the target space using the image data of the plurality of HDR images comprises:
    acquiring pixels of the plurality of images of each viewpoint whose matching degree exceeds a preset value, as robust pixel points of the corresponding viewpoint;
    matching the robust pixel points between the HDR images of the multiple viewpoints;
    determining, according to a matching relationship of the robust pixel points and a positional relationship between the robust pixel points and other pixels in the HDR image of the corresponding viewpoint, a matching relationship of the other pixels in the HDR images of the multiple viewpoints;
    further calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  12. The image processing device according to claim 8, further comprising an image collector configured to capture the multiple frames at different times of a target space in relative motion with the image collector.
  13. A non-volatile storage medium storing computer instructions executable by a processor, the computer instructions being used to perform the image processing method according to any one of claims 1 to 6.
  14. An image processing method, comprising:
    acquiring a plurality of images of different exposure times, wherein the plurality of images are all captured of the same target space at the same viewpoint;
    calculating, using depth information of an already-calculated depth image of the target space, depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and deblurring at least one of the plurality of images according to that depth information;
    performing pixel point matching on the deblurred plurality of images;
    synthesizing the plurality of images according to the pixel point matching result to obtain a high dynamic range (HDR) image.
  15. The method according to claim 14, further comprising, before the deblurring of at least one of the plurality of images:
    determining whether at least one of the plurality of images satisfies a deblurring condition;
    if the deblurring condition is satisfied, then deblurring at least one of the plurality of images.
  16. The method according to claim 15, wherein the determining whether at least one of the plurality of images satisfies a deblurring condition comprises:
    calculating, according to the depth information of the already-calculated depth image of the target space, the depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and determining a blur coefficient of each pixel of the at least one image according to that depth information;
    finding pixels in the at least one image whose blur coefficient is greater than a set value;
    when the pixels in the at least one image whose blur coefficient is greater than the set value satisfy a set pixel point condition, determining that the at least one image satisfies the deblurring condition.
  17. The method according to claim 14, further comprising:
    calculating a depth image of the target space according to image data of HDR images of multiple viewpoints acquired by the above steps.
  18. The method according to claim 17, wherein the calculating a depth image of the target space according to the image data of the HDR images of the multiple viewpoints acquired by the above steps comprises:
    acquiring, for each viewpoint, pixels of the plurality of images used to synthesize the HDR image whose matching degree exceeds a preset value, as robust pixel points of the corresponding viewpoint;
    matching the robust pixel points between the HDR images of the multiple viewpoints acquired by the above steps;
    determining by calculation, according to a matching relationship of the robust pixel points and a positional relationship between the robust pixel points and other pixels in the HDR image of the corresponding viewpoint, a matching relationship of the other pixels in the HDR images of the multiple viewpoints;
    calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  19. The method according to claim 14, wherein the plurality of images are multiple frames captured by an image collector at different times of a target space in relative motion with the image collector.
  20. An image processing device, comprising a processor and a memory connected to each other;
    the memory is configured to store computer instructions and data;
    the processor executes the computer instructions to:
    acquire a plurality of images of different exposure times, wherein the plurality of images are all captured of the same target space at the same viewpoint;
    calculate, using depth information of an already-calculated depth image of the target space, depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and deblur at least one of the plurality of images according to that depth information;
    perform pixel point matching on the deblurred plurality of images;
    synthesize the plurality of images according to the pixel point matching result to obtain a high dynamic range (HDR) image.
  21. The image processing device according to claim 20, wherein before the deblurring of at least one of the plurality of images, the processor is further configured to:
    determine whether at least one of the plurality of images satisfies a deblurring condition;
    if the deblurring condition is satisfied, then deblur at least one of the plurality of images.
  22. The image processing device according to claim 21, wherein the processor determining whether at least one of the plurality of images satisfies a deblurring condition comprises:
    calculating, according to the depth information of the already-calculated depth image of the target space, the depth information of at least one of the plurality of images currently captured of the target space from the viewpoint, and determining a blur coefficient of each pixel of the at least one image according to that depth information;
    finding pixels in the at least one image whose blur coefficient is greater than a set value;
    when the pixels in the at least one image whose blur coefficient is greater than the set value satisfy a set pixel point condition, determining that the at least one image satisfies the deblurring condition.
  23. The image processing device according to claim 20, wherein the processor is further configured to:
    calculate a depth image of the target space according to image data of HDR images of multiple viewpoints acquired by the above steps.
  24. The image processing device according to claim 23, wherein the processor calculating the depth image of the target space according to the image data of the HDR images of the multiple viewpoints acquired by the above steps comprises:
    acquiring, for each viewpoint, pixels of the plurality of images used to synthesize the HDR image whose matching degree exceeds a preset value, as robust pixel points of the corresponding viewpoint;
    matching the robust pixel points between the HDR images of the multiple viewpoints acquired by the above steps;
    determining by calculation, according to a matching relationship of the robust pixel points and a positional relationship between the robust pixel points and other pixels in the HDR image of the corresponding viewpoint, a matching relationship of the other pixels in the HDR images of the multiple viewpoints;
    calculating, according to the matching relationship of the pixels between the HDR images of the multiple viewpoints, the depth image of the target space corresponding to the HDR images of the multiple viewpoints.
  25. The image processing device according to claim 20, further comprising an image collector configured to capture multiple frames at different times of a target space in relative motion with the image collector.
  26. A non-volatile storage medium storing computer instructions executable by a processor, the computer instructions being used to perform the HDR image synthesis method according to any one of claims 14 to 19.
PCT/CN2017/084736 2017-05-17 2017-05-17 Image processing method, image processing device and storage medium WO2018209603A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/084736 WO2018209603A1 (zh) 2017-05-17 2017-05-17 Image processing method, image processing device and storage medium
CN201780034126.2A CN109314776B (zh) 2017-05-17 2017-05-17 Image processing method, image processing device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/084736 WO2018209603A1 (zh) 2017-05-17 2017-05-17 Image processing method, image processing device and storage medium

Publications (1)

Publication Number Publication Date
WO2018209603A1 true WO2018209603A1 (zh) 2018-11-22

Family

ID=64273047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/084736 WO2018209603A1 (zh) 2017-05-17 2017-05-17 Image processing method, image processing device and storage medium

Country Status (2)

Country Link
CN (1) CN109314776B (zh)
WO (1) WO2018209603A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488964B (zh) * 2020-12-18 2024-04-16 深圳市镜玩科技有限公司 针对滑动列表的图像处理方法、相关装置、设备及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002296494A (ja) * 2001-03-30 2002-10-09 Minolta Co Ltd 結像位置検出プログラムおよびカメラ
US20080095408A1 (en) * 2006-10-23 2008-04-24 Sanyo Electric Co., Ltd. Imaging apparatus and method thereof
CN101616260A (zh) * 2008-06-27 2009-12-30 索尼株式会社 信号处理装置、信号处理方法、程序和记录介质
CN101916455A (zh) * 2010-07-01 2010-12-15 清华大学 一种高动态范围纹理三维模型的重构方法及装置
CN103026171A (zh) * 2011-05-27 2013-04-03 松下电器产业株式会社 图像处理装置及图像处理方法
CN104299268A (zh) * 2014-11-02 2015-01-21 北京航空航天大学 一种高动态范围成像的火焰三维温度场重建方法
CN104935911A (zh) * 2014-03-18 2015-09-23 华为技术有限公司 一种高动态范围图像合成的方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120056982A1 (en) * 2010-09-08 2012-03-08 Microsoft Corporation Depth camera based on structured light and stereo vision
US8274552B2 (en) * 2010-12-27 2012-09-25 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
CN102436639B (zh) * 2011-09-02 2013-12-04 清华大学 一种去除图像模糊的图像采集方法和图像采集系统
JP5843599B2 (ja) * 2011-12-19 2016-01-13 キヤノン株式会社 画像処理装置および撮像装置並びにその方法
US9294754B2 (en) * 2012-02-03 2016-03-22 Lumentum Operations Llc High dynamic range and depth of field depth camera
US9571818B2 (en) * 2012-06-07 2017-02-14 Nvidia Corporation Techniques for generating robust stereo images from a pair of corresponding stereo images captured with and without the use of a flash device
CN105959578A (zh) * 2016-07-18 2016-09-21 四川君逸数码科技股份有限公司 一种基于智能视频技术的宽动态高清摄像机

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002296494A (ja) * 2001-03-30 2002-10-09 Minolta Co Ltd 結像位置検出プログラムおよびカメラ
US20080095408A1 (en) * 2006-10-23 2008-04-24 Sanyo Electric Co., Ltd. Imaging apparatus and method thereof
CN101616260A (zh) * 2008-06-27 2009-12-30 索尼株式会社 信号处理装置、信号处理方法、程序和记录介质
CN101916455A (zh) * 2010-07-01 2010-12-15 清华大学 一种高动态范围纹理三维模型的重构方法及装置
CN103026171A (zh) * 2011-05-27 2013-04-03 松下电器产业株式会社 图像处理装置及图像处理方法
CN104935911A (zh) * 2014-03-18 2015-09-23 华为技术有限公司 一种高动态范围图像合成的方法及装置
CN104299268A (zh) * 2014-11-02 2015-01-21 北京航空航天大学 一种高动态范围成像的火焰三维温度场重建方法

Also Published As

Publication number Publication date
CN109314776B (zh) 2021-02-26
CN109314776A (zh) 2019-02-05

Similar Documents

Publication Publication Date Title
US11877086B2 (en) Method and system for generating at least one image of a real environment
KR102480245B1 (ko) 패닝 샷들의 자동 생성
US8941750B2 (en) Image processing device for generating reconstruction image, image generating method, and storage medium
KR101699919B1 (ko) 다중 노출 퓨전 기반에서 고스트 흐림을 제거한 hdr 영상 생성 장치 및 방법
KR101643607B1 (ko) 영상 데이터 생성 방법 및 장치
JP5156837B2 (ja) 領域ベースのフィルタリングを使用する奥行マップ抽出のためのシステムおよび方法
WO2019105154A1 (en) Image processing method, apparatus and device
EP3216216B1 (en) Methods and systems for multi-view high-speed motion capture
CN105282421B (zh) 一种去雾图像获取方法、装置及终端
JP5197279B2 (ja) コンピュータによって実施されるシーン内を移動している物体の3d位置を追跡する方法
WO2019042216A1 (zh) 图像虚化处理方法、装置及拍摄终端
CN110324532B (zh) 一种图像虚化方法、装置、存储介质及电子设备
JP5725953B2 (ja) 撮像装置及びその制御方法、並びに情報処理装置
CN110349163B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
KR101853269B1 (ko) 스테레오 이미지들에 관한 깊이 맵 스티칭 장치
KR20140118031A (ko) 영상 처리 장치 및 방법
WO2015156149A1 (ja) 画像処理装置および画像処理方法
JP3990271B2 (ja) 簡易ステレオ画像入力装置、方法、プログラム、および記録媒体
JP7374582B2 (ja) 画像処理装置、画像生成方法およびプログラム
JP6270413B2 (ja) 画像処理装置、撮像装置、および画像処理方法
WO2018209603A1 (zh) 图像处理方法、图像处理设备及存储介质
CN107845108B (zh) 一种光流值计算方法、装置及电子设备
CN116266356A (zh) 全景视频转场渲染方法、装置和计算机设备
CN114494445A (zh) 一种视频合成方法、装置及电子设备
JP2018147241A (ja) 画像処理装置、画像処理方法、及び画像処理プログラム

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910124

Country of ref document: EP

Kind code of ref document: A1