WO2023225825A1 - Method and apparatus for generating a position difference map, electronic device, chip and medium - Google Patents


Info

Publication number
WO2023225825A1
WO2023225825A1 · PCT/CN2022/094569 · CN2022094569W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
position difference
image
pixels
value
Prior art date
Application number
PCT/CN2022/094569
Other languages
English (en)
Chinese (zh)
Inventor
李超
胡毅
Original Assignee
上海玄戒技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海玄戒技术有限公司 filed Critical 上海玄戒技术有限公司
Priority to PCT/CN2022/094569 priority Critical patent/WO2023225825A1/fr
Priority to CN202280004634.7A priority patent/CN116438568A/zh
Publication of WO2023225825A1 publication Critical patent/WO2023225825A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a position difference map generation method and device, electronic equipment, chips and media.
  • an optical flow map between different images taken at different times can be obtained, so as to predict the movement of a moving object based on the optical flow map and then adjust the image based on the predicted movement.
  • the present disclosure provides a position difference map generation method and device, electronic equipment, chips and media, which can accurately generate a position difference map between two images.
  • a method for generating a location difference map including:
  • the initial position difference map contains at least one hole pixel with an unknown position difference value
  • the initial position difference map is divided into super pixels, where each super pixel obtained by the division contains multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where it is located.
  • a location difference map generating device including:
  • a calculation unit, which performs position difference calculation on the first image and the second image to obtain an initial position difference map;
  • the initial position difference map contains at least one hole pixel with an unknown position difference value;
  • a dividing unit, which divides the initial position difference map into super pixels, where each super pixel obtained by the division contains multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where it is located.
  • a position difference map generation method is provided, which is applied to an image processor and includes:
  • the initial position difference map is divided into super pixels, where each super pixel obtained by the division contains multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where it is located.
  • an electronic device including:
  • a memory, used to store instructions executable by the processor;
  • the processor implements the method described in the first aspect by running the executable instructions.
  • a computer-readable storage medium is provided, computer instructions are stored thereon, and when the instructions are executed by a processor, the steps of the method described in the first aspect are implemented.
  • position difference calculation can be performed on the two images to obtain an initial position difference map.
  • the initial position difference map can be divided into super pixels, and the position difference values of the hole pixels contained in each super pixel can be completed based on the position difference values of the other pixels in that super pixel. It should be understood that since the present disclosure completes the position difference values of the hole pixels based on the divided super pixels, it avoids the problem in the related technology that hole pixels remaining in the generated position difference map affect the final imaging.
  • Figure 1 is a flowchart of a method for generating a position difference map according to an exemplary embodiment of the present disclosure;
  • Figure 2 is a flowchart of a method for generating a disparity map according to an exemplary embodiment of the present disclosure;
  • Figure 3 is a schematic diagram of an initial disparity map according to an exemplary embodiment of the present disclosure;
  • Figure 4 is a schematic diagram of a histogram of an initial disparity map according to an exemplary embodiment of the present disclosure;
  • Figure 5 is a schematic diagram of an adjusted initial disparity map according to an exemplary embodiment of the present disclosure;
  • Figure 6 is a schematic diagram of a super-pixel division according to an exemplary embodiment of the present disclosure;
  • Figure 7 is a schematic diagram of a brightness map of a main image according to an exemplary embodiment of the present disclosure;
  • Figure 8 is a schematic diagram after adjusting the pixels included in a super pixel according to an exemplary embodiment of the present disclosure;
  • Figure 9 is a block diagram of a position difference map generating device according to an exemplary embodiment of the present disclosure;
  • Figure 10 is a block diagram of another position difference map generating device according to an exemplary embodiment of the present disclosure;
  • Figure 11 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • although the terms first, second, third, etc. may be used to describe various information in the embodiments of the present disclosure, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be called second information, and similarly, the second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
  • the terms "greater than", "less than", "higher than" and "lower than" are used herein when characterizing size relationships. However, those skilled in the art can understand that "greater than" also covers the meaning of "greater than or equal to", "less than" also covers the meaning of "less than or equal to", "higher than" also covers the meaning of "higher than or equal to", and "lower than" also covers the meaning of "lower than or equal to".
  • an image composed of position difference values between various pixel points of different images may be called a position difference map.
  • the accuracy of the generated position difference map often determines the final imaging effect.
  • the two most typical position difference maps are: disparity map and optical flow map.
  • the disparity map refers to the image used to represent the position difference between the images captured by different cameras
  • the optical flow map refers to an image used to represent the position difference of the same moving object between images captured at different times.
  • the multi-camera imaging principle can be summarized as follows: images captured by multiple cameras are fused to supplement the details in the image, thereby improving the image quality.
  • the position difference of the same picture content in different images is represented by the above-mentioned disparity map.
  • disparity calculation can be performed based on the images captured by the two cameras to obtain a disparity map between the two images. Since the position difference between the two cameras is fixed at the factory and is known, after the disparity map is obtained, the depth information of each pixel in the original image can be derived from the position difference between the two cameras and the disparity value of each pixel in the disparity map, and the two images can then be fused based on the depth information. Similarly, when at least three cameras are used to capture images, the above operations can be performed on any two of the captured images to obtain the depth information of the corresponding pixels before image fusion; this is not repeated here.
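The depth derivation above follows the standard pinhole-stereo relation Z = f·B/d. The sketch below (not part of the disclosure; symbol names such as baseline and focal length are assumptions, since the text does not spell them out) shows the conversion:

```python
# Illustrative only: convert a disparity value (pixels) to depth, given
# the factory-calibrated camera baseline and the focal length in pixels.
def disparity_to_depth(disparity_px, baseline_m, focal_px):
    """Depth in metres from disparity in pixels: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 50 mm baseline, 1000 px focal length, 10 px disparity -> 5 m.
depth = disparity_to_depth(10.0, 0.05, 1000.0)
```

Note that a hole pixel (unknown disparity) yields no depth at all, which is why the completion step below matters for fusion quality.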
  • a disparity map is usually generated by performing disparity calculation on two captured images.
  • the principle of disparity calculation is: match pixels representing the same picture content in two images, and use the distance between the two matched pixels as the disparity value of the corresponding pixel.
  • to generate a disparity map in this way, the content of the two images must be matched accurately.
  • however, not every pixel in the two images can be accurately matched: it often happens that individual pixels in one image cannot be matched to corresponding pixels in the other image.
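The matching principle described above can be sketched as a toy one-dimensional block matcher (illustrative only, not the BM/SGM algorithms named later; the cost threshold `max_cost` is an assumed parameter). Pixels whose best match is still too costly are left as holes, mirroring the unmatched pixels:

```python
# Toy scanline block matching: for each pixel of `left`, search up to
# `max_disp` pixels to the left in `right` using a sum-of-absolute-
# differences window; None marks a hole (no acceptable match found).
def scanline_disparity(left, right, window=1, max_disp=4, max_cost=10):
    n = len(left)
    result = []
    for x in range(n):
        best_d, best_cost = None, max_cost + 1
        for d in range(0, max_disp + 1):
            if x - d < 0:
                break
            lo, hi = max(0, x - window), min(n, x + window + 1)
            cost = sum(abs(left[i] - right[i - d])
                       for i in range(lo, hi) if 0 <= i - d)
            if cost < best_cost:
                best_d, best_cost = d, cost
        result.append(best_d if best_cost <= max_cost else None)
    return result

# The right row is the left row shifted by one pixel; the border pixel
# cannot be matched and becomes a hole.
disp = scanline_disparity([10, 20, 30, 40, 50], [20, 30, 40, 50, 60])
# -> [None, 1, 1, 1, 1]
```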
  • the method of generating an optical flow map in the related technology is similar to the method of generating a disparity map, except that the at least two images used to generate the optical flow map are images of the same subject taken by the same camera at different times, rather than images of the same subject taken by different cameras.
  • there are hole pixels with unknown optical flow values in the optical flow map obtained by the related technology, which leads to poor quality of the adjusted image after any image is adjusted based on the optical flow map.
  • the present disclosure proposes a position difference map generation method to avoid the problem in related technologies that the final imaging based on the position difference map has poor quality due to the presence of hole pixels in the position difference map.
  • Figure 1 illustrates a method for generating a position difference map according to an exemplary embodiment of the present disclosure. As shown in Figure 1, the method may include the following steps:
  • Step 102: Perform position difference calculation on the first image and the second image to obtain an initial position difference map; the initial position difference map contains at least one hole pixel with an unknown position difference value.
  • the present disclosure further completes the position difference values of the hole pixels in the obtained initial position difference map, so as to avoid hole pixels remaining in the final position difference map.
  • the initial position difference map can be divided into super pixels, where each super pixel obtained by the division includes multiple pixel points.
  • the position difference value of the hole pixel can be completed based on the position difference value of other pixels in the superpixel where each hole pixel is located.
  • this disclosure can refer to any two captured images as the first image and the second image, and perform position difference calculation on the two to obtain an initial position difference map.
  • any algorithm can be used for the position difference calculation; for example, the BM (Block Matching) algorithm or the SGM (Semi-Global Matching) algorithm can be used.
  • the above two algorithms are only illustrative.
  • the specific algorithm used for the position difference calculation can be determined by those skilled in the art according to actual needs, and the present disclosure does not limit this.
  • the first image and the second image can be dedistorted before performing disparity calculation.
  • any de-distortion algorithm can be used to de-distort the first image and the second image, and the present disclosure does not limit this.
  • the first image and the second image can also be image aligned to more accurately calculate the disparity of the images.
  • the above-mentioned completion of hole pixels can be applied in various scenarios.
  • different cameras can be used to capture images of the same subject to obtain a first image and a second image.
  • in this case, the position difference calculation performed on the first image and the second image may be a disparity calculation;
  • the calculated position difference map may be a disparity map. For another example, in a motion scene, the same camera can be used to capture images of the same subject at different times to obtain the first image and the second image.
  • the position difference calculation performed on the first image and the second image may be an optical flow calculation
  • the calculated position difference map may be an optical flow map.
  • the multi-camera scene and the motion scene above are only illustrative. The specific scene to which the technical solution of the present disclosure is applied can be determined by those skilled in the art according to actual needs, and the present disclosure does not limit this.
  • Step 104: Perform super-pixel division on the initial position difference map; each super pixel obtained by the division includes multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where it is located.
  • the meaning of superpixel is: a set of pixels composed of multiple pixels.
  • the process of superpixel division can also be regarded as: the process of assigning all pixel points in the initial position difference map to different pixel point sets.
  • the initial position difference map can be divided into super pixels according to a preset size, so that each divided super pixel contains a preset number of pixels.
  • super pixels can be divided with a size of "3×3", so that each divided super pixel contains 9 pixels.
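The grid division by a preset size can be sketched as follows (an illustration only; the function and variable names are assumptions, not from the disclosure):

```python
# Assign every pixel of an H x W map to a size x size grid cell, so each
# interior super pixel contains size*size pixels (9 for the 3x3 example).
def grid_superpixels(height, width, size=3):
    """Return a label map: labels[y][x] is the super-pixel index of (y, x)."""
    cells_per_row = (width + size - 1) // size   # ceil division
    return [[(y // size) * cells_per_row + (x // size)
             for x in range(width)]
            for y in range(height)]

labels = grid_superpixels(6, 6, size=3)
# Pixels (0,0) and (2,2) fall in super pixel 0; (0,3) starts super pixel 1.
```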
  • this example is only illustrative.
  • the specific size used for super-pixel division can be determined by those skilled in the art according to actual needs, and this disclosure does not limit this.
  • both the pixel value distribution of the image and the position difference value distribution of the position difference map show a certain degree of continuity, and the continuity of the two is usually similar.
  • the distribution of pixel values can reflect the distribution of position difference values to a certain extent.
  • the super pixels can also be adjusted based on the distribution of pixel values, so that there is a certain continuity between the pixels inside each adjusted super pixel, and the hole pixels inside the super pixels can thus be completed more accurately.
  • the pixel value distribution of the first image can be obtained, and the pixel points included in each superpixel can be adjusted according to the pixel value distribution.
  • the pixel value distribution can be used to characterize the flat areas and the edge areas in the first image, where a flat area refers to an area where the pixel values change relatively gently, and an edge area refers to an area where the pixel values change relatively rapidly.
  • the present disclosure can set a preset value as the standard for dividing flat areas and edge areas: pixels whose pixel difference from their neighborhood pixels is not greater than the preset value are determined to belong to a flat area, and pixels whose pixel difference from their neighborhood pixels is greater than the preset value are determined to belong to an edge area.
  • the pixels included in the superpixel can be adjusted so that the pixels included in the adjusted superpixel belong to the same flat area or the same edge area.
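The flat/edge criterion above can be sketched as a neighborhood comparison against the preset value (illustrative only; 4-neighborhood and the sample threshold are assumptions):

```python
# Label each pixel 'flat' or 'edge': a pixel is 'edge' when its absolute
# difference to any 4-neighbour exceeds the preset threshold.
def classify_flat_edge(img, threshold):
    h, w = len(img), len(img[0])
    labels = [['flat'] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(img[y][x] - img[ny][nx]) > threshold):
                    labels[y][x] = 'edge'
                    break
    return labels

img = [[10, 11, 90],
       [10, 12, 91],
       [11, 11, 92]]
labels = classify_flat_edge(img, threshold=5)
# Pixels adjacent to the jump from ~10 to ~90 are labelled 'edge'.
```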
  • in this way, the hole pixels contained in a super pixel are completed based on the position difference values of pixels that are continuous with them, so the position difference values completed for the hole pixels are more accurate.
  • a brightness map of the first image can be obtained as the above-mentioned pixel value distribution.
  • the pixels contained in each divided super pixel can be adjusted, so that the brightness difference between each pixel in the same adjusted super pixel and its neighboring pixels does not exceed a preset brightness value.
  • the brightness map represents the brightness distribution of the first image, which is equivalent to determining the distribution of position difference values based on the distribution of brightness values.
  • the distribution of position difference values can be determined through the distribution of other types of pixel values, such as RGB values.
  • the first image refers to the image used as the reference image when calculating the position difference. For example, in a multi-camera scenario, since the imaging quality of the main camera is higher, the image captured by the main camera is usually used as the reference image.
  • the positional difference values of the hole pixels contained in the superpixels can be completed based on the positional difference values of the pixels contained in the superpixels.
  • This disclosure can use multiple methods to complete hole pixels.
  • for any hole pixel, the position difference values of the other pixels in the super pixel where the hole pixel is located can be obtained first, and the average of the obtained position difference values can be calculated and used as the position difference value of the hole pixel.
  • weight values can also be assigned to the above-mentioned other pixels; in this case, the average of the position difference values of the other pixels is a weighted average.
  • weight values can be set for the above-mentioned other pixels in various ways. For example, a weight value can be set according to the distance between each pixel and the hole pixel, where the weight value of any pixel is negatively correlated with the distance; that is, the closer a pixel in the super pixel is to the hole pixel, the higher its weight value.
  • by setting the weight value of each pixel in this way, the position difference value of the hole pixel can be determined more accurately.
  • the method of setting the weight value for each pixel according to its distance from the hole pixel is only illustrative. Those skilled in the art can also set the weight values in other ways according to actual needs; for example, a weight value can be set for each pixel according to its distance from the center of the super pixel to which it belongs, and this relationship can also be a negative correlation, which is not limited by the present disclosure.
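The distance-weighted completion above can be sketched as follows (illustrative only; the weighting function 1/(1+d) is one assumed choice of a weight negatively correlated with distance):

```python
import math

# Fill one hole pixel with a weighted average of the known position
# difference values in its super pixel; nearer pixels weigh more.
def fill_hole_weighted(values, hole_yx):
    """`values` is a super pixel as a 2D list, None marking holes."""
    hy, hx = hole_yx
    num = den = 0.0
    for y, row in enumerate(values):
        for x, v in enumerate(row):
            if v is None:
                continue
            w = 1.0 / (1.0 + math.hypot(y - hy, x - hx))
            num += w * v
            den += w
    return num / den if den else None

sp = [[4, 4, None],
      [4, 8, 8],
      [None, 8, 8]]
filled = fill_hole_weighted(sp, (0, 2))  # a value between 4 and 8,
                                         # pulled toward the nearer 8s
```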
  • alternatively, the position difference values of the other pixels in the super pixel where the hole pixel is located can be obtained first, and the median of the obtained position difference values can be used as the position difference value of the hole pixel.
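The median variant is a one-liner in spirit (illustrative sketch; the example values are made up) and is less sensitive than the mean to an outlier left by a mismatch:

```python
import statistics

# Fill a hole pixel with the median of the known position difference
# values in its super pixel.
def fill_hole_median(values):
    known = [v for row in values for v in row if v is not None]
    return statistics.median(known) if known else None

sp = [[3, 3, None],
      [3, 4, 100],   # 100: a likely mismatched outlier
      [3, 4, 4]]
# Median is 3.5; the mean would be 15.5, dragged up by the outlier.
```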
  • the position difference values of pixels in the initial position difference map whose position difference values are not within the preset disparity range can also be adjusted to unknown, so that the corresponding pixels are converted into hole pixels.
  • the position difference value calculated from two pixels with the same picture content in the two images is usually not too large. If the position difference of a certain pixel is large and exceeds the preset range, it is likely that a matching error occurred when matching pixels based on the image content as described above. It can be seen that by converting pixels whose position difference values exceed the preset disparity range into hole pixels, the problem of position difference calculation errors caused by inaccurate pixel matching can be effectively avoided.
  • the preset disparity range is mostly an upper limit on the position difference value: when the position difference value of a pixel exceeds this upper limit, the corresponding pixel is converted into a hole pixel.
  • a histogram of the initial position difference map can be generated, and the pixels whose position difference value is higher than the preset position difference value can be converted into hole pixels.
  • the present disclosure can also determine pixels whose position difference value is smaller than a preset lower limit as mismatched pixels. In this case, a histogram of the initial position difference map can likewise be generated first, the pixels whose position difference values fall below the preset value can be determined based on the histogram, and those pixels can then be converted into hole pixels.
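The histogram-based screening can be sketched as follows (illustrative only; function names and the sample limit of 8, borrowed from the worked example later in the text, are assumptions):

```python
from collections import Counter

# Build a histogram of position difference values, then convert pixels
# outside the preset range into hole pixels (None) for later completion.
def prune_by_histogram(disp_map, upper=8, lower=None):
    hist = Counter(v for row in disp_map for v in row if v is not None)

    def keep(v):
        if v is None:
            return None
        if v > upper or (lower is not None and v < lower):
            return None  # out of range: convert to hole pixel
        return v

    pruned = [[keep(v) for v in row] for row in disp_map]
    return pruned, hist

disp = [[2, 3, 15],
        [3, 3, 2]]
pruned, hist = prune_by_histogram(disp, upper=8)
# 15 exceeds the limit and becomes a hole; hist[3] counts three pixels.
```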
  • median filtering can also be performed on the first image to obtain a de-noised first image; the pixel value continuity of each pixel is determined based on the filtered first image, and a secondary completion of position difference values is then performed, based on this pixel value continuity, on the position difference map whose hole pixels have already been completed.
  • for example, an algorithm based on the bilateral operator principle, such as FastBilateralSolver, can be used: the filtered first image and the completed position difference map are used as the inputs of the bilateral operator algorithm.
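The following is not the FastBilateralSolver itself, but a much-simplified joint bilateral smoothing pass illustrating the same principle: the filtered first image guides the refinement so that position difference values do not bleed across image edges. All parameter names and values are assumptions:

```python
import math

# Joint bilateral smoothing: each output value averages its neighbours,
# weighted by spatial distance and by similarity in the guide image.
def joint_bilateral(disp, guide, radius=1, sigma_s=1.0, sigma_r=10.0):
    h, w = len(disp), len(disp[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    dg = guide[y][x] - guide[ny][nx]
                    wr = math.exp(-(dg * dg) / (2 * sigma_r ** 2))
                    num += ws * wr * disp[ny][nx]
                    den += ws * wr
            out[y][x] = num / den
    return out

guide = [[10, 10, 200],
         [10, 10, 200]]
disp  = [[5, 6, 40],
         [5, 5, 40]]
smoothed = joint_bilateral(disp, guide)
# The guide's sharp edge keeps the 40s from bleeding into the 5/6 region.
```

In practice a production implementation would use a real solver (e.g. the one in OpenCV's ximgproc module) rather than this brute-force loop.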
  • the technical solution of the present disclosure can be applied to any type of electronic equipment.
  • the electronic equipment can be a mobile terminal such as a smartphone or tablet computer, or a fixed terminal such as a smart TV or PC (Personal Computer).
  • which type of electronic device is specifically used as the execution subject of the technical solution of the present disclosure can be determined by those skilled in the art according to actual needs, and the present disclosure does not limit this.
  • a distortion correction component, an image alignment component, a disparity calculation component, a super-pixel adjustment component, a position difference value completion component, etc. can be deployed in an electronic device to implement each step of the technical solution of the present disclosure.
  • position difference calculation can be performed on the first image and the second image to obtain an initial position difference map containing at least one hole pixel. The present disclosure can further divide the initial position difference map into super pixels, and complete the position difference value of each hole pixel based on the position difference values of the other pixels in the super pixel where it is located.
  • the pixel value distribution of an image is usually close to the position difference value distribution of its position difference map. Therefore, after the super pixels are obtained by division based on the preset size, the present disclosure can further obtain the pixel value distribution of the first image and adjust the pixels contained in the divided super pixels based on that distribution, so that the pixels included in each adjusted super pixel are located in the same flat area or the same edge area. It should be understood that the position difference values of pixels located in the same flat area or the same edge area show a certain degree of continuity.
  • therefore, after the pixels in the same super pixel are adjusted to the same flat area or the same edge area, completing the position difference values of the hole pixels based on the position difference values of the other pixels in that super pixel can rely on this continuity to improve the accuracy of the completion.
  • since the central processing unit usually has a high load and image processing tasks take up many resources, in order to improve the efficiency of image processing, technicians usually deploy an independent image processor to complete image processing tasks.
  • the present disclosure also proposes a disparity map generation method applied to an image processor.
  • in this method, most operations are consistent with the position difference map generation method described above, except that the image processor is described as the execution subject.
  • Step 1: Receive a first image generated by a first image sensor and a second image generated by a second image sensor; the first image and the second image are obtained by the first camera to which the first image sensor belongs and the second camera to which the second image sensor belongs respectively photographing the same subject.
  • an electronic device may be equipped with an image sensor for image collection, so as to generate a first image and a second image corresponding to the same subject based on the collected raw data.
  • the image sensor can transmit the generated first image and second image to the image processor, so that the image processor performs position difference calculation on the first image and the second image to obtain a position difference map.
  • since the technical solution of this embodiment differs from the position difference map generation method described above only in the execution subject, this embodiment does not describe the identical operations such as disparity calculation, super-pixel division and disparity value completion in detail again; for the relevant content, please refer to the introduction above.
  • the present disclosure can be applied to both multi-camera scenes and motion scenes. Therefore, when this embodiment is applied to different scenarios, there are certain differences in how this step is executed. Specifically:
  • the electronic device may be equipped with a first camera and a second camera, where the first camera includes a first image sensor and the second camera includes a second image sensor.
  • the actual execution process of this step may be: image acquisition is performed through the first image sensor and the second image sensor respectively, so that the first image sensor and the second image sensor generate, based on the collected raw data, the first image and the second image corresponding to the same subject.
  • the first image sensor and the second image sensor can then transmit the generated first image and second image to the image processor, so that the image processor performs disparity calculation on the first image and the second image to obtain a disparity map.
  • at this time, the disparity value in the disparity map represents a difference in depth information; that is, the position information mentioned above refers to depth information, usually the "distance from the camera to the corresponding object in the image" (mostly the distance along the camera's optical axis).
  • the disparity map is ultimately used for image fusion of the first image and the second image, which essentially improves the quality of the final imaging by improving the accuracy of image fusion.
  • the electronic device can call the assembled image sensor to collect images twice at different times to obtain the first image and the second image.
  • for example, the continuous shooting mode can be turned on to continuously shoot the subject, or the video mode can be used for video shooting.
  • the subject and the electronic device may move relative to each other, so that the position information of the subject in the two images changes.
  • the image sensor can transmit the first image and the second image to the image processor, so that the image processor performs optical flow calculation on the first image and the second image to obtain an optical flow map.
  • the optical flow value in the optical flow map represents the position difference at different times; that is, the difference in position information mentioned above is a displacement.
  • the optical flow map is usually used to adjust a certain image. For example, based on the optical flow map and the image captured first, the image captured later is adjusted to eliminate afterimages, etc., thereby improving the quality of the final imaging.
  • this example is only illustrative.
  • the specific method of adjusting the image based on the optical flow map can be determined by those skilled in the art according to actual needs, and this embodiment does not limit this.
  • Step 2 Perform disparity calculation on the first image and the second image to obtain an initial disparity map.
  • the initial disparity map contains at least one hole pixel with unknown disparity value.
  • Step 3: Perform super-pixel division on the initial disparity map; each super pixel obtained by the division contains multiple pixels, and the disparity value of each hole pixel is completed based on the disparity values of the other pixels in the super pixel where it is located.
  • the image processor in this embodiment can be mounted on different chips according to the actual situation.
  • for example, it can be mounted on an ISP (Image Signal Processing) chip or an SoC (System on Chip).
  • the specific chip to be mounted on can be determined by those skilled in the art according to actual needs, and this disclosure does not limit this.
  • Figure 2 is a flowchart of a method for generating a disparity map according to an exemplary embodiment of the present disclosure. As shown in Figure 2, this method is applied to a smartphone equipped with at least two cameras and may include the following steps:
  • Step 201: Capture images of the subject with the main camera and the secondary camera.
  • the smartphone can be equipped with a main camera with a better imaging effect and a secondary camera with a relatively poorer imaging effect. Then, when the user photographs the subject in dual-camera mode, the smartphone can simultaneously call the main camera and the secondary camera to obtain a main image and a secondary image.
  • Step 202: Use the SGM algorithm to perform disparity calculation on the captured main image and secondary image.
  • disparity calculation can be performed on the two based on the preset SGM algorithm to obtain an initial disparity map.
  • the initial disparity map obtained through disparity calculation can be shown in Figure 3, in which the disparity values of most pixels are known, but the disparity values of some pixels are still unknown.
  • the initial disparity map is generally generated with the main image as the reference.
  • the disparity value of any pixel in the generated initial disparity map represents the distance between that pixel at a given position in the main image and the pixel in the secondary image whose content matches it.
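As a toy illustration of this definition (a hypothetical helper, not the patent's matching procedure), disparity can be read as the horizontal offset between a main-image pixel and the most similar secondary-image pixel on the same row:

```python
import numpy as np

# Hypothetical helper, not the patent's matching algorithm: for each
# pixel in a main-image scanline, find the secondary-image pixel on the
# same row with the most similar value, and take the horizontal offset
# between the two positions as the disparity.
def scanline_disparity(main_row, secondary_row):
    main_row = np.asarray(main_row)
    secondary_row = np.asarray(secondary_row)
    disparities = []
    for x, value in enumerate(main_row):
        x_match = int(np.argmin(np.abs(secondary_row - value)))
        disparities.append(abs(x - x_match))
    return disparities

# values 10 and 20 appear shifted right by two pixels in the secondary row
disp = scanline_disparity([10, 20, 30, 40], [0, 0, 10, 20])
```

A real implementation (such as SGM) matches on local patches and regularizes across scanlines rather than comparing single values.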
  • Step 203 Generate a histogram of the obtained initial disparity map.
  • disparity calculation actually calculates the distance between pixels with consistent content in the two images.
  • the content matching between pixels directly determines the accuracy of disparity calculation.
  • mismatching of pixels will inevitably occur in the actual matching process, resulting in inaccurate disparity values of corresponding pixels.
  • pixels containing inaccurate disparity values due to mismatching may be identified, and the identified pixels may be converted into hole pixels.
  • the disparity value of such a pixel can then be completed in the subsequent hole pixel completion operation. It is not difficult to see that this process is equivalent to correcting pixels whose disparity values are inaccurate due to mismatching.
  • the histogram of the initial disparity map can be obtained to determine pixels with a disparity value higher than a preset value as pixels with inaccurate disparity values, and to convert those pixels into hole pixels.
  • the histogram obtained from the initial disparity map shown in Figure 3 can be as shown in Figure 4; that is, the number of pixels at each disparity value can be counted. Assuming the preset upper limit of the disparity value is 8, pixels with a disparity value exceeding 8 can be converted into hole pixels, that is, the initial disparity map shown in Figure 3 is converted into the disparity map shown in Figure 5.
  • pixels with inaccurate disparity values may also be determined without relying on the preset value. For example, after counting the number of pixels with disparity values of each value, the value with the smallest number of pixels can be determined, and the pixels with the disparity value of this value can be converted into hole pixels. How to specifically determine pixels with inaccurate disparity values and convert them into hole pixels can be determined by those skilled in the art according to actual conditions, and this embodiment does not limit this.
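Steps 203-204 can be sketched as follows, a minimal NumPy example assuming a `-1` sentinel marks hole pixels (the function name and sentinel are illustrative, not from the patent):

```python
import numpy as np

HOLE = -1  # illustrative sentinel for an unknown disparity value

def holes_from_histogram(disparity, max_disparity=8):
    """Sketch of steps 203-204: count pixels at each disparity value,
    then convert pixels whose disparity exceeds the preset upper limit
    into hole pixels."""
    # Histogram of the known disparity values (holes excluded).
    values, counts = np.unique(disparity[disparity != HOLE],
                               return_counts=True)
    histogram = dict(zip(values.tolist(), counts.tolist()))
    out = disparity.copy()
    out[disparity > max_disparity] = HOLE  # exceeds preset value -> hole
    return out, histogram

initial = np.array([[1, 2, 9],
                    [3, 12, 2],
                    [2, 2, 1]])
filtered, histogram = holes_from_histogram(initial, max_disparity=8)
```

The alternative mentioned above (converting the value with the smallest pixel count) would simply pick `min(histogram, key=histogram.get)` instead of comparing against a preset limit.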
  • Step 204 Convert pixels in the initial disparity map whose disparity value exceeds a preset value into hole pixels based on the histogram.
  • Step 205 Perform super-pixel division on the initial disparity map according to a preset size.
  • the initial disparity map can be divided into super pixels according to a preset size. It should be noted that since the initial disparity map has the same size as the main image, and pixels at the same position correspond to the same picture content, super-pixel division of the initial disparity map is equivalent to super-pixel division of the main image; the two differ only in expression, and the actual meaning is the same.
  • the initial disparity map can be divided into super pixels with a size of 3×3, obtaining several super pixels each containing 9 pixels as shown in Figure 6, such as super pixels A, B, and C.
  • superpixel A contains hole pixel point a
  • superpixel B contains hole pixel points b1 and b2
  • superpixel C contains hole pixel points c1 and c2.
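The division in step 205 can be sketched as a simple block labeling, assuming rectangular tiles of the preset size (the labeling scheme here is a hypothetical choice; any assignment of pixels to fixed-size blocks would do):

```python
import numpy as np

def divide_superpixels(height, width, size=3):
    """Sketch of step 205: assign every pixel an initial superpixel
    label by tiling the map into size-by-size blocks (the labeling
    scheme is a hypothetical choice)."""
    rows, cols = np.indices((height, width))
    blocks_per_row = (width + size - 1) // size
    return (rows // size) * blocks_per_row + (cols // size)

labels = divide_superpixels(6, 6, size=3)  # four 3x3 superpixels
```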
  • Step 206 Obtain the brightness map of the main image.
  • Step 207 Adjust the pixels contained in the superpixel based on the brightness map.
  • the pixels contained in the super pixels can be further adjusted based on the brightness map.
  • the adjustment criterion is that, as pointed out above, the pixels in each adjusted super pixel belong to the same flat area or the same edge area. In addition, the number of pixels contained in each super pixel can be further restricted to be the same before and after adjustment.
  • the superpixels obtained by the division shown in Figure 6 are adjusted based on the brightness map shown in Figure 7 .
  • the superpixels obtained by adjusting superpixels A, B, and C respectively are superpixels A’, B’, and C’ as shown in Figure 8.
  • the pixels included in superpixel A' belong to the flat area on the left side of the image; the pixels included in superpixel B' belong to the edge area circled in Figure 7; and the pixels included in superpixel C' belong to the flat area in the upper right corner of the image.
  • Step 208 Complete the hole pixels in the superpixel based on the adjusted disparity value of each pixel in the superpixel.
  • hole pixels b1, b2, c1, and c2 can be filled in a similar way, until all hole pixels in the image are filled and a more accurate disparity map of the main image relative to the secondary image is obtained. It should be understood that the above examples only use superpixels A, B, and C to introduce the technical solution in this specification; the operations on other pixels are similar and will not be described again here.
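The completion in step 208 can be sketched with the median rule described later for the dividing unit, again assuming a `-1` sentinel for holes:

```python
import numpy as np

HOLE = -1  # illustrative sentinel for an unknown disparity value

def complete_holes(disparity, labels):
    """Sketch of step 208: fill each hole pixel with the median of the
    known disparity values of the other pixels in its superpixel."""
    out = disparity.astype(float)  # float output, also makes a copy
    for label in np.unique(labels):
        mask = labels == label
        known = disparity[mask & (disparity != HOLE)]
        if known.size:  # leave the hole if the whole superpixel is unknown
            out[mask & (disparity == HOLE)] = np.median(known)
    return out

disparity = np.array([[4, 4, HOLE],
                      [4, 6, 6]])
labels = np.zeros_like(disparity)  # one superpixel covering all pixels
filled = complete_holes(disparity, labels)
```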
  • the brightness map can also be divided into super pixels, and the divided super pixels in the brightness map can be adjusted.
  • the order of steps 205, 206, 207, and 208 can be adjusted.
  • the brightness map can be obtained first, the brightness map can then be divided into super pixels based on the preset size, and the divided super pixels can be adjusted based on the brightness value of each pixel in the brightness map. On this basis, the operation of completing the hole pixels based on the disparity values within each super pixel is performed.
  • the smartphone in this embodiment can perform disparity calculation on the main image and the secondary image to obtain an initial disparity map between the two, and perform super-pixel division on the initial disparity map.
  • furthermore, the divided super pixels can be adjusted based on the brightness values of the main image, so that each pixel in a super pixel belongs to the same flat area or edge area.
  • the hole pixels contained in each super pixel are then completed based on the disparity values of the pixels in that super pixel, which avoids the problem in related technologies that images fused through disparity-map-based image processing are of poor quality due to the presence of hole pixels in the disparity map.
  • FIG. 9 is a block diagram of a position difference map generating device according to an exemplary embodiment of the present disclosure.
  • the device includes a computing unit 901 and a dividing unit 902 .
  • the calculation unit 901 performs position difference calculation on the first image and the second image to obtain an initial position difference map;
  • the initial position difference map contains at least one hole pixel with an unknown position difference value;
  • the dividing unit 902 performs super-pixel division on the initial position difference map.
  • Each super pixel obtained by the division includes multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where that hole pixel is located.
  • the position difference map is a disparity map, and the first image and the second image are captured by different cameras for the same subject; or,
  • the position difference map is an optical flow map, and the first image and the second image are obtained by shooting the same subject at different times with the same camera.
  • the dividing unit 902 is further used for:
  • the initial position difference image is divided into super pixels according to a preset size, so that each divided super pixel contains a preset number of pixel points.
  • the dividing unit 902 is further used for:
  • the position difference values of the other pixels in the super pixel where any hole pixel is located are obtained, and the median of the obtained position difference values is taken as the position difference value of that hole pixel.
  • Figure 10 is a block diagram of another position difference map generating device according to an exemplary embodiment of the present disclosure. Based on the aforementioned embodiment shown in Figure 9, this embodiment further includes: a determining unit 903, an adjustment unit 904, and a filtering unit 905.
  • Optionally, the device further includes:
  • Determining unit 903 determines the distribution of pixel values in the first image.
  • the distribution of pixel values is used to characterize the flat areas and edge areas in the first image, where the pixel difference between a pixel in a flat area and its neighboring pixels is not greater than the preset value, and the pixel difference between a pixel in an edge area and its neighboring pixels is greater than the preset value;
  • the adjustment unit 904 adjusts the pixel points included in each super pixel based on the pixel value distribution, so that the adjusted pixel points in the same super pixel belong to the same flat area or the same edge area.
  • the determining unit 903 is further configured to: obtain the brightness map of the first image;
  • the adjustment unit 904 is further configured to: adjust the pixels contained in each divided super pixel based on the brightness value of each pixel in the brightness map, so that after adjustment the brightness difference between each pixel in the same super pixel and its neighboring pixels does not exceed the preset brightness value.
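The adjustment criterion can be checked, for instance, with a sketch like the following, assuming 4-neighbour comparisons and an illustrative threshold (the function name and threshold are not from the patent):

```python
import numpy as np

def superpixel_is_uniform(brightness, labels, label, max_diff=10):
    """Sketch of the adjustment criterion: within one superpixel, the
    brightness difference between every pixel and each of its
    4-neighbours inside the same superpixel must not exceed max_diff
    (the threshold is illustrative)."""
    mask = labels == label
    height, width = brightness.shape
    for y, x in zip(*np.nonzero(mask)):
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < height and 0 <= nx < width and mask[ny, nx]:
                if abs(int(brightness[y, x]) - int(brightness[ny, nx])) > max_diff:
                    return False
    return True

brightness = np.array([[10, 12, 200],
                       [11, 13, 210]])
good_labels = np.array([[0, 0, 1],
                        [0, 0, 1]])      # flat area and edge area separated
bad_labels = np.zeros_like(good_labels)  # one superpixel mixing both areas

uniform_ok = superpixel_is_uniform(brightness, good_labels, 0)
uniform_bad = superpixel_is_uniform(brightness, bad_labels, 0)
```

An adjustment procedure would then move boundary pixels between superpixels until every superpixel passes this check.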
  • the adjustment unit 904 is also used for:
  • the position difference values of pixels whose position difference values are not within the preset disparity range are adjusted to unknown, so that those pixels are converted into hole pixels.
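A minimal sketch of this filtering, assuming `-1` marks an unknown (hole) value:

```python
import numpy as np

HOLE = -1  # illustrative sentinel for an unknown position difference value

def filter_to_range(position_diff, low=0, high=8):
    """Sketch of the filtering above: values outside the preset range
    are reset to unknown, converting those pixels into hole pixels."""
    out = position_diff.copy()
    out[(position_diff < low) | (position_diff > high)] = HOLE
    return out

values = np.array([3, -2, 7, 11])
filtered = filter_to_range(values)
```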
  • the dividing unit 902 is also used to: determine the pixel value continuity of each pixel based on the filtered first image, and perform a secondary completion of position difference values on the completed position difference map based on the determined pixel value continuity.
  • since the device embodiment basically corresponds to the method embodiment, please refer to the corresponding description of the method embodiment for relevant details.
  • the device embodiments described above are only illustrative.
  • the units described as separate components may or may not be physically separated.
  • the components shown as units may or may not be physical units, that is, they may be located in one location or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. Persons of ordinary skill in the art can understand and implement the method without creative effort.
  • the present disclosure also provides a device for generating a position difference map, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the position difference map generation method described in any of the above embodiments.
  • the method may include: performing position difference calculation on the first image and the second image to obtain an initial position difference map, where the initial position difference map contains at least one hole pixel with an unknown position difference value; and performing super-pixel division on the initial position difference map, where each super pixel obtained by the division contains multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where that hole pixel is located.
  • the present disclosure also provides an electronic device.
  • the electronic device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors;
  • the one or more programs include instructions for implementing the position difference map generation method described in any of the above embodiments.
  • the method may include: performing position difference calculation on the first image and the second image to obtain an initial position difference map; the initial position difference map contains at least one hole pixel with an unknown position difference value; and performing super-pixel division on the initial position difference map, where each super pixel obtained by the division contains multiple pixels, and the position difference value of each hole pixel is completed based on the position difference values of the other pixels in the super pixel where that hole pixel is located.
  • the present disclosure also provides a chip, which includes one or more interface circuits and one or more processors; the interface circuit is used to receive signals from the memory of the electronic device and send signals to the processor.
  • the signal includes a computer instruction stored in a memory; when the processor executes the computer instruction, the electronic device is caused to execute any of the position difference map generating methods described above.
  • FIG. 11 is a block diagram of a device 1100 for implementing a position difference map generation method according to an exemplary embodiment.
  • the device 1100 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power supply component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and communications component 1116.
  • Processing component 1102 generally controls the overall operations of device 1100, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 1102 may include one or more processors 1120 to execute instructions to complete all or part of the steps of the above method.
  • processing component 1102 may include one or more modules that facilitate interaction between processing component 1102 and other components.
  • processing component 1102 may include a multimedia module to facilitate interaction between multimedia component 1108 and processing component 1102.
  • Memory 1104 is configured to store various types of data to support operations at device 1100 . Examples of such data include instructions for any application or method operating on device 1100, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 1104 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply component 1106 provides power to various components of device 1100 .
  • Power supply components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 1100 .
  • Multimedia component 1108 includes a screen that provides an output interface between the device 1100 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 1108 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 1110 is configured to output and/or input audio signals.
  • audio component 1110 includes a microphone (MIC) configured to receive external audio signals when device 1100 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 1104 or sent via communications component 1116 .
  • audio component 1110 also includes a speaker for outputting audio signals.
  • the I/O interface 1112 provides an interface between the processing component 1102 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, buttons, etc. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
  • Sensor component 1114 includes one or more sensors for providing various aspects of status assessment for device 1100 .
  • the sensor component 1114 can detect the open/closed state of the device 1100 and the relative positioning of components, such as the display and keypad of the device 1100; the sensor component 1114 can also detect a change in position of the device 1100 or a component of the device 1100, the presence or absence of user contact with the device 1100, the orientation or acceleration/deceleration of the device 1100, and temperature changes of the device 1100.
  • Sensor assembly 1114 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communications component 1116 is configured to facilitate wired or wireless communications between device 1100 and other devices.
  • the device 1100 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR (New Radio), or a combination thereof.
  • the communication component 1116 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 1116 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 1100 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
  • a non-transitory computer-readable storage medium including instructions, such as the memory 1104 including instructions, which are executable by the processor 1120 of the device 1100 to complete the above method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a position difference map generation method and apparatus, an electronic device, a chip, and a medium. The method includes: performing position difference calculation on a first image and a second image to obtain an initial position difference map, the initial position difference map containing at least one hole pixel having an unknown position difference value; and performing superpixel division on the initial position difference map, each superpixel obtained by the division containing a plurality of pixels, and completing the position difference value of the corresponding hole pixel based on the position difference values of the other pixels in the superpixel where each hole pixel is located.
PCT/CN2022/094569 2022-05-23 2022-05-23 Procédé et appareil de génération de graphe de différence de position, dispositif électronique, puce et support WO2023225825A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/094569 WO2023225825A1 (fr) 2022-05-23 2022-05-23 Procédé et appareil de génération de graphe de différence de position, dispositif électronique, puce et support
CN202280004634.7A CN116438568A (zh) 2022-05-23 2022-05-23 位置差异图生成方法及装置、电子设备、芯片及介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/094569 WO2023225825A1 (fr) 2022-05-23 2022-05-23 Procédé et appareil de génération de graphe de différence de position, dispositif électronique, puce et support

Publications (1)

Publication Number Publication Date
WO2023225825A1 true WO2023225825A1 (fr) 2023-11-30

Family

ID=87106585

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094569 WO2023225825A1 (fr) 2022-05-23 2022-05-23 Procédé et appareil de génération de graphe de différence de position, dispositif électronique, puce et support

Country Status (2)

Country Link
CN (1) CN116438568A (fr)
WO (1) WO2023225825A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120313932A1 (en) * 2011-06-10 2012-12-13 Samsung Electronics Co., Ltd. Image processing method and apparatus
US20140002605A1 (en) * 2012-06-27 2014-01-02 Imec Taiwan Co. Imaging system and method
CN109584166A (zh) * 2017-09-29 2019-04-05 株式会社理光 视差图稠密化方法、装置和计算机可读存储介质
CN110033426A (zh) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 一种用于对视差估计图像进行处理的装置
CN110660088A (zh) * 2018-06-30 2020-01-07 华为技术有限公司 一种图像处理的方法和设备
CN111432194A (zh) * 2020-03-11 2020-07-17 北京迈格威科技有限公司 视差图空洞填充方法、装置及电子设备及存储介质
CN112347882A (zh) * 2020-10-27 2021-02-09 中德(珠海)人工智能研究院有限公司 一种智能分拣控制方法和智能分拣控制系统

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150053438A (ko) * 2013-11-08 2015-05-18 한국전자통신연구원 스테레오 매칭 시스템과 이를 이용한 시차 맵 생성 방법
CN110533701A (zh) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 一种图像视差确定方法、装置及设备
CN109146814B (zh) * 2018-08-20 2021-02-23 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN109640066B (zh) * 2018-12-12 2020-05-22 深圳先进技术研究院 高精度稠密深度图像的生成方法和装置
CN109961507B (zh) * 2019-03-22 2020-12-18 腾讯科技(深圳)有限公司 一种人脸图像生成方法、装置、设备及存储介质
US20210004962A1 (en) * 2019-07-02 2021-01-07 Qualcomm Incorporated Generating effects on images using disparity guided salient object detection
CN111127355A (zh) * 2019-12-17 2020-05-08 上海工程技术大学 一种对缺损光流图进行精细补全的方法及其应用
CN112884682B (zh) * 2021-01-08 2023-02-21 福州大学 一种基于匹配与融合的立体图像颜色校正方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120313932A1 (en) * 2011-06-10 2012-12-13 Samsung Electronics Co., Ltd. Image processing method and apparatus
US20140002605A1 (en) * 2012-06-27 2014-01-02 Imec Taiwan Co. Imaging system and method
CN109584166A (zh) * 2017-09-29 2019-04-05 株式会社理光 视差图稠密化方法、装置和计算机可读存储介质
CN110033426A (zh) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 一种用于对视差估计图像进行处理的装置
CN110660088A (zh) * 2018-06-30 2020-01-07 华为技术有限公司 一种图像处理的方法和设备
CN111432194A (zh) * 2020-03-11 2020-07-17 北京迈格威科技有限公司 视差图空洞填充方法、装置及电子设备及存储介质
CN112347882A (zh) * 2020-10-27 2021-02-09 中德(珠海)人工智能研究院有限公司 一种智能分拣控制方法和智能分拣控制系统

Also Published As

Publication number Publication date
CN116438568A (zh) 2023-07-14

Similar Documents

Publication Publication Date Title
KR102310430B1 (ko) 촬영 방법, 장치 및 디바이스
CN109671106B (zh) 一种图像处理方法、装置与设备
US9973672B2 (en) Photographing for dual-lens device using photographing environment determined using depth estimation
US10810720B2 (en) Optical imaging method and apparatus
WO2019183813A1 (fr) Dispositif et procédé de capture d'image
US11532076B2 (en) Image processing method, electronic device and storage medium
KR101916355B1 (ko) 듀얼-렌즈 장치의 촬영 방법, 및 듀얼-렌즈 장치
TWI808987B (zh) 將相機與陀螺儀融合在一起的五維視頻穩定化裝置及方法
WO2016011747A1 (fr) Procédé et dispositif d'ajustement de carnation
EP3544286B1 (fr) Procédé de focalisation, dispositif et support de stockage
CN110958401A (zh) 一种超级夜景图像颜色校正方法、装置和电子设备
US10187566B2 (en) Method and device for generating images
WO2016029465A1 (fr) Procédé et appareil de traitement d'image et dispositif électronique
CN112911165A (zh) 内窥镜曝光方法、装置及计算机可读存储介质
US10009545B2 (en) Image processing apparatus and method of operating the same
CN105210362B (zh) 图像调整设备、图像调整方法和图像捕获设备
WO2018219274A1 (fr) Procédé et appareil de traitement de débruitage, support d'informations et terminal
WO2023225825A1 (fr) Procédé et appareil de génération de graphe de différence de position, dispositif électronique, puce et support
CN114143471B (zh) 图像处理方法、系统、移动终端及计算机可读存储介质
CN111726531B (zh) 图像拍摄方法、处理方法、装置、电子设备及存储介质
WO2019134513A1 (fr) Procédé de mise au point de cliché, dispositif, support d'informations, et dispositif électronique
CN114339022A (zh) 摄像头拍摄参数确定方法、神经网络模型的训练方法
KR102458470B1 (ko) 이미지 처리 방법 및 장치, 카메라 컴포넌트, 전자 기기, 저장 매체
EP4304188A1 (fr) Procédé et appareil de photographie, support et puce
KR102494696B1 (ko) 영상을 생성하는 방법 및 디바이스.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22943031

Country of ref document: EP

Kind code of ref document: A1