WO2019047284A1 - Feature extraction and panorama stitching methods, associated apparatus, device, and readable storage medium - Google Patents

Feature extraction and panorama stitching methods, associated apparatus, device, and readable storage medium

Info

Publication number
WO2019047284A1
WO2019047284A1 (PCT/CN2017/102871, CN2017102871W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
rotation
feature
matrix
calculating
Prior art date
Application number
PCT/CN2017/102871
Other languages
English (en)
Chinese (zh)
Inventor
王健宗
王义文
刘奡智
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019047284A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • the present application relates to the field of information processing technologies, and in particular to a feature extraction method, a panorama stitching method, an associated apparatus, a device, and a computer readable storage medium.
  • in many commercial display projects, the application of panoramic virtual reality technology has become a trend and a source of new ideas. In panorama stitching, the precision and speed of stitching are both very important; some existing panorama stitching algorithms lack sufficient precision and speed, resulting in poor practicability.
  • the embodiment of the present application provides a feature extraction method, a panorama stitching method, an associated apparatus, a device, and a computer readable storage medium. The feature extraction method is used in the panorama stitching method, which improves the speed and practicality of panorama stitching while ensuring accuracy.
  • the embodiment of the present application provides the following method:
  • a feature extraction method comprising:
  • a panoramic stitching method comprising:
  • an embodiment of the present application provides an apparatus, including a unit for performing the feature extraction method described in the first aspect above, or a unit for performing the panorama stitching method described in the first aspect above.
  • an embodiment of the present application further provides an apparatus, where the device includes a memory, and a processor connected to the memory;
  • the memory is configured to store program data for implementing feature extraction, and the processor is configured to execute program data stored in the memory to perform the feature extraction method described in the first aspect;
  • the memory is configured to store program data for implementing panorama stitching, and the processor is configured to execute the program data stored in the memory to perform the panorama stitching method described in the first aspect above.
  • an embodiment of the present application provides a computer readable storage medium storing one or more programs, which can be executed by one or more processors to implement the feature extraction method or the panorama stitching method described in the first aspect above.
  • an input image that needs to be stitched is received; feature points are calculated from the images and feature point matching is performed; a transformation matrix is calculated according to the feature points on the images; the images are transformed using the transformation matrix to obtain the transformed images; the transformed images are projected to complete the stitching; and the stitched images are fused to obtain the merged panoramic image.
  • a circular area centered on the position of each key point is acquired in the scale space where that key point is located; the circular area is divided into N small sector areas; the cumulative gradient values in the M directions are calculated for each sector area, where M is the number of directions in the direction parameter; and the feature vector of the key point is determined according to the N*M calculated cumulative gradient values.
  • the embodiment of the present application works in the circular region around the feature point rather than in a square region.
  • because the circular region does not need to be rotated to the main direction of the feature point, rotation invariance is satisfied.
  • the feature vector of the key point is determined according to the N*M calculated cumulative gradient values, which reduces the amount of calculation and improves the calculation efficiency.
  • the feature points and the feature vectors of the feature points are calculated and then matched according to the feature points and their feature vectors, which improves the matching speed and thus the speed of panorama stitching.
  • FIG. 1 is a schematic flowchart of a feature extraction method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a sub-flow of a feature extraction method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a calculated feature vector provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a panorama stitching method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a sub-flow of a panorama stitching method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a sub-flow of a panorama stitching method according to another embodiment of the present application.
  • FIG. 7 is a schematic diagram of a sub-flow of a panorama stitching method according to another embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a panorama stitching method according to another embodiment of the present application.
  • FIG. 9 is a schematic block diagram of a feature extraction apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a feature vector determining unit provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a panorama stitching apparatus provided by an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of a transformation matrix calculation unit provided by an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a transformation matrix calculation unit according to another embodiment of the present application.
  • FIG. 14 is a schematic block diagram of a transformation matrix calculation unit according to another embodiment of the present application.
  • FIG. 15 is a schematic block diagram of a panorama stitching apparatus according to another embodiment of the present application.
  • FIG. 16 is a schematic block diagram of a feature extraction device according to an embodiment of the present application.
  • FIG. 17 is a schematic block diagram of a panorama stitching device according to an embodiment of the present application.
  • a first rotation matrix may be referred to as a second rotation matrix, and similarly, a second rotation matrix may be referred to as a first rotation matrix, without departing from the scope of the present application. Both the first rotation matrix and the second rotation matrix are rotation matrices, but they are not the same rotation matrix.
  • FIG. 1 is a schematic flowchart of a feature extraction method according to an embodiment of the present application. The method includes S101-S109.
  • the scale space refers to the representation sequence obtained at multiple scales by continuously varying the scale parameter. Further processing the image in scale space makes it easier to capture the essential features of the image.
  • the scale space satisfies translation invariance, scale invariance, Euclidean invariance, and affine invariance.
  • the scale space of an image is defined as the convolution of a Gaussian function of a varying scale with the original image.
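  • for reference, the standard scale-space formulation used by SIFT-style methods (a well-known formula, stated here for clarity rather than quoted from the patent) is:

        L(x, y, \sigma) = G(x, y, \sigma) * I(x, y), \qquad
        G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/(2\sigma^2)}

    where I is the original image, G is the Gaussian kernel, \sigma is the scale parameter, and * denotes convolution.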
  • the scale space of the image is represented by Gaussian pyramid.
  • the construction of the Gaussian pyramid is divided into two parts: Gaussian blurring of different scales on the image; downsampling of the image.
  • the Gaussian difference image is obtained by subtracting adjacent layers within each group of the Gaussian pyramid. On the Gaussian difference images, the extreme points of the Gaussian difference function are found. Specifically, each pixel is compared with all of its neighbors, which include the adjacent points in the scale space where the pixel is located and the corresponding adjacent points on the scales directly above and below. If the pixel is larger or smaller than all of its neighbors, it is taken as an extreme point.
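  • as an illustration only (the patent provides no code), a minimal, unoptimized Python sketch of building one octave of a difference-of-Gaussians pyramid and selecting 26-neighbor extrema; the octave layout, base sigma, and contrast threshold are assumptions:

        import cv2
        import numpy as np

        def dog_extrema(gray, num_scales=5, sigma=1.6, threshold=0.01):
            """Build one octave of a DoG pyramid and return 26-neighbor extrema."""
            gray = gray.astype(np.float32) / 255.0
            k = 2 ** (1.0 / (num_scales - 3))          # scale step within the octave
            gaussians = [cv2.GaussianBlur(gray, (0, 0), sigma * k ** i)
                         for i in range(num_scales)]
            dogs = np.stack([g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])])
            extrema = []
            for s in range(1, dogs.shape[0] - 1):      # skip first/last DoG layers
                for y in range(1, dogs.shape[1] - 1):
                    for x in range(1, dogs.shape[2] - 1):
                        v = dogs[s, y, x]
                        if abs(v) < threshold:
                            continue                    # reject low-contrast points
                        cube = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                        # an extremum is the max or min of its 26 neighbors
                        # across position and adjacent scales
                        if v == cube.max() or v == cube.min():
                            extrema.append((x, y, s))
            return extrema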
  • each extreme point includes the scale and position of the scale space in which the extreme point is located.
  • S104: Calculate key points according to the extreme points.
  • the extreme points of a discrete space are not necessarily true extreme points. In order to improve stability, it is necessary to find true extreme points from discrete extreme points and eliminate unstable edge extreme points. Specifically, curve fitting is performed on the Gaussian difference function of the scale space.
  • the direction parameter includes M directions, where M is a natural number greater than 1.
  • a histogram is used to collect the gradients and directions of the pixels in the neighborhood. For example, the histogram divides the direction range of 0 to 360 degrees into 36 bins, each of which covers 10 degrees.
  • the M bins with the largest gradient values are selected, and the directions in which those bins lie are taken as the directions of the key point.
  • for example, M is 8.
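  • a hedged sketch of the 36-bin orientation histogram described above, returning the directions of the M largest bins as the key point directions (the neighborhood patch and magnitude weighting are assumptions):

        import numpy as np

        def keypoint_directions(patch, m=8, num_bins=36):
            """Accumulate gradient magnitude into 10-degree orientation bins
            and return the bin directions with the M largest totals."""
            patch = patch.astype(np.float32)
            dy = np.gradient(patch, axis=0)
            dx = np.gradient(patch, axis=1)
            mag = np.hypot(dx, dy)
            ang = np.degrees(np.arctan2(dy, dx)) % 360.0
            bin_w = 360.0 / num_bins
            hist = np.zeros(num_bins)
            np.add.at(hist, (ang // bin_w).astype(int) % num_bins, mag)
            top = np.argsort(hist)[::-1][:m]           # M largest bins
            return top * bin_w                          # their directions, in degrees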
  • S106 Obtain a circular area centered on the position where the key point is located in the scale space where each key point is located.
  • Each key point includes the scale and location of the scale space in which the key point is located. Since the obtained key points may be located in different scale spaces, it is necessary to know the scale at which the key points are located, and in the scale space corresponding to the scale, obtain a circular area centered on the position where the key points are located.
  • the radius of the circular area is related to the resolution of the input image, and the resolution of the image represents the quality of the image. The higher the resolution of the input image, the smaller the radius of the circular area; the lower the resolution of the input image, the larger the radius of the circular area. In this way, it is guaranteed that useful feature information can be extracted.
  • for example, the diameter of the circular area is the length of the diagonal of the 4*4 square window in the key point's scale space used in the original SIFT algorithm (the algorithm proposed by David Lowe).
  • the radius of the circular area may also be some other fixed value or the like.
  • N is a natural number greater than 1. For example, N is 8.
  • S109: Determine the feature vector of the key point according to the N*M calculated cumulative gradient values. Specifically, as shown in FIG. 2, S109 includes S201-S202. S201: Sort the calculated N*M gradient cumulative values in descending order; the gradient cumulative values are sorted for better feature point (key point) matching. S202: Normalize the sorted gradient cumulative values to obtain the feature vector of the key point; the purpose of normalization is to eliminate the effects of illumination. In a specific implementation, the order of sorting and normalization is not limited: the gradient cumulative values may be sorted first and then normalized, or normalized first and then sorted.
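  • S201-S202 amount to a sort followed by a normalization; a minimal sketch is below (L2 normalization is an assumption, since the patent does not name the norm). Because normalization divides by a single positive scalar, it preserves the ordering, which is why the two steps can be applied in either order:

        import numpy as np

        def finalize_descriptor(grad_sums):
            """Sort the N*M cumulative gradient values in descending order,
            then normalize (assumed L2 norm)."""
            vec = np.sort(np.asarray(grad_sums, dtype=np.float32))[::-1]
            norm = np.linalg.norm(vec)
            return vec / norm if norm > 0 else vec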
  • Figure 3 is a schematic diagram of a calculated feature vector.
  • the circular area 30 is divided into eight small sector areas 31, where 32 denotes the diameter of the circular area. The feature 33 corresponding to each sector area is obtained by assigning the gradient values of the pixels in that sector area to 8 directions and counting the cumulative gradients in the 8 directions.
  • the feature vector of the key point consists of the cumulative gradient values in the eight directions of the eight sector areas. It should be noted that the cumulative gradient values are normalized and sorted in descending order.
  • the 4*4 square area around the key point in the original SIFT algorithm is directly replaced by the circular area; the main direction of the key point in the 4*4 area does not need to be determined, and there is no need to rotate the original 4*4 SIFT area, yet rotation invariance is still satisfied.
  • the circular area is divided into 8 blocks, and the cumulative gradient values assigned to the 8 directions in each block are calculated.
  • the feature vector of each key point thus changes from the original 4*4*8 (8 directions) = 128 dimensions to 8*8 (8 directions) = 64 dimensions; the dimension of the feature vector of each key point is reduced by half. The speed of image feature extraction is greatly improved while maintaining accuracy.
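  • putting S106-S109 together, a hedged sketch of the 8-sector by 8-direction (64-dimensional) circular descriptor; the unweighted accumulation and the handling of the radius are assumptions, and cx, cy are integer pixel coordinates of the key point:

        import numpy as np

        def circular_descriptor(img, cx, cy, radius, n_sectors=8, m_dirs=8):
            """64-dim descriptor: split a circle around (cx, cy) into N sectors
            and accumulate gradient magnitude into M direction bins per sector."""
            img = img.astype(np.float32)
            desc = np.zeros((n_sectors, m_dirs))
            r = int(round(radius))
            two_pi = 2 * np.pi
            for y in range(max(1, cy - r), min(img.shape[0] - 1, cy + r + 1)):
                for x in range(max(1, cx - r), min(img.shape[1] - 1, cx + r + 1)):
                    if (x - cx) ** 2 + (y - cy) ** 2 > radius ** 2:
                        continue                        # outside the circular region
                    dx = img[y, x + 1] - img[y, x - 1]  # central differences
                    dy = img[y + 1, x] - img[y - 1, x]
                    mag = np.hypot(dx, dy)
                    sector = int(((np.arctan2(y - cy, x - cx) % two_pi)
                                  / two_pi) * n_sectors) % n_sectors
                    dbin = int(((np.arctan2(dy, dx) % two_pi)
                                / two_pi) * m_dirs) % m_dirs
                    desc[sector, dbin] += mag           # cumulative gradient value
            vec = np.sort(desc.ravel())[::-1]           # descending order (S201)
            nrm = np.linalg.norm(vec)
            return vec / nrm if nrm > 0 else vec        # normalization (S202)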
  • FIG. 4 is a schematic flowchart of a panorama stitching method provided by an embodiment of the present application.
  • the panoramic stitching method includes S401-S406.
  • S401 Receive an input image that needs to be stitched.
  • the images to be stitched that are input to the panorama stitching method, such as images captured by an ordinary SLR camera or phone camera, do not need to be corrected before feature point extraction.
  • depending on actual demand, the images that need to be stitched may be pre-processed, for example to remove interference noise.
  • the feature points and the feature vectors of the feature points are calculated by the embodiment shown in FIG. 1 to FIG. 2, and details are not described herein again.
  • the feature points are matched according to the calculated feature points and the feature vectors of the feature points, so that pairs of images that match each other can be found. It can be understood that two mutually matching images share the same feature points. Since the embodiment shown in FIG. 1 to FIG. 2 can greatly improve the speed of image feature extraction while maintaining accuracy, calculating the feature points and feature vectors of each image with that embodiment and then matching feature points between images can greatly improve the speed of image matching.
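  • a hedged sketch of matching the 64-dimensional vectors between two images; the nearest-neighbor ratio test is a common convention assumed here, not something the patent specifies:

        import numpy as np

        def match_features(desc_a, desc_b, ratio=0.75):
            """Return index pairs (i, j) where desc_b[j] is the nearest neighbor
            of desc_a[i] and passes the ratio test. desc_* are (N, 64) arrays."""
            d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
            matches = []
            for i in range(d.shape[0]):
                order = np.argsort(d[i])
                if len(order) > 1 and d[i, order[0]] < ratio * d[i, order[1]]:
                    matches.append((i, int(order[0])))
            return matches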
  • S403 includes S501-S503.
  • S501: Calculate a first rotation matrix by the least squares method according to the feature points on the images, where the first rotation matrix includes a rotation parameter. Specifically, the matched feature points of the mutually matching images are input, and the rotation matrix between the two images of each matching pair is calculated by the least squares method. Then the rotation matrices between all pairs of mutually matching images are adjusted to the same standard, and the adjusted rotation matrices of the same standard are called the first rotation matrix.
  • the rotation matrix is a representation of camera parameters between images.
  • adjusting to the same standard can be understood as follows: for example, for the scale parameter, if the scale of the rotation matrix of the first pair of mutually matching images is 1 and the scale of the rotation matrix of the second pair is 1.5, then the scale of the second pair's rotation matrix can be adjusted to 1, taking the first pair as the reference; or the scale of the first pair can be changed to 1.5, taking the second pair as the reference; or both scales can be adjusted to 3. The same applies to other parameters such as the rotation and translation parameters. S502: Calculate a homography matrix according to the feature points on the images.
  • the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters. Specifically, the feature points on the mutually matching images are input, and the homography matrix is calculated by the principle of epipolar geometry; for the idea of the specific calculation, refer to the description of the first rotation matrix calculation. S503: Replace the rotation parameter in the homography matrix with the calculated rotation parameter in the first rotation matrix to obtain the transformation matrix. By replacing the rotation parameters in the homography matrix with the rotation parameters in the first rotation matrix calculated by the least squares method, the obtained transformation matrix is more accurate.
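  • as a sketch of the idea in S501-S503 (not the patent's exact procedure): estimate a rotation by least squares from matched points, estimate a homography, then overwrite the homography's rotation component. Here pts_a and pts_b are Nx2 float arrays of matched coordinates, and the similarity-style decomposition of the homography's upper-left block is an assumption:

        import cv2
        import numpy as np

        def ls_rotation_angle(pts_a, pts_b):
            """Least-squares (Kabsch-style) 2D rotation aligning pts_a to pts_b
            after removing centroids."""
            a = pts_a - pts_a.mean(axis=0)
            b = pts_b - pts_b.mean(axis=0)
            u, _, vt = np.linalg.svd(a.T @ b)
            r = vt.T @ u.T
            if np.linalg.det(r) < 0:                    # keep a proper rotation
                vt[-1] *= -1
                r = vt.T @ u.T
            return np.arctan2(r[1, 0], r[0, 0])

        def transform_matrix(pts_a, pts_b):
            """Homography whose rotation component is replaced by the
            least-squares rotation (sketch of S501-S503)."""
            hmat, _ = cv2.findHomography(pts_a, pts_b, 0)   # plain least squares
            # approximate the upper-left 2x2 block as rotation * scale
            scale = np.sqrt(abs(np.linalg.det(hmat[:2, :2])))
            theta = ls_rotation_angle(pts_a, pts_b)
            c, s = np.cos(theta), np.sin(theta)
            hmat[:2, :2] = scale * np.array([[c, -s], [s, c]])
            return hmat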
  • the fusion specifically includes equalization of image brightness and color, so that the stitched panoramic image looks more natural.
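  • the patent leaves the fusion method open beyond brightness and color equalization; below is a minimal sketch of gain compensation plus distance-based feathering in the overlap, both assumed, commonly used choices (single-channel images for brevity):

        import cv2
        import numpy as np

        def feather_blend(img_a, img_b, mask_a, mask_b):
            """Blend two aligned single-channel images: equalize mean brightness
            in the overlap, then feather with distance-based weights."""
            img_a = img_a.astype(np.float32)
            img_b = img_b.astype(np.float32)
            overlap = mask_a & mask_b
            if overlap.any():
                gain = img_a[overlap].mean() / max(img_b[overlap].mean(), 1e-6)
                img_b *= gain                           # simple brightness equalization
            wa = cv2.distanceTransform(mask_a.astype(np.uint8), cv2.DIST_L2, 3)
            wb = cv2.distanceTransform(mask_b.astype(np.uint8), cv2.DIST_L2, 3)
            w = wa / np.maximum(wa + wb, 1e-6)          # weight falls off near seams
            out = img_a * w + img_b * (1.0 - w)
            return np.clip(out, 0, 255).astype(np.uint8)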
  • the feature points of the images to be stitched are calculated and matched, the images are transformed by the transformation matrix to obtain the transformed images, the transformed images are projected to complete the stitching, and the stitched images are fused to obtain the merged panoramic image.
  • the method embodiment described above calculates the feature points of the images and performs matching by the method of the embodiment shown in FIG. 1 to FIG. 2, which increases the image matching speed and improves the efficiency of panorama stitching.
  • the rotation parameters in the homography matrix are replaced by the rotation parameters in the first rotation matrix calculated by the least squares method, so the obtained transformation matrix is more accurate, which improves the precision of the panorama stitching.
  • the transformation matrix is calculated according to the feature points on the image, that is, step S403, including S601-S603.
  • S601: Calculate a second rotation matrix using the random sample consensus (RANSAC) algorithm according to the feature points on the images, where the second rotation matrix includes a rotation parameter.
  • the RANSAC algorithm takes the matched feature points of each pair of mutually matching images as input and finds the optimal second rotation matrix, that is, the model with trusted parameters that is satisfied by the largest number of feature points.
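  • a generic RANSAC loop of the kind S601 describes, fitting a 2D rotation angle from random minimal samples and keeping the model satisfied by the most matches; the sample size, inlier threshold, and iteration count are assumptions:

        import numpy as np

        def ransac_rotation(pts_a, pts_b, iters=500, thresh=3.0, sample=2):
            """Return the rotation angle (and inlier count) satisfied by the
            largest number of matched points."""
            rng = np.random.default_rng(0)
            a = pts_a - pts_a.mean(axis=0)
            b = pts_b - pts_b.mean(axis=0)
            best_theta, best_inliers = 0.0, -1
            for _ in range(iters):
                idx = rng.choice(len(a), size=sample, replace=False)
                ang_a = np.arctan2(a[idx, 1], a[idx, 0])
                ang_b = np.arctan2(b[idx, 1], b[idx, 0])
                # wrapped mean of per-point angle differences
                theta = np.mean((ang_b - ang_a + np.pi) % (2 * np.pi) - np.pi)
                c, s = np.cos(theta), np.sin(theta)
                rot = np.array([[c, -s], [s, c]])
                resid = np.linalg.norm(a @ rot.T - b, axis=1)
                inliers = int((resid < thresh).sum())
                if inliers > best_inliers:
                    best_theta, best_inliers = theta, inliers
            return best_theta, best_inliers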
  • S602. Calculate a homography matrix according to feature points on the image.
  • the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters.
  • the homography matrix is calculated by the principle of epipolar geometry.
  • S603. Replace the rotation parameter in the homography matrix by using the calculated rotation parameter in the second rotation matrix to obtain a transformation matrix.
  • the transformation matrix is calculated according to the feature points on the image, that is, step S403, including S701-S704.
  • S701. Calculate a first rotation matrix by using a least square method according to feature points on the image, where the first rotation matrix includes a rotation parameter.
  • S702. Calculate a third rotation matrix according to the feature points on the image by using a RANSAC algorithm and a first rotation matrix, where the third rotation matrix includes a rotation parameter.
  • the rotation parameter in the first rotation matrix is taken as the initial value of the parameter in the RANSAC algorithm.
  • the third rotation matrix is calculated using the RANSAC algorithm on the basis of the initial value.
  • S703. Calculate a homography matrix according to feature points on the image.
  • the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters. Specifically, the feature points on the mutually matching images are input, and the homography matrix is calculated by the principle of epipolar geometry. S704: Replace the rotation parameter in the homography matrix with the calculated rotation parameter in the third rotation matrix to obtain the transformation matrix.
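  • S701-S702 seed RANSAC with the least-squares estimate; one hedged way to realize the "initial value" is to pre-filter matches by their agreement with the least-squares rotation before refining by RANSAC (reusing ransac_rotation from the earlier sketch; the gating threshold is an assumption):

        import numpy as np

        def ransac_with_initial(pts_a, pts_b, theta0, gate=10.0, **kw):
            """Keep only matches roughly consistent with the least-squares angle
            theta0, then refine with RANSAC."""
            a = pts_a - pts_a.mean(axis=0)
            b = pts_b - pts_b.mean(axis=0)
            c, s = np.cos(theta0), np.sin(theta0)
            rot = np.array([[c, -s], [s, c]])
            resid = np.linalg.norm(a @ rot.T - b, axis=1)
            keep = resid < gate                 # matches consistent with theta0
            if keep.sum() < 2:                  # too few points: fall back
                return theta0, int(keep.sum())
            return ransac_rotation(pts_a[keep], pts_b[keep], **kw)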
  • the accuracy of the transformation matrix obtained is greatly improved, and the precision of the panorama stitching is greatly improved.
  • FIG. 8 is a schematic flowchart of a panorama stitching method according to another embodiment of the present application.
  • the method includes S801-S808.
  • this method embodiment differs from the embodiment described in FIG. 4 in that steps S802-S803 are added before the feature points of the images are calculated. For the other steps, refer to the description of the embodiment of FIG. 4.
  • S802: Determine whether the input image to be stitched is a fisheye image. Specifically, whether the input image is a fisheye image can be judged from the input parameters, such as a parameter giving the type of the input image or a parameter indicating whether fisheye distortion correction is required. If the type of the input image is a fisheye image type, or fisheye distortion correction is required, the input image is judged to be a fisheye image. Compared with an ordinary SLR camera or phone camera, a fisheye camera does not need to take as many pictures, which improves the efficiency of image acquisition. In addition, with fewer images acquired, the probability of errors in the panorama stitching process is reduced, and the precision of the panorama stitching is improved.
  • the methods of distortion correction include the spherical coordinate positioning method, the latitude-longitude mapping method, and the like.
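  • a hedged sketch of the latitude-longitude mapping mentioned above, using OpenCV's remap; the equidistant fisheye model, the 180-degree field of view, and the centered image circle are all assumptions:

        import cv2
        import numpy as np

        def fisheye_to_latlong(fish, fov_deg=180.0):
            """Remap an equidistant fisheye image to a latitude-longitude grid."""
            h, w = fish.shape[:2]
            cx, cy, rad = w / 2.0, h / 2.0, min(w, h) / 2.0
            half_fov = np.radians(fov_deg) / 2.0
            jj, ii = np.meshgrid(np.arange(w), np.arange(h))
            lon = (jj / w - 0.5) * 2 * half_fov     # longitude per output pixel
            lat = (0.5 - ii / h) * 2 * half_fov     # latitude per output pixel
            x = np.cos(lat) * np.sin(lon)           # ray direction; z is the
            y = np.sin(lat)                         # optical axis
            z = np.cos(lat) * np.cos(lon)
            theta = np.arccos(np.clip(z, -1, 1))    # angle from the optical axis
            phi = np.arctan2(y, x)
            r = theta / half_fov * rad              # equidistant model: r = f*theta
            map_x = (cx + r * np.cos(phi)).astype(np.float32)
            map_y = (cy - r * np.sin(phi)).astype(np.float32)
            return cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR)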
  • the fisheye image is processed, and fewer images are acquired with a fisheye camera, which reduces the probability of error in the panorama stitching process and improves the precision of the panorama stitching.
  • FIG. 9 is a schematic block diagram of a feature extraction apparatus according to an embodiment of the present application.
  • the apparatus 90 includes a first receiving unit 901, a generating unit 902, a detecting unit 903, a key point calculating unit 904, a direction calculating unit 905, an obtaining unit 906, a blocking unit 907, a calculation allocating unit 908, and a feature vector determining unit 909.
  • the first receiving unit 901 is configured to receive an input image.
  • the generating unit 902 is configured to generate a scale space of the image.
  • the detecting unit 903 is for detecting an extreme point in the scale space of the image.
  • the key point calculation unit 904 is configured to calculate a key point based on the extreme point.
  • the direction calculation unit 905 is used to calculate the direction parameter of each key point.
  • the obtaining unit 906 is configured to acquire a circular area centered on the position where the key point is located on the scale space where each key point is located.
  • Each key point includes the scale and location of the scale space in which the key point is located. Since the obtained key points may be located in different scale spaces, it is necessary to know the scale at which the key points are located, and in the scale space corresponding to the scale, obtain a circular area centered on the position where the key points are located.
  • the radius of the circular area is related to the resolution of the input image, and the resolution of the image represents the quality of the image. The higher the resolution of the input image, the smaller the radius of the circular area; the lower the resolution of the input image, the larger the radius of the circular area.
  • the diameter of the circular area is the length of the diagonal of the 4*4 square window in the key point's scale space used in the original SIFT algorithm.
  • the radius of the circular area may also be a fixed value or the like.
  • the calculation allocating unit 908 is configured to calculate the gradient cumulative value in the M directions allocated in each of the small sector regions, where M is the number of directions in the direction parameter and is a natural number greater than 1.
  • for example, M is 8.
  • the gradients and directions of the pixels in each small sector area are calculated, the gradient values of the pixels in the sector area are assigned to eight directions, and the cumulative gradient values in the eight directions are counted.
  • the specific formula for calculating the cumulative gradient value is the same as in the original SIFT algorithm and is not described here.
  • the feature vector determining unit 909 is configured to determine a feature vector of the key point according to the calculated N*M gradient integrated values.
  • the feature vector determining unit includes a sorting unit 101 and a normalizing unit 102.
  • the sorting unit 101 is configured to sort the calculated gradient cumulative values in descending order.
  • the calculated gradient cumulative values are sorted for better feature point (keypoint) matching.
  • the normalization unit 102 is configured to normalize the sorted gradient cumulative values to obtain the feature vectors of the key points.
  • the purpose of normalization is to eliminate the effects of illumination.
  • the order of sorting and normalization is not limited. It can be understood that the gradient cumulative values may be sorted first and then normalized, or normalized first and then sorted.
  • the 4*4 square area around the key point in the original SIFT algorithm is directly replaced by the circular area; the main direction of the key point in the 4*4 area does not need to be determined, and there is no need to rotate the original 4*4 SIFT area, yet rotation invariance is still satisfied.
  • the circular area is divided into 8 blocks, and the cumulative gradient values assigned to the 8 directions in each block are calculated.
  • the feature vector of each key point thus changes from the original 4*4*8 (8 directions) = 128 dimensions to 8*8 (8 directions) = 64 dimensions; the dimension of the feature vector of each key point is reduced by half. The speed of image feature extraction is greatly improved while maintaining accuracy.
  • the feature extraction apparatus described above can be implemented in the form of a computer program that can be run on a feature extraction device as shown in FIG. 16.
  • FIG. 11 is a schematic block diagram of a panorama stitching apparatus provided by an embodiment of the present application.
  • the apparatus 110 includes a second receiving unit 111, a matching unit 112, a transformation matrix calculation unit 113, a transformation unit 114, a projection unit 115, and a fusion unit 116.
  • the second receiving unit 111 is configured to receive the input images that need to be stitched.
  • the matching unit 112 is configured to calculate feature points and perform feature point matching according to the input image that needs to be spliced. Specifically, the feature points and the feature vectors of the feature points can be calculated by the feature extraction device as shown in FIG. 9 to FIG. 10, and the matching unit 112 performs feature point matching according to the calculated feature points and the feature vectors of the feature points.
  • the matching unit includes the first receiving unit 901, the generating unit 902, the detecting unit 903, the key point calculating unit 904, the direction calculating unit 905, the obtaining unit 906, and the blocking unit 907 of the feature extraction apparatus shown in FIG. 9.
  • the feature vector determining unit 909 includes a sorting unit 101 and a normalizing unit 102.
  • the matching unit 112 performs feature point matching according to the calculated feature points and the feature vectors of the feature points. In this way, mutually matching images can be found; it can be understood that two mutually matching images share the same feature points. Since the feature extraction apparatus shown in FIG. 9 to FIG. 10 can greatly improve the speed of image feature extraction while maintaining accuracy, using that apparatus, or the matching unit (which includes the units corresponding to the feature extraction apparatus shown in FIG. 9), to calculate the feature points and feature vectors of each image and then match feature points between images can greatly improve the speed of image matching.
  • the transformation matrix calculation unit 113 is for calculating a transformation matrix from feature points on the image.
  • the transformation matrix calculation unit includes a first rotation matrix calculation unit 121, a homography matrix calculation unit 122, and a first replacement unit 123.
  • the first rotation matrix calculation unit 121 is configured to calculate a first rotation matrix by the least squares method according to the feature points on the images, where the first rotation matrix includes a rotation parameter. Specifically, the matched feature points of the mutually matching images are input, and the rotation matrix between the two images of each matching pair is calculated by the least squares method. Then the rotation matrices between all pairs of mutually matching images are adjusted to the same standard, and the adjusted rotation matrices of the same standard are called the first rotation matrix.
  • the homography matrix calculation unit 122 is configured to calculate a homography matrix according to feature points on the image.
  • the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters.
  • the first replacement unit 123 is configured to replace the rotation parameter in the homography matrix with the calculated rotation parameter in the first rotation matrix to obtain a transformation matrix. By replacing the rotation parameters in the homography matrix with the rotation parameters in the first rotation matrix calculated according to the least squares method, the obtained transformation matrix is more accurate.
  • the transform unit 114 is configured to transform the images using the transformation matrix to obtain the transformed images. Since the camera parameters may change between shots, the captured images differ correspondingly; if the images are stitched directly without corresponding adjustment, ghosting will occur, which directly affects the final stitching result. It is therefore necessary to transform the input images to be stitched using the transformation matrix, so that the transformed images are as if taken with the same camera parameters, with no relative rotation, relative scale change, or relative translation between them.
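  • for illustration, applying the transformation matrix before projection might use cv2.warpPerspective; the padded output size is an assumption to leave room for the warped content:

        import cv2

        def apply_transform(img, hmat, pad=200):
            """Warp an image with the 3x3 transformation matrix so that matched
            content lines up before projection and blending."""
            h, w = img.shape[:2]
            return cv2.warpPerspective(img, hmat, (w + pad, h + pad))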
  • the projection unit 115 is configured to project the transformed images to complete the stitching.
  • the merging unit 116 is configured to fuse the stitched images to obtain the fused panoramic image.
  • the above embodiment calculates the feature points of the images and performs matching by the embodiment shown in FIG. 9 to FIG. 10, which increases the image matching speed and improves the efficiency of panorama stitching.
  • the rotation parameters in the homography matrix are replaced by the rotation parameters in the first rotation matrix calculated according to the least squares method, and the obtained transformation matrix is more accurate, thereby improving the precision of the panorama stitching.
  • the transformation matrix calculation unit 113 includes a second rotation matrix calculation unit 131, a homography matrix calculation unit 132, and a second replacement unit 133.
  • the second rotation matrix calculation unit 131 is configured to calculate a second rotation matrix using a RANSAC algorithm according to feature points on the image, the second rotation matrix including a rotation parameter.
  • the homography matrix calculation unit 132 is configured to calculate a homography matrix from feature points on the image.
  • the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters.
  • the second replacement unit 133 is configured to replace the rotation parameter in the homography matrix with the rotation parameter in the calculated second rotation matrix to obtain a transformation matrix.
  • the transformation matrix calculation unit 113 includes a first rotation matrix calculation unit 141, a third rotation matrix calculation unit 142, a homography matrix calculation unit 143, and a third replacement unit 144.
  • the first rotation matrix calculation unit 141 is configured to calculate a first rotation matrix by a least square method according to feature points on the image, wherein the first rotation matrix includes a rotation parameter.
  • the third rotation matrix calculation unit 142 is configured to calculate a third rotation matrix using a RANSAC algorithm and a first rotation matrix according to feature points on the image, the third rotation matrix including a rotation parameter. Specifically, the rotation parameter in the first rotation matrix is taken as the initial value of the parameter in the RANSAC algorithm.
  • the third rotation matrix is calculated using the RANSAC algorithm on the basis of the initial value.
  • the homography matrix calculation unit 143 is configured to calculate a homography matrix from feature points on the image.
  • the third replacement unit 144 is configured to replace the rotation parameter in the homography matrix with the rotation parameter in the calculated third rotation matrix to obtain a transformation matrix.
  • FIG. 15 is a schematic block diagram of a panorama stitching apparatus according to another embodiment of the present application.
  • the apparatus 150 includes a second receiving unit 151, a judging unit 152, a distortion correcting unit 153, a matching unit 154, a transform matrix calculating unit 155, a transform unit 156, a projecting unit 157, and a merging unit 158.
  • the difference between this embodiment and the embodiment of FIG. 11 is that the determining unit 152 and the distortion correcting unit 153 are added.
  • the determining unit 152 is configured to determine whether the input image to be stitched is a fisheye image. Specifically, whether the input image is a fisheye image can be judged from the input parameters, such as a parameter giving the type of the input image or a parameter indicating whether fisheye distortion correction is required. If the type of the input image is a fisheye image type, or fisheye distortion correction is required, the input image is judged to be a fisheye image. Compared with an ordinary SLR camera or phone camera, a fisheye camera does not need to take as many pictures and collects fewer images, which improves the efficiency of image acquisition, reduces the probability of errors in the panorama stitching process, and improves the precision of the panorama stitching.
  • the distortion correcting unit 153 is configured to perform distortion correction on the input fisheye image if the input image to be stitched is a fisheye image.
  • the panorama stitching apparatus described above can be implemented in the form of a computer program that can be run on a panorama stitching device as shown in FIG. 17.
  • FIG. 16 is a schematic block diagram of a feature extraction device according to an embodiment of the present application.
  • the feature extraction device 160 includes an input device 161, an output device 162, a memory 163, and a processor 164.
  • the input device 161, the output device 162, the memory 163, and the processor 164 are connected by a bus 165.
  • the input device 161 is for inputting an image that requires feature extraction.
  • the input device 161 of the embodiment of the present application may include a keyboard, a mouse, a voice input device, a touch input device, and the like.
  • the output device 162 is for outputting a feature vector or the like.
  • the output device 162 of the embodiment of the present application may include a voice output device, a display, a display screen, a touch screen, and the like.
  • the memory 163 is used to store program data that implements feature extraction.
  • the memory 163 of the embodiment of the present application may be a system memory, such as a non-volatile memory (e.g., a ROM, a flash memory, etc.).
  • the memory 163 of the embodiment of the present application may also be an external memory outside the system, such as a magnetic disk, an optical disk, a magnetic tape, or the like.
  • the processor 164 is configured to run program data stored in the memory 163 to perform the following operations:
  • the processor 164 also performs the following operations:
  • the calculated N*M gradient cumulative values are sorted in descending order; the sorted gradient cumulative values are normalized to obtain the feature vectors of the key points.
  • FIG. 17 is a schematic block diagram of a panorama stitching device according to an embodiment of the present application.
  • the panorama stitching device 170 includes an input device 171, an output device 172, a memory 173, and a processor 174.
  • the input device 171, the output device 172, the memory 173, and the processor 174 are connected by a bus 175.
  • the input device 171 is used to input images that require panorama stitching.
  • the input device 171 of the embodiment of the present application may include a keyboard, a mouse, a voice input device, a touch input device, and the like.
  • the output device 172 is for outputting a panoramic image or the like.
  • the output device 172 of the embodiment of the present application may include a display, a display screen, a touch screen, and the like.
  • the memory 173 is used to store program data for realizing panoramic stitching.
  • the memory 173 of the embodiment of the present application may be a system memory, such as a non-volatile memory (e.g., a ROM, a flash memory, etc.).
  • the memory 173 of the embodiment of the present application may also be an external memory outside the system, such as a magnetic disk, an optical disk, a magnetic tape, or the like.
  • the processor 174 is configured to run program data stored in the memory 173 to perform the following operations:
  • the processor 174 also performs the following operations:
  • the processor 174 also performs the following operations:
  • the processor 174 also performs the following operations:
  • the third rotation matrix includes a rotation parameter;
  • the homography matrix is calculated according to the feature points on the images, where the homography matrix includes a rotation parameter; and the rotation parameter in the calculated third rotation matrix is used to replace the rotation parameter in the homography matrix to obtain the transformation matrix.
  • the application further provides a computer readable storage medium storing one or more programs, which can be executed by one or more processors to implement the following steps:
  • the program data may be executed by the processor to implement the following steps:
  • the calculated N*M gradient cumulative values are sorted in descending order; the sorted gradient cumulative values are normalized to obtain the feature vectors of the key points.
  • the present application also provides another computer readable storage medium storing one or more programs, which can be executed by one or more processors to implement the following steps:
  • the transformed images are projected to complete the stitching; the stitched images are fused to obtain the fused panoramic image;
  • the calculating of feature points according to the images and the feature point matching may be implemented by acquiring the key points and the feature vectors of the key points calculated by the aforementioned computer readable storage medium, where the key points are the feature points, and feature point matching is performed according to the feature points in the images and the feature vectors of the feature points.
  • alternatively, the related program stored in the aforementioned computer readable storage medium may also be stored into this computer readable storage medium to implement calculating the feature points according to the images and performing feature point matching according to the calculated feature points.
  • the program data may be executed by the processor to implement the following steps:
  • the program data may be executed by the processor to implement the following steps:
  • the program data may be executed by the processor to implement the following steps:
  • the third rotation matrix includes a rotation parameter;
  • the homography matrix is calculated according to the feature points on the images, where the homography matrix includes a rotation parameter; and the rotation parameter in the calculated third rotation matrix is used to replace the rotation parameter in the homography matrix to obtain the transformation matrix.
  • the disclosed apparatus, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into units is only a division by logical function; actual implementations may adopt another manner of division.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

According to an embodiment, the present application provides a feature extraction method, a panorama stitching method, an associated apparatus, a device, and a computer readable storage medium. The panorama stitching method comprises: receiving input images that need to be stitched; calculating feature points from the images and matching the feature points; calculating a transformation matrix according to the feature points in the images; transforming the images using the transformation matrix to obtain transformed images; projecting the transformed images to complete the stitching; and fusing the stitched images to obtain a fused panoramic image. Calculating feature points from the images and matching the feature points comprise calculating key points and the feature vectors of the key points according to a feature extraction method, the key points being feature points, and matching the feature points according to the feature points in the images and the feature vectors of the feature points. The embodiment of the present application increases the speed of feature extraction, the speed of feature point matching, and the speed of panorama stitching.
PCT/CN2017/102871 2017-09-05 2017-09-22 Feature extraction and panorama stitching methods, associated apparatus, device and readable storage medium WO2019047284A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710790764.9 2017-09-05
CN201710790764.9A CN107665479A (zh) 2017-09-05 2017-09-05 一种特征提取方法、全景拼接方法及其装置、设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2019047284A1 (fr)

Family

ID=61098406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102871 WO2019047284A1 (fr) 2017-09-05 2017-09-22 Feature extraction and panorama stitching methods, associated apparatus, device and readable storage medium

Country Status (2)

Country Link
CN (1) CN107665479A (fr)
WO (1) WO2019047284A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305281B (zh) * 2018-02-09 2020-08-11 深圳市商汤科技有限公司 图像的校准方法、装置、存储介质、程序产品和电子设备
CN110223222B (zh) * 2018-03-02 2023-12-05 株式会社理光 图像拼接方法、图像拼接装置和计算机可读存储介质
CN109102464A (zh) * 2018-08-14 2018-12-28 四川易为智行科技有限公司 全景图像拼接方法及装置
CN109272442B (zh) * 2018-09-27 2023-03-24 百度在线网络技术(北京)有限公司 全景球面图像的处理方法、装置、设备和存储介质
CN109600584A (zh) * 2018-12-11 2019-04-09 中联重科股份有限公司 观察塔机的方法和装置、塔机及机器可读存储介质
CN111797860B (zh) * 2019-04-09 2023-09-26 Oppo广东移动通信有限公司 特征提取方法、装置、存储介质及电子设备
CN110298817A (zh) * 2019-05-20 2019-10-01 平安科技(深圳)有限公司 基于图像处理的目标物统计方法、装置、设备及存储介质
CN112712518B (zh) * 2021-01-13 2024-01-09 中国农业大学 鱼类计数方法、装置、电子设备及存储介质
CN113409372B (zh) * 2021-06-25 2023-03-24 浙江商汤科技开发有限公司 图像配准方法及相关装置、设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354254A (zh) * 2008-09-08 2009-01-28 北京航空航天大学 一种飞行器航向跟踪方法
US20120148164A1 (en) * 2010-12-08 2012-06-14 Electronics And Telecommunications Research Institute Image matching devices and image matching methods thereof
CN105608667A (zh) * 2014-11-20 2016-05-25 深圳英飞拓科技股份有限公司 一种全景拼接的方法及装置
CN106558072A (zh) * 2016-11-22 2017-04-05 重庆信科设计有限公司 一种基于改进sift特征在遥感图像上配准的方法

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111287A (zh) * 2019-04-04 2019-08-09 上海工程技术大学 一种织物多角度图像融合系统及其方法
CN110232656A (zh) * 2019-06-13 2019-09-13 上海倍肯机电科技有限公司 一种解决特征点不足的图像拼接优化方法
CN110232656B (zh) * 2019-06-13 2023-03-28 上海倍肯智能科技有限公司 一种解决特征点不足的图像拼接优化方法
CN110689485A (zh) * 2019-10-14 2020-01-14 中国空气动力研究与发展中心超高速空气动力研究所 一种应用于大型压力容器红外无损检测的sift图像拼接方法
CN110689485B (zh) * 2019-10-14 2022-11-04 中国空气动力研究与发展中心超高速空气动力研究所 一种应用于大型压力容器红外无损检测的sift图像拼接方法
CN111080525A (zh) * 2019-12-19 2020-04-28 成都海擎科技有限公司 一种基于sift特征的分布式图像和图元拼接方法
CN111223073A (zh) * 2019-12-24 2020-06-02 乐软科技(北京)有限责任公司 一种虚拟探测系统
CN111242221A (zh) * 2020-01-14 2020-06-05 西交利物浦大学 基于图匹配的图像匹配方法、系统及存储介质
CN111738920A (zh) * 2020-06-12 2020-10-02 山东大学 一种面向全景拼接加速的fpga架构及全景图像拼接方法
CN111585535A (zh) * 2020-06-22 2020-08-25 中国电子科技集团公司第二十八研究所 一种反馈式数字自动增益控制电路
CN111585535B (zh) * 2020-06-22 2022-11-08 中国电子科技集团公司第二十八研究所 一种反馈式数字自动增益控制电路
CN111899158A (zh) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 考虑几何畸变的图像拼接方法
CN111899158B (zh) * 2020-07-29 2023-08-25 北京天睿空间科技股份有限公司 考虑几何畸变的图像拼接方法
CN112037130B (zh) * 2020-08-27 2024-03-26 江苏提米智能科技有限公司 一种自适应图像拼接融合方法、装置、电子设备及存储介质
CN112037130A (zh) * 2020-08-27 2020-12-04 江苏提米智能科技有限公司 一种自适应图像拼接融合方法、装置、电子设备及存储介质
CN112163996B (zh) * 2020-09-10 2023-12-05 沈阳风驰软件股份有限公司 一种基于图像处理的平角视频融合方法
CN112163996A (zh) * 2020-09-10 2021-01-01 沈阳风驰软件股份有限公司 一种基于图像处理的平角视频融合方法
CN112102169A (zh) * 2020-09-15 2020-12-18 合肥英睿系统技术有限公司 一种红外图像拼接方法、装置和存储介质
CN112419383B (zh) * 2020-10-30 2023-07-28 中山大学 一种深度图的生成方法、装置及存储介质
CN112419383A (zh) * 2020-10-30 2021-02-26 中山大学 一种深度图的生成方法、装置及存储介质
CN112465702A (zh) * 2020-12-01 2021-03-09 中国电子科技集团公司第二十八研究所 一种多路超高清视频同步自适应拼接显示处理方法
CN112465702B (zh) * 2020-12-01 2022-09-13 中国电子科技集团公司第二十八研究所 一种多路超高清视频同步自适应拼接显示处理方法
CN112837223A (zh) * 2021-01-28 2021-05-25 杭州国芯科技股份有限公司 一种基于重叠子区域的超大图像配准拼接方法
CN112837223B (zh) * 2021-01-28 2023-08-29 杭州国芯科技股份有限公司 一种基于重叠子区域的超大图像配准拼接方法
CN112785505A (zh) * 2021-02-23 2021-05-11 深圳市来科计算机科技有限公司 一种昼夜图像拼接方法
CN112785505B (zh) * 2021-02-23 2023-01-31 深圳市来科计算机科技有限公司 一种昼夜图像拼接方法
CN113034362A (zh) * 2021-03-08 2021-06-25 桂林电子科技大学 一种高速公路隧道监控全景影像拼接方法
CN113066012A (zh) * 2021-04-23 2021-07-02 深圳壹账通智能科技有限公司 场景图像的确认方法、装置、设备及存储介质
CN113066012B (zh) * 2021-04-23 2024-04-09 深圳壹账通智能科技有限公司 场景图像的确认方法、装置、设备及存储介质
CN113256492A (zh) * 2021-05-13 2021-08-13 上海海事大学 一种全景视频拼接方法、电子设备及存储介质
CN113256492B (zh) * 2021-05-13 2023-09-12 上海海事大学 一种全景视频拼接方法、电子设备及存储介质
CN113724176A (zh) * 2021-08-23 2021-11-30 广州市城市规划勘测设计研究院 一种多摄像头动作捕捉无缝衔接方法、装置、终端及介质
CN114339157A (zh) * 2021-12-30 2022-04-12 福州大学 一种观察区域可调式多相机实时拼接系统及方法
CN114627262A (zh) * 2022-05-11 2022-06-14 武汉大势智慧科技有限公司 基于倾斜网格数据的图像生成方法及系统
CN114627262B (zh) * 2022-05-11 2022-08-05 武汉大势智慧科技有限公司 基于倾斜网格数据的图像生成方法及系统
CN115861927A (zh) * 2022-12-01 2023-03-28 中国南方电网有限责任公司超高压输电公司大理局 电力设备巡检图像的图像识别方法、装置和计算机设备
CN116452426A (zh) * 2023-06-16 2023-07-18 广汽埃安新能源汽车股份有限公司 一种全景图拼接方法及装置
CN116452426B (zh) * 2023-06-16 2023-09-05 广汽埃安新能源汽车股份有限公司 一种全景图拼接方法及装置
CN117011137A (zh) * 2023-06-28 2023-11-07 深圳市碧云祥电子有限公司 基于rgb相似度特征匹配的图像拼接方法、装置及设备
CN117011147B (zh) * 2023-10-07 2024-01-12 之江实验室 一种红外遥感影像特征检测及拼接方法及装置
CN117011147A (zh) * 2023-10-07 2023-11-07 之江实验室 一种红外遥感影像特征检测及拼接方法及装置
CN117168344A (zh) * 2023-11-03 2023-12-05 杭州鲁尔物联科技有限公司 单目全景环视形变监测方法、装置及计算机设备
CN117168344B (zh) * 2023-11-03 2024-01-26 杭州鲁尔物联科技有限公司 单目全景环视形变监测方法、装置及计算机设备

Also Published As

Publication number Publication date
CN107665479A (zh) 2018-02-06

Similar Documents

Publication Publication Date Title
WO2019047284A1 (fr) Feature extraction and panorama stitching methods, associated apparatus, device and readable storage medium
WO2019216593A1 (fr) Procédé et appareil de traitement de pose
WO2019031873A1 (fr) Assemblage continu d'images
Heide et al. High-quality computational imaging through simple lenses
EP3740936A1 (fr) Procédé et appareil de traitement de pose
US8902329B2 (en) Image processing apparatus for correcting image degradation caused by aberrations and method of controlling the same
WO2016048108A1 (fr) Appareil de traitement d'image et procédé de traitement d'image
WO2015188685A1 (fr) Procédé d'acquisition de modèle de mannequin sur base d'une caméra de profondeur un système d'adaptation virtuel de réseau
WO2019047378A1 (fr) Procédé et dispositif de reconnaissance rapide de corps célestes et télescope
EP3108653A1 (fr) Appareil et procédé de réglage de longueur focale et de détermination d'une carte de profondeur
US20180330470A1 (en) Digital Media Environment for Removal of Obstructions in a Digital Image Scene
EP4320472A1 (fr) Dispositif et procédé de mise au point automatique prédite sur un objet
WO2018223602A1 (fr) Terminal d'affichage, procédé d'amélioration de contraste de trame et support de stockage lisible par ordinateur
JP6075294B2 (ja) 画像処理システム及び画像処理方法
WO2018023925A1 (fr) Procédé et système de photographie
EP4367628A1 (fr) Procédé de traitement d'image et dispositif associé
WO2016080653A1 (fr) Procédé et appareil de traitement d'images
WO2022092451A1 (fr) Procédé de positionnement d'emplacement en intérieur utilisant un apprentissage profond
WO2019148818A1 (fr) Procédé, dispositif et système de traitement d'image, et support de stockage lisible par ordinateur
WO2016169219A1 (fr) Procédé et dispositif d'extraction de textures d'un visage humain
WO2023063679A1 (fr) Dispositif et procédé de mise au point automatique prédite sur un objet
WO2018076560A1 (fr) Procédé et appareil d'affichage d'images
WO2015135497A1 (fr) Procédé appareil et serveur de classification d'utilisateur
WO2023055033A1 (fr) Procédé et appareil pour l'amélioration de détails de texture d'images
WO2017101259A1 (fr) Procédé et dispositif d'affichage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17924357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17924357

Country of ref document: EP

Kind code of ref document: A1