WO2013164043A1 - Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view - Google Patents

Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view

Info

Publication number
WO2013164043A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
view
colors
feature
matching
Prior art date
Application number
PCT/EP2012/076229
Other languages
English (en)
Inventor
Jürgen Stauder
Hasan SHEIKH FARIDUL
Alain Tremeau
Corinne Poree
Original Assignee
Thomson Licensing
Centre National De La Recherche Scientifique
Universite Jean Monnet
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing, Centre National De La Recherche Scientifique, Universite Jean Monnet
Publication of WO2013164043A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/133 Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals

Definitions

  • The invention concerns a method and a system for determining a color mapping model able to transform colors of a first view into colors of at least one second view, and more specifically, for creating a color look-up table from geometrically corresponding features in two images.
  • In the framework of stereo and multi-view imaging, 3D video content needs to be created, processed and reproduced on a 3D-capable display screen. Processing of 3D video content allows creating or enhancing 3D information, for example by disparity estimation. Such processing also allows enhancing 2D images using 3D information, for example by interpolating views from different viewpoints. Often, 3D video content is created from at least two captured 2D video views. By relating the at least two views of the same scene in a geometrical manner, 3D information can be extracted.
  • A known issue is color differences between the at least two views of the same scene captured from different viewpoints. These color differences may result, for example, from physical light effects, from inconsistent color corrections in post-production, from uncalibrated cameras used to capture the different views, or from non-calibrated film scanners. It would be preferable if such color differences could be compensated.
  • Compensation of such color differences would be helpful for many applications. For example, when a 3D video sequence is compressed, compensation of color differences can reduce the resulting bit rate. Another example is 3D analysis for disparity estimation in 3D video sequences. When color differences are compensated, disparity estimations can be more precise. Another example is the creation of 3D assets for visual effects in post-production. When color differences in a multi-view video sequence are compensated, the texture extracted for 3D objects will have better color coherence.
  • Color mapping is generally decomposed into three steps, as shown in figure 1.
  • Color mapping generally starts with finding the geometric relationship between the views using feature matching. From the results of feature matching, the relationship of colors between the different views is established; these are called color correspondences. Color correspondences indicate which colors from one view correspond to which colors from another view. Then, an appropriate color mapping model is chosen depending on the knowledge of how the colors change between the views. Finally, the color mapping model is fitted to the color correspondences by an estimation procedure.
  • Geometrical correspondences can be automatically extracted from images using known methods. For example, a well-known method for the detection of so-called feature correspondences is presented in the article entitled "Distinctive image features from scale invariant keypoints", authored by D. G. Lowe et al., and published in 2004 in the Int. Journal of Computer Vision, Vol. 60(2), pp. 91-110. This method, called SIFT (Scale Invariant Feature Transform), detects corresponding feature points in the different input images by using a descriptor based on the Difference of Gaussians ("DoG").
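  • As an illustration of this feature matching step, a minimal sketch using OpenCV's SIFT implementation is given below; it is not the patent's implementation, and choices such as the ratio-test threshold are assumptions.

```python
# Illustrative sketch of SIFT feature matching between two views
# (assumes OpenCV >= 4.4 with SIFT available).
import cv2

ref = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_test, des_test = sift.detectAndCompute(test, None)

# Match descriptors and keep matches passing Lowe's ratio test
# (0.75 is a commonly used threshold, chosen here as an assumption).
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(des_ref, des_test, k=2)
matches = [m for m, n in candidates if m.distance < 0.75 * n.distance]

# Each match links a feature location in the reference view to one in the
# test view; these locations seed the patch-based processing described below.
pairs = [(kp_ref[m.queryIdx].pt, kp_test[m.trainIdx].pt) for m in matches]
```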
  • The second step of color mapping, the computation of color correspondences, usually extracts corresponding colors by utilizing the characteristics of the matched features generated by feature matching.
  • The computation of color correspondences generally has the following requirements:
  • The color mapping models that are typically used can be classified into parametric and non-parametric models. Details can be found in the article entitled "Performance evaluation of color correction approaches for automatic multi-view image and video stitching", authored by W. Xu and J. Mulligan, published in 2010 in Proc. CVPR'10, pages 263-270.
  • A parametric model is a model that can be described using a finite number of parameters.
  • Non-parametric means that the model structure is not specified a priori but is instead determined from the data.
  • Non-parametric does not mean that such models completely lack parameters, but that the number and nature of the parameters are flexible and not fixed in advance.
  • In some color transfer methods, geometrical correspondences are not used. There is a case where precise geometrical correspondences are not meaningful, because the input images do not show the same semantic scene but are just semantically close. For example, the colors of a first mountain shown in a first input image should be transformed by such a color transfer into the colors of a second mountain, different from the first mountain, shown in a second input image. In another case, the two input images show the same semantic scene, but geometrical correspondences are nevertheless not available. There are several reasons for that. First, for reasons of workflow order or computational time, geometrical correspondences may not be available at the time the color transfer is processed. A second reason may be that the number of reliable geometrical correspondences is not sufficient for color transfer, for example in low-textured images.
  • The document CN101673412 deals with color clustering and color cluster correspondences between two images.
  • The document EP1107580 discloses a color mapping method which takes into account the spatial neighborhood of target pixels, i.e. the context within a local area around these pixels.
  • The invention aims at optimizing sparse color correspondences extracted from sparse features by analyzing the spatial neighborhood of those features.
  • The invention notably proposes first to select, by clustering, a few representative colors in each spatial neighborhood extracted around feature points, and then to match these colors by optimizing their color statistics.
  • The invention notably proposes a color mapping method that utilizes the spatial neighborhood of sparse features.
  • The subject of the invention is a method for generating, in an image processor, a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the method comprising the following steps: identify features in all these views; perform feature matching between these features; select a spatial neighborhood around each matched feature; cluster the colors of these neighborhoods into color clusters; group corresponding color clusters; and, for each generated group, match colors between corresponding color clusters of this group, in order to generate a list of correspondences between colors of the first view and colors of the at least one second view.
  • Each list of color correspondences is specific to a zone of these views comprising a given feature with its spatial neighborhood in these different views. This means that a group of colors of the reference view may have, in the test view, different corresponding colors, depending on the zone of the reference view to which the colors of this group belong.
  • This list of color correspondences can then be used to determine a color mapping model for the transformation of the colors of the first view into colors of the at least one second view.
  • The groups of corresponding color clusters that are used to perform the matching of colors are preferably selected, for instance according to a criterion based on the size of an area which is common to the color clusters of a group, for instance at least equal to 50% of the size of the smallest color cluster among the color clusters of the group.
  • The colors that are selected for matching in corresponding color clusters are preferably selected among colors satisfying a criterion based on color cluster metrics, such as cluster sizes or areas, cluster means and cluster variances.
  • A subject of the invention is also an image processor for generating a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the processor comprising notably:
  • - feature matching means configured to perform feature matching between features identified in these different views, such that at least one feature identified in the first view matches with a feature identified in the at least one second view, and
  • FIG. 1 illustrates a general scheme of a classical color mapping method according to the prior art, comprising three basic steps ;
  • FIG. 2 illustrates two views of the same scene, i.e. a reference view and a test view, to which the method according to the invention is embodied;
  • FIG. 4 is a magnification of the neighbourhoods of corresponding features shown in figure 3;
  • FIG. 5 shows the result of the color clustering of the neighbourhoods of figure 4 according to the fourth step of the main embodiment of the invention ;
  • FIG. 8 is a graphical representation of a list of color correspondences between colors of the red channel of the reference view and colors of the red channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
  • FIG. 9 is a graphical representation of a list of correspondences between colors of the green channel of the reference view and colors of the green channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
  • FIG. 10 is a graphical representation of a list of correspondences between colors of the blue channel of the reference view and colors of the blue channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
  • FIG. 11 is a diagram showing the different steps of the method according to the main embodiment of the invention.
  • Two views of the same scene are provided, as illustrated in figure 2: a reference view and a test view.
  • "Reference view" means that, after color mapping, the colors of both views are expected to be as close as possible to the colors of the reference view.
  • The view whose colors will be mapped through the list of color correspondences generated according to the invention is called the test view.
  • In figure 2, the left column is related to the reference view whereas the right column is related to the test view.
  • A feature matching operation is then performed between the features identified in these different views, using the scale- and rotation-invariant feature matching algorithm SIFT, as described in D. G. Lowe's article already mentioned above.
  • Corresponding patches along with their deformation parameters can then be matched.
  • the word "patch" corresponds to a spatial zone of a view comprising an identified feature and a geometrical neighborhood around this feature.
  • a patch coming from the reference image is called a reference patch whereas a patch coming from the test image is called test patch.
  • An example of such feature matching after applying the deformation parameters is shown in figures 2 and 3.
  • The SIFT matching shown in figure 3 is not precisely aligned, due to occlusion and to SIFT's deformation parameter estimation. As SIFT points are not precisely aligned, matching errors may occur. As an example, in figure 3, the SIFT estimation error due to location, angle and occlusion is highlighted with ellipses. In other words, feature matching usually generates errors that extend over more than one pixel. Note that the application of deformation parameters such as scale and rotation gives both patches the same size and nearly similar content. This means that the patches are nearly registered, i.e. nearly geometrically aligned.
  • In a third step of the invention, a spatial neighborhood is selected around each identified feature in the reference view and in the test view.
  • In this embodiment, a rectangular neighborhood of 15x15 pixels is selected around the feature location.
  • Any other block of NxM pixels surrounding the identified feature can be selected, where N may or may not be equal to M.
  • N and M are preferably both less than 100 for image formats such as HDTV, in order not to include parts of the scene having significantly different object motion or object shape.
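  • For illustration, a minimal sketch of extracting such a fixed-size neighborhood around a matched feature location is given below; the helper name and the border handling are assumptions of this sketch, not details of the patent.

```python
# Hypothetical helper (not from the patent): extract an N x M neighborhood
# centered on a feature location, clipped at the image borders.
import numpy as np

def extract_patch(image: np.ndarray, center: tuple, n: int = 15, m: int = 15) -> np.ndarray:
    x, y = int(round(center[0])), int(round(center[1]))
    h, w = image.shape[:2]
    top = max(0, y - n // 2)
    left = max(0, x - m // 2)
    return image[top:min(h, top + n), left:min(w, left + m)]
```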
  • Figure 3 shows 15x15 neighborhoods as black rectangles that are magnified in figure 4.
  • In figure 4, the neighborhood around a SIFT matching feature is magnified by a factor of 2 for visualization purposes.
  • In a fourth step of the invention, color clustering of the identified spatial neighborhoods is performed in these different views, in order to generate color clusters in these spatial neighborhoods.
  • Figure 5 shows the result of this color clustering.
  • For this clustering, a mean shift algorithm is used, as described in the article entitled "Mean shift: a robust approach toward feature space analysis", authored by D. Comaniciu and P. Meer, published in 2002 in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24(5), pp. 603-619.
  • Here, mean shift has split the studied neighborhood into two color clusters for both the reference and the test patches. Note that the total number of color clusters in the reference patch and the total number of color clusters in the test patch may not be the same. Any other color clustering method can be used instead of the method described in D. Comaniciu's article.
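  • A minimal sketch of such per-patch color clustering is given below, using scikit-learn's MeanShift as a stand-in for the method of the Comaniciu article; the bandwidth value is an assumption.

```python
# Illustrative sketch (not the patent's implementation): cluster the colors
# of a patch with mean shift, giving each pixel a cluster label.
import numpy as np
from sklearn.cluster import MeanShift

def cluster_patch_colors(patch: np.ndarray, bandwidth: float = 20.0) -> np.ndarray:
    # Flatten the H x W x 3 patch into a list of RGB samples.
    samples = patch.reshape(-1, 3).astype(float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(samples)
    # Reshape the labels back to the spatial layout of the patch.
    return labels.reshape(patch.shape[:2])
```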
  • In a fifth step, for each color cluster generated in the neighborhood of a feature identified in the reference view, a corresponding color cluster is searched for among the color clusters generated in the neighborhood of the matching feature found in the test view. For searching such correspondences between color clusters generated in the neighborhoods of corresponding features extracted in the two views, we look for the largest areas which are geometrically common to a color cluster generated in the neighborhood of a feature identified in the reference view and to a color cluster generated in the neighborhood of the corresponding feature identified in the test view, as described below.
  • Figures 6 and 7 show colors selected respectively in a first pair of corresponding color clusters and in a second pair of corresponding color clusters, these clusters being shown, among others, in figure 5.
  • The computation of color cluster correspondences can, for instance, be performed as follows.
  • First, the color clusters in the reference patch and the color clusters in the test patch are labeled.
  • Then, all positions, i.e. all pixels, that are common to a color cluster of the reference patch and to a color cluster of the test patch are extracted. For instance, for each color cluster of the reference patch, an alpha mask is built in which alpha is equal to 1 for the pixels belonging to this color cluster and equal to 0 outside the area of these pixels. Then, this alpha mask is applied to the corresponding test patch (see the corresponding patches in figure 4) by performing an "AND" operation between this alpha mask and the test patch. The application of the alpha mask selects an area of pixels which belong both to the color cluster of the reference patch and to the corresponding color cluster of the test patch.
  • This area of pixels is called the "overlapping area". Then, an extraction criterion is applied to these overlapping areas.
  • One extraction criterion that can be used is for example the size of the overlapping area between the color cluster from the reference patch and its counterpart from the test patch.
  • Other extraction criteria can be used, for example based on the shape of the clusters.
  • The common area between the color cluster of the reference patch and its corresponding cluster in the test patch is defined as the part of the overlapping area having the most frequent label.
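  • A minimal sketch of this overlap computation, under the description above, is given below; the function name and return convention are assumptions.

```python
# Illustrative sketch: for one reference cluster, find the test-patch cluster
# with the largest overlapping area under the reference cluster's alpha mask.
import numpy as np

def best_corresponding_cluster(ref_labels: np.ndarray,
                               test_labels: np.ndarray,
                               ref_cluster: int):
    # Alpha mask: 1 on the pixels of the reference cluster, 0 elsewhere.
    alpha = (ref_labels == ref_cluster)
    # "AND" the mask with the test patch: keep the test labels under the mask.
    overlapped = test_labels[alpha]
    if overlapped.size == 0:
        return None, 0
    # The common area is the part of the overlap carrying the most frequent
    # test-cluster label.
    values, counts = np.unique(overlapped, return_counts=True)
    best = int(np.argmax(counts))
    return int(values[best]), int(counts[best])
```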
  • The sixth step of the invention is the matching of colors between corresponding color clusters.
  • The pairs of corresponding color clusters that are used to perform the matching of colors are preferably selected, for instance according to a criterion based on the size of the area which is common to the color clusters of a pair, this common area being determined, for instance, as described above.
  • A good candidate pair of corresponding color clusters to be used for the matching of colors is a pair of color clusters having a common area, as computed above, at least equal to 50% of the size of the smaller cluster of the pair. If two corresponding clusters do not satisfy this condition, the pair is classified as a "bad candidate" in terms of color cluster correspondence and will not be used for the matching of colors. On the other hand, if a pair of corresponding color clusters satisfies this condition, the pair is classified as a "good candidate" and will actually be used for the matching of colors as described below.
  • Any method using color cluster metrics can be used to obtain this classification into "good" and "bad" candidates.
  • For instance, cluster size, cluster shape, common cluster area, cluster color mean and cluster color variance can be used; a sketch of the 50% criterion follows.
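  • A minimal sketch of the 50% selection criterion described above; the function name is an assumption.

```python
# Illustrative sketch: classify a pair of corresponding clusters as a good
# or bad candidate from its common area and the sizes of the two clusters.
def is_good_candidate(common_area: int, ref_size: int, test_size: int) -> bool:
    # The pair is kept when the common area covers at least 50% of the
    # smaller of the two clusters.
    return common_area >= 0.5 * min(ref_size, test_size)
```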
  • The color channels are generally R (red), G (green) and B (blue).
  • The list of correspondences between colors of the reference view and colors of the test view is used, for example as follows, to determine a color mapping model for the transformation of the colors of the reference view into colors of the test view.
  • The global color mapping model that we choose is based on a non-linear function defined by three parameters, as shown in equation 2.
  • In equation 2, c_ref denotes the color coordinates of a color in the reference view and c_test the color coordinates of the corresponding color in the test view. Parameter G defines the gain, parameter γ defines the gamma, and parameter b defines the offset of the non-linear function, which takes the usual form c_ref = (G · c_test + b)^γ. This function is usually called GOG (Gamma, Offset and Gain). Any other color mapping model can be chosen to implement the invention.
  • The robust estimation of the model parameters is performed in two steps inspired by the ROUT method of robust nonlinear regression with outlier removal (see the article by H. Motulsky and R. Brown cited below).
  • In a first step, the model parameters are computed from all color correspondences of the red channel (figure 8), of the green channel (figure 9) and of the blue channel (figure 10), respectively.
  • Then, the model parameters are re-estimated in a refinement step from inliers only, shown as light-grey inner dots in these figures.
  • The model parameters γ, b, G are estimated from the inliers using the least-squares method.
  • In figures 8 to 10, the black central line shows the estimated GOG (γ, b, G) curve.
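  • A minimal sketch of fitting the GOG model per channel with a simple two-pass inlier refinement is given below; it is in the spirit of, but not identical to, the ROUT-inspired procedure, and the GOG form, the color normalization and the outlier threshold are assumptions.

```python
# Illustrative sketch (assumptions: GOG form c_ref = (G*c_test + b)**gamma,
# colors normalized to [0, 1], a residual-based inlier pass instead of the
# full ROUT procedure).
import numpy as np
from scipy.optimize import curve_fit

def gog(c_test, gamma, b, gain):
    # Clip to keep the base positive before applying the gamma exponent.
    return np.clip(gain * c_test + b, 1e-6, None) ** gamma

def fit_gog_channel(c_test, c_ref):
    # First pass: least-squares fit on all color correspondences.
    params, _ = curve_fit(gog, c_test, c_ref, p0=(1.0, 0.0, 1.0), maxfev=10000)
    # Keep inliers whose residual is below 2.5 standard deviations
    # (an assumed threshold for this sketch).
    residuals = c_ref - gog(c_test, *params)
    inliers = np.abs(residuals) < 2.5 * residuals.std()
    # Refinement pass: re-estimate gamma, b and G from the inliers only.
    params, _ = curve_fit(gog, c_test[inliers], c_ref[inliers], p0=params, maxfev=10000)
    return params  # (gamma, b, gain)
```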
  • For the evaluation, we select the first view from one exposure and the second view from another exposure under the same illumination. For example, we used two images, both under illumination 3, but a first view (view0) from exposure 1 (500 ms) and a second view (view6) from exposure 2 (2000 ms).
  • Each color mapping method tries to correct the test view, I_test, and produces as output a "corrected test" view, I_corrected_test.
  • The quality of the color mapping method is evaluated by comparing I_corrected_test with the ground truth, I_true, as described below.
  • For this comparison, an evaluation framework is needed. This evaluation framework computes the remaining color differences between the color-mapped view, I_corrected_test, and the ground truth, I_true. If a color mapping method works well, these remaining color differences should be as low as possible. In other words, the fewer the color differences that remain, the better the color mapping method.
  • In the corresponding error measure, N is the total number of pixels of the image.
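  • A minimal sketch of such a per-pixel evaluation measure is given below; the use of a mean absolute difference here is an assumption, and the actual measure (e.g. a CIEDE2000-based color difference) may differ.

```python
# Illustrative sketch: mean remaining color difference over the N pixels of
# the image, between the corrected test view and the ground truth.
import numpy as np

def mean_color_difference(corrected_test: np.ndarray, true_view: np.ndarray) -> float:
    diff = np.abs(corrected_test.astype(float) - true_view.astype(float))
    # Average over all N pixels (and the color channels).
    return float(diff.mean())
```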
  • Method n°4 refers to the color transfer method disclosed by F. Pitie, A.C. Kokaram and R. Dahyot in "Automated colour grading using colour distribution transfer", published in 2007 in Computer Vision and Image Understanding.
  • Table 2 shows these average results, which constitute the overall quality comparison of the four methods.
  • In conclusion, a sparse-feature-matching-based color correspondence method has been proposed.
  • The invention allows the optimization of the neighborhood of sparse feature matches.
  • The invention notably proposes the clustering of the neighborhood, the computation of color cluster correspondences, and the analysis of the local color statistics of color cluster correspondences to obtain color correspondences. From our experimental results, we find that the proposed color correspondence method according to the invention can handle both spatial imprecision and occlusion. Moreover, since this method captures colors from the neighborhood of the feature matches, we find it sufficient to generalize the color mapping model to the rest of the colors, for which direct correspondences are not known.
  • The invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
  • The invention may notably be implemented as a combination of hardware and software.
  • The software may be implemented as an application program tangibly embodied on a program storage unit.
  • The application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • The machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces.
  • The computer platform may also include an operating system and microinstruction code.
  • The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
  • Various other peripheral units may be connected to the computer platform, such as an additional data storage unit and a printing unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The object of the present invention is to optimize sparse color correspondences extracted from sparse features by using the spatial neighborhood of the features. The invention notably proposes first to select, by clustering, a small number of representative colors in each spatial neighborhood extracted around feature points, and then to match these colors, notably by optimizing their color statistics.
PCT/EP2012/076229 2012-05-03 2012-12-19 Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view WO2013164043A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP12305499 2012-05-03
EP12305499.1 2012-05-03

Publications (1)

Publication Number Publication Date
WO2013164043A1 (fr) 2013-11-07

Family

ID=47504957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/076229 WO2013164043A1 (fr) 2012-05-03 2012-12-19 Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view

Country Status (1)

Country Link
WO (1) WO2013164043A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015086530A1 (fr) * 2013-12-10 2015-06-18 Thomson Licensing Method for compensating for color differences between different images of a same scene
EP3001668A1 (fr) * 2014-09-24 2016-03-30 Thomson Licensing Method for compensating for color differences between different images of a same scene
CN106650755A (zh) * 2016-12-26 2017-05-10 Harbin Engineering University Feature extraction method based on color features
US10262441B2 (en) 2015-02-18 2019-04-16 Qualcomm Incorporated Using features at multiple scales for color transfer in augmented reality
CN109919899A (zh) * 2017-12-13 2019-06-21 The Hong Kong Research Institute of Textiles and Apparel Limited Image quality assessment method based on multispectral imaging


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1107580A2 (fr) 1999-12-08 2001-06-13 Xerox Corporation Gamut conversion
CN101673412A (zh) 2009-09-29 2010-03-17 Zhejiang University of Technology Light template matching method for a structured-light three-dimensional vision system

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
D. COMANICIU; P. MEER: "Mean shift: a robust approach toward feature space analysis", PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE TRANSACTIONS, vol. 24, no. 5, 2002, pages 603 - 619, XP002323848, DOI: doi:10.1109/34.1000236
D. G. LOWE ET AL.: "Distinctive image features from scale invariant keypoints", INT. JOURNAL OF COMPUTER VISION, vol. 60, no. 2, 2004, pages 91 - 110, XP019216426, DOI: doi:10.1023/B:VISI.0000029664.99615.94
D. SCHARSTEIN; C. PAL: "Learning conditional random fields for stereo", IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2007. CVPR'07, 2007, pages 1 - 8, XP031114448
E. REINHARD; M. ASHIKHMIN; B. GOOCH; P. SHIRLEY: "Color Transfer between Images", IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 21, no. 5, 2001, pages 34 - 41, XP002728545
F. PITIE; A.C. KOKARAM; R. DAHYOT: "Automated colour grading using colour distribution transfer", COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 107, no. 1-2, 2007, pages 123 - 137, XP022103085, DOI: doi:10.1016/j.cviu.2006.11.011
H. MOTULSKY; R. BROWN: "Detecting outliers when fitting data with nonlinear regression-a new method based on robust nonlinear regression and the false discovery rate", BMC BIOINFORMATICS, vol. 7, no. 1, 2006, pages 123, XP021013626, DOI: doi:10.1186/1471-2105-7-123
HASAN S F ET AL: "Robust Color Correction for Stereo", VISUAL MEDIA PRODUCTION (CVMP), 2011 CONFERENCE FOR, IEEE, 16 November 2011 (2011-11-16), pages 101 - 108, XP032074523, ISBN: 978-1-4673-0117-6, DOI: 10.1109/CVMP.2011.18 *
KENJI YAMAMOTO ET AL: "Color correction for multi-view video using energy minimization of view networks", INTERNATIONAL JOURNAL OF AUTOMATION AND COMPUTING, vol. 5, no. 3, 1 July 2008 (2008-07-01), pages 234 - 245, XP055055313, ISSN: 1476-8186, DOI: 10.1007/s11633-008-0234-5 *
M.R. LUO; G. CUI; B. RIGG: "The development of the cie 2000 colour-difference formula: Ciede2000", COLOR RESEARCH & APPLICATION, vol. 26, no. 5, 2001, pages 340 - 350, XP009006847, DOI: doi:10.1002/col.1049
MEHRDAD PANAHPOUR TEHRANI; AKIO ISHIKAWA; SHIGEYUKI SAKAZAWA; ATSUSHI KOIKE: "Iterative colour correction of multicamera systems using corresponding feature points", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 21, no. 5-6, 2010, pages 377 - 391, XP027067816, DOI: doi:10.1016/j.jvcir.2010.03.007
PANAHPOUR TEHRANI M ET AL: "Iterative colour correction of multicamera systems using corresponding feature points", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, ACADEMIC PRESS, INC, US, vol. 21, no. 5-6, 1 July 2010 (2010-07-01), pages 377 - 391, XP027067816, ISSN: 1047-3203, [retrieved on 20100601], DOI: 10.1016/J.JVCIR.2010.03.007 *
QI WANG ET AL: "Robust color correction in stereo vision", IMAGE PROCESSING (ICIP), 2011 18TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 11 September 2011 (2011-09-11), pages 965 - 968, XP032080658, ISBN: 978-1-4577-1304-0, DOI: 10.1109/ICIP.2011.6116722 *
W. XU; J. MULLIGAN: "Performance evaluation of color correction approaches for automatic multi-view image and video stitching", PROC. CVPR'10, 2010, pages 263 - 270, XP031726027
YOAV HACOHEN; ELI SHECHTMAN; DAN B GOLDMAN; DANI LISCHINSKI: "Non-rigid dense correspondence with applications for image enhancement", ACM TRANSACTIONS ON GRAPHICS, 2011

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015086530A1 (fr) * 2013-12-10 2015-06-18 Thomson Licensing Method for compensating for color differences between different images of a same scene
US20160323563A1 (en) * 2013-12-10 2016-11-03 Thomson Licensing Method for compensating for color differences between different images of a same scene
EP3001668A1 (fr) * 2014-09-24 2016-03-30 Thomson Licensing Method for compensating for color differences between different images of a same scene
US10262441B2 (en) 2015-02-18 2019-04-16 Qualcomm Incorporated Using features at multiple scales for color transfer in augmented reality
CN106650755A (zh) * 2016-12-26 2017-05-10 Harbin Engineering University Feature extraction method based on color features
CN109919899A (zh) * 2017-12-13 2019-06-21 The Hong Kong Research Institute of Textiles and Apparel Limited Image quality assessment method based on multispectral imaging
EP3724850A4 (fr) * 2017-12-13 2021-09-01 The Hong Kong Research Institute of Textiles and Apparel Limited Évaluation de qualité de couleur basée sur l'imagerie multispectrale

Similar Documents

Publication Publication Date Title
JP6438403B2 (ja) Generation of depth maps from monoscopic images based on combined depth cues
Han et al. Visible and infrared image registration in man-made environments employing hybrid visual features
Kordelas et al. Enhanced disparity estimation in stereo images
CN111435438A Graphic fiducial marker recognition suitable for augmented reality, virtual reality and robots
Ravichandran et al. Video registration using dynamic textures
US20160165216A1 (en) Disparity search range determination for images from an image sensor array
US20130002810A1 (en) Outlier detection for colour mapping
WO2015086537A1 (fr) Method for building a set of color correspondences from a set of feature correspondences in a set of corresponding images
WO2013164043A1 (fr) Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view
US9082019B2 (en) Method of establishing adjustable-block background model for detecting real-time image object
de Oliveira et al. A hierarchical superpixel-based approach for DIBR view synthesis
Recky et al. Façade segmentation in a multi-view scenario
CN108491857B A multi-camera target matching method with overlapping fields of view
Minematsu et al. Adaptive background model registration for moving cameras
Xiang et al. Exemplar-based depth inpainting with arbitrary-shape patches and cross-modal matching
CN110120012B Video stitching method based on synchronized key-frame extraction from binocular cameras
EP2698764A1 (fr) Method for sampling colors of images of a video sequence and application to color clustering
CN109919164B Method and device for recognizing user interface objects
KR102665603B1 Hardware disparity evaluation for stereo matching
Mukherjee et al. A hybrid algorithm for disparity calculation from sparse disparity estimates based on stereo vision
CN110472085B Three-dimensional image search method, system, computer device and storage medium
Liu et al. Feature matching for texture-less endoscopy images via superpixel vector field consistency
Wang et al. A robust algorithm for color correction between two stereo images
Ran et al. Intrinsic color correction for stereo matching
Hasan et al. Optimization of sparse color correspondences for color mapping

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12810252

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12810252

Country of ref document: EP

Kind code of ref document: A1