WO2020043280A1 - Image fidelity measurement - Google Patents

Image fidelity measurement

Info

Publication number
WO2020043280A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
map
pixel
determining
pixel values
Application number
PCT/EP2018/073239
Other languages
English (en)
Inventor
Maciej PEDZISZ
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2018/073239
Publication of WO2020043280A1

Classifications

    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
    • H04N17/004: Diagnosis, testing or measuring for digital television systems
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N19/172: Adaptive coding characterised by the coding unit, the unit being a picture, frame or field
    • H04N19/194: Adaptation method, tool or type for adaptive coding being iterative or recursive, involving only two passes
    • G06T2207/30168: Image quality inspection

Definitions

  • the invention generally relates to methods, devices, encoder, computer program and carrier for determining image fidelity measures.
  • image and video processing technologies use some form of quality analysis to validate the processing results. This may be used for pre-processing stages, e.g., noise removal, color space conversions and de-interlacing; image and video compression, e.g., in-loop rate-distortion optimization, profile creation and tuning; and/or as an external procedure to validate and/or compare the results of some image and video processors, e.g., encoder comparisons.
  • The best image quality analysis currently available uses subjective scores as the evaluation criteria. These scores are assigned by human viewers and are mapped to a mean opinion score (MOS) or differential mean opinion score (DMOS) range. They are optimal in the sense that both image and video statistics and human visual system (HVS) properties are considered and the quality is judged by humans, i.e., the target receiver for which the images and videos were created.
  • Quality evaluation methods range from full-reference methods, in which both reference images and distorted images are available, to non-reference methods, also referred to as single-ended methods, in which only distorted images are available for the quality evaluation.
  • full-reference methods are the most reliable as all information is available in the quality evaluation, while the non-reference methods are the least reliable but do not require any additional information except for the distorted images.
  • the full-reference methods typically use the intensity channel or all three color channels to compare the reference images and distorted images and judge the distortion in a quantitative manner.
  • the most common methods in this category rely on the L2-norm of the difference between reference and distorted images and include mean square error (MSE) and peak signal to noise ratio (PSNR).
  • Other methods perform quality evaluation based on more abstract image properties, such as image structure as used in structural similarity (SSIM) and its derivatives. These image fidelity metrics can also be improved through knowledge of where humans tend to fixate in images [1].
  • An aspect of the embodiments relates to a method of determining an image fidelity measure for an image.
  • the method comprises determining a first map representing, for each pixel in at least a portion of the image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image.
  • the method also comprises determining a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values.
  • the method further comprises determining the image fidelity measure based on the first map and the second map.
  • Another aspect of the embodiments relates to a method of encoding an original image.
  • the method comprises encoding at least a portion of the original image according to multiple coding modes to obtain multiple encoded candidate image portions and decoding the multiple encoded candidate image portions to obtain multiple decoded candidate image portions.
  • the method also comprises determining, for each of the multiple decoded candidate image portions, a respective image fidelity measure according to above using the original image as reference image.
  • the method further comprises selecting, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on the respective image fidelity measures.
  • a further aspect of the embodiments relates to a method of selecting an encoder profile for an encoder.
  • the method comprises encoding at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images and decoding the multiple encoded images to obtain multiple decoded images.
  • the method also comprises determining, for each of the multiple decoded images, a respective image fidelity measure according to above using the at least one original image as reference image.
  • the method further comprises selecting, among the multiple encoder profiles, an encoder profile for the encoder based at least partly on the respective image fidelity measures.
  • An aspect of the embodiments relates to a device for determining an image fidelity measure for an image.
  • the device is configured to determine a first map representing, for each pixel in at least a portion of the image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image.
  • the device is also configured to determine a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values.
  • the device is further configured to determine the image fidelity measure based on the first map and the second map.
  • an encoder comprising a device for determining an image fidelity measure for an image according to above.
  • the encoder is configured to encode at least a portion of the original image according to multiple coding modes to obtain multiple encoded candidate image portions and decode the multiple encoded candidate image portions to obtain multiple decoded candidate image portions.
  • the encoder is also configured to select, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on respective image fidelity measures determined by the device for each of the multiple decoded candidate image portions using the original image as reference image.
  • a further aspect of the embodiments relates to a device for selecting an encoder profile for an encoder.
  • the device comprises a device for determining an image fidelity measure for an image according to above.
  • the device for selecting an encoder profile is configured to encode at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images and decode the multiple encoded images to obtain multiple decoded images.
  • the device is also configured to select, among the multiple encoder profiles, an encoder profile for the encoder based at least partly on respective image fidelity measures determined by the device for determining an image fidelity measure using the at least one original image as reference image.
  • a related aspect of the embodiments defines a network device comprising a device according to above and/or an encoder according to above.
  • Yet another aspect of the embodiments relates to a computer program comprising instructions, which when executed by at least one processor, cause the at least one processor to determine a first map representing, for each pixel in at least a portion of an image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image.
  • the at least one processor is also caused to determine a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values.
  • the at least one processor is further caused to determine an image fidelity measure for the image based on the first map and the second map.
  • a related aspect of the embodiments defines a carrier comprising a computer program according to above.
  • the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
  • the image fidelity measure has a good correlation with the human visual system as assessed using both linear and rank correlations with subjective scores. It is fast in terms of execution speed.
  • the image fidelity measure statistically outperforms many of the prior art metrics and has smaller mean absolute errors and root mean square errors on MOS/DMOS scores than such prior art metrics.
  • FIG. 1 schematically illustrates processing of original or reference images resulting in distorted images
  • Fig. 2 is a flow chart illustrating a method of determining an image fidelity measure for an image according to an embodiment
  • Fig. 3 is a flow chart illustrating additional, optional steps of the method in Fig. 2 according to an embodiment
  • Fig. 4 is a flow chart illustrating an embodiment of determining an image fidelity measure in Fig. 2;
  • Fig. 5 is a flow chart illustrating an additional, optional step of the method in Fig. 2 according to an embodiment
  • Fig. 6 is a flow chart illustrating a method of encoding an original image according to an embodiment
  • Fig. 7 is a flow chart illustrating an additional, optional step of the method in Fig. 6 according to an embodiment
  • Fig. 8 is a flow chart illustrating a method of selecting an encoder profile for encoding a video sequence of images
  • Fig. 9 schematically illustrates an embodiment of determining an image fidelity measure for an image
  • Fig. 10 schematically illustrates an embodiment of determining an image fidelity measure for an image
  • Fig. 11 is a block diagram of a device for determining an image fidelity measure for an image according to an embodiment
  • Fig. 12 is a block diagram of a device for determining an image fidelity measure for an image according to another embodiment
  • Fig. 13 is a block diagram of a device for determining an image fidelity measure for an image according to a further embodiment
  • Fig. 14 schematically illustrates a computer program based implementation of an embodiment
  • Fig. 15 is a block diagram of a device for determining an image fidelity measure for an image according to yet another embodiment
  • Fig. 16 is a block diagram of an encoder according to an embodiment
  • Fig. 17 is a block diagram of a device for selecting an encoder profile for encoding a video stream of images according to an embodiment
  • Fig. 18 schematically illustrates a distributed implementation among network devices
  • Fig. 19 is a schematic illustration of an example of a wireless communication system with one or more cloud-based network devices according to an embodiment
  • Fig. 20 is a schematic diagram illustrating an example of a wireless network in accordance with some embodiments
  • Fig. 21 is a schematic diagram illustrating an example of an embodiment of a wireless device in accordance with some embodiments.
  • Fig. 22 is a schematic block diagram illustrating an example of a virtualization environment in which functions implemented by some embodiments may be virtualized
  • Fig. 23 is a schematic diagram illustrating an example of a telecommunication network connected via an intermediate network to a host computer in accordance with some embodiments;
  • Fig. 24 is a schematic diagram illustrating an example of a host computer communicating via a base station with a user equipment over a partially wireless connection in accordance with some embodiments;
  • Fig. 25 is a flowchart illustrating a method implemented in a communication system in accordance with an embodiment
  • Fig. 26 is a flowchart illustrating a method implemented in a communication system in accordance with an embodiment
  • Fig. 27 is a flowchart illustrating a method implemented in a communication system in accordance with an embodiment
  • Fig. 28 is a flowchart illustrating a method implemented in a communication system in accordance with an embodiment
  • Fig. 29 is a diagram comparing averaged Pearson’s linear correlation coefficient (PLCC) across five image quality assessment (IQA) databases: Tampere image database 2013 (TID2013) [2], categorical subjective image quality (CSIQ) database [3], laboratory for image and video engineering release 2 (LIVE2) database [4], Colourlab image database: image quality (CID:IQ) [5] and image and video communication (IVC) database [6], for various prior art image fidelity measures: peak signal to noise ratio (PSNR), structural similarity (SSIM), multiscale SSIM (MS-SSIM), information content weighted PSNR (IW-PSNR), PSNR human visual system (PSNR-HVS), PSNR-HVS with contrast masking (PSNR-HVS-M) and visual information fidelity (VIF), and image fidelity measures according to embodiments: visually important image quality assessment (VIIQA), VIIQA JPEG (VIIQA-JPG) and VIIQA JPEG 2000 (VIIQA-J2K) using a local pixel neighborhood defined by the parameter N
  • the present invention generally relates to determination of image fidelity measures, and in particular to determining such image fidelity measures suitable for image quality analysis and evaluation.
  • the image fidelity measure of the embodiments is determined in a full-reference method having access to both original or reference images and distorted images.
  • Fig. 1 schematically illustrates the general concept of processing images 20, also referred to as pictures or frames herein, resulting in distorted or degraded images 10.
  • the original images 20 input to such an image processing could be still images, such as images captured by a camera or computer-generated images, or images of a video sequence or stream 1.
  • the image processing as shown in Fig. 1 could be any processing that is applied to images 20 and that may result in a distortion or degradation in the quality of the images due to changes in pixel values of the pixels 24 in the images 20.
  • Non-limiting examples of such image processing include image or video coding, also referred to as image or video compression, and various pre-processing stages prior to such image or video coding.
  • For instance, the image fidelity measure of the embodiments could be used in connection with in-loop rate-distortion optimization (RDO), encoder profile creation and tuning, etc., i.e., generally controlling or optimizing the image or video coding.
  • the image fidelity measure may also, or alternatively, be used to validate and/or compare the results of different image or video encoders or encoder profiles, e.g., in encoder competitions or comparisons.
  • the image fidelity measure of the embodiments may, for instance, be used to control or optimize, and/or validate or compare, noise removal processing, color space conversions, de-interlacing and other such image or video pre-processing stages.
  • the images 20 input to the image processing are referred to as reference images 20 or original images herein, whereas the images 10 output of the image processing are referred to as simply images 10 or distorted images herein.
  • "Original" and "reference" as used herein for original images 20 or reference images 20 indicate that the image 20 is to be input into a distortion-causing image processing.
  • "Original" should, however, not be interpreted as limited to only referring to images directly output from a camera or a computer-based image generating source.
  • Likewise, "reference" should not be interpreted as referring to using the reference image 20 as a coding reference for a current image 10.
  • the original or reference images 20 may have been subject to upstream image processing operations, including such image processing operations that may cause a distortion in pixel values and thereby a degradation in quality.
  • "Original" or "reference" should thereby be interpreted with regard to a current image processing operation, regardless of any previous or upstream such image processing operation.
  • "Image fidelity" and "image quality" are sometimes used interchangeably in the art of image quality assessment.
  • Image fidelity, however, relates to the ability to discriminate between two images, e.g., a reference image 20 and a distorted image 10.
  • Image quality is, on the other hand, more related to the preference for one image over another.
  • Accordingly, image fidelity and image fidelity measures relate to full-reference methods, in which both reference images 20 and distorted images 10 are available for assessment, whereas image quality and image quality measures are more relevant for non-reference methods, in which only the distorted images 10 are available in the assessment.
  • Pixels 14, 24 have pixel values expressed in a color space or format, such as red (R), green (G), blue (B) color, i.e., RGB color; luma (Y’) and chroma (Cb, Cr) color, i.e., Y’CbCr color; luminance (Y) and chrominance (X, Z) color, i.e., XYZ color; or luma or intensity (I) and chroma (Ct, Cp) color, i.e., ICtCp color.
  • a pixel value as used herein could be any color component value, such as R, G, B, Y’, Cb, Cr, X, Y, Z, I, Ct or Cp value.
  • a pixel value is a luma value (Y’) or a chroma value (Cb or Cr).
  • FIG. 2 is a flow chart illustrating a method of determining an image fidelity measure for an image 10, see also Fig. 1.
  • the method comprises determining, in step S1, a first map representing, for each pixel 14 in at least a portion 12 of the image 10, a distortion in pixel values between the pixel 14 and a corresponding pixel 24 in a reference image 20.
  • the method also comprises determining, in step S2, a second map as an aggregation of a third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variability in pixel values and a fourth map representing, for each corresponding pixel 24 in the reference image 20, a local variability in pixel values.
  • the method further comprises determining the image fidelity measure based on the first map and the second map in step S3.
  • Steps S1 and S2 of Fig. 2 can be performed serially in any order, i.e., step S1 prior to step S2 or step S2 prior to step S1, or at least partly in parallel.
  • Corresponding pixel 24 as used herein indicates a pixel 24 in the reference image 20 having the same coordinate or position in the reference image 20 as a pixel 14 has in the image 10. For instance, a pixel 14 having the coordinate (i,j) in the image 10 has a corresponding pixel 24 with the coordinate (i,j) in the reference image 20.
  • the image fidelity measure of the embodiments is based on two maps determined for at least a portion 12 of the image 10. "Map" as used herein represents a data set having the same resolution as the at least a portion 12 of the image 10 in terms of number of data entries and pixels.
  • the at least a portion 12 of the image 10 could, as an illustrative but non-limiting example, be defined as comprising m × n pixels 14 having a respective pixel value, for some integer values m, n.
  • the map then has the same resolution, i.e., m × n data entries, one such data entry for each pixel 14 in the at least a portion 12 of the image 10.
  • the map could thereby be regarded as an array, such as a two-dimensional (2D) array, or a matrix with m × n data entries, where the array or matrix has the same resolution as the pixel resolution of the at least a portion 12 of the image 10.
  • the first map determined in step S1 represents distortion in pixel values, i.e., degradation or differences in pixel values between pixels 14 in the at least a portion 12 of the image 10 and corresponding pixels 24 in a corresponding portion 22 of the reference image 20.
  • This first map thereby reflects differences in pixel values between the reference image 20 and the image 10 and where such differences are due to the image processing applied to the reference image 20 to form the image 10 as a distorted or degraded version of the reference image 20.
  • the second map as determined in step S2 is included in the determination of the image fidelity measure since errors across the image 10, i.e., distortion in pixel values, do not have the same visual impact for the human visual system (HVS).
  • the second map correlates with visual attention or saliency, i.e., where humans tend to fixate while looking at the images 10, and thereby indicates image regions that are important for the HVS.
  • the second map can thereby be used to weight the distortions in the first map more heavily in important image regions as compared to in other regions of the image 10 that are of less importance for the HVS. As a result, an image fidelity measure that is adapted to the HVS is obtained.
  • the second map is determined in step S2 as an aggregation of the third map and the fourth map.
  • the third and fourth maps represent respective local variability in pixel values in the image 10 and the reference image 20.
  • This local variability in pixel values with regard to a pixel 14 in the image 10 or a corresponding pixel 24 in the reference image 20 corresponds to a variability in pixel values in a neighborhood in the image 10 or the reference image 20 relative to the pixel 14 or the corresponding pixel 24.
  • local variability in pixel values with regard to a pixel 14 in the image 10 reflects how and/or how much pixel values of neighboring or adjacent pixels 14 in the image 10 vary.
  • the first map determined in step S1 is a distortion map representing, for each pixel 14 in the at least a portion 12 of the image 10, a distortion in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20.
  • the second map determined in step S2 is a visual importance map determined as an aggregation of a first variability map representing, for each pixel 14 in the at least a portion 12 of the image 10, the local variability in pixel values and a second variability map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values.
  • the image fidelity measure is determined based on the distortion map and the visual importance map.
  • the maps determined in Fig. 2 could be determined for at least a portion 12 of the image 10 as mentioned above.
  • This portion 12 could then constitute a part of, but not the whole, image 10.
  • the portion 12 could for instance correspond to a macroblock of pixels 14, a block of pixels 14, a coding block, a coding unit, a slice of a frame or image 10 or some other partition of an image 10 into a group or set of pixels 14.
  • the maps are determined for the whole image 10 and thereby the whole reference image 20.
  • step S1 comprises determining the first map representing, for each pixel 14 in the image 10, a distortion in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20.
  • Step S2 comprises, in this embodiment, determining the second map as an aggregation of the third map representing, for each pixel 14 in the image 10, the local variability in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values.
  • the first map determined in step S1 represents a distortion in pixel values. Such a distortion thereby reflects a difference or degradation in pixel values caused by the image processing applied to the reference image 20 as an original image to obtain the image 10 as a distorted or degraded version of the original image.
  • the first map could thereby be defined as f(I1(i,j), I2(i,j)), such as f(I1(i,j) - I2(i,j)), for some function f(x, y), wherein I1(i,j) denotes a pixel value of a pixel 14 at coordinate or position (i,j) in the image 10 and I2(i,j) denotes a pixel value of a corresponding pixel 24 at coordinate or position (i,j) in the reference image 20.
  • the first map is a function of pixel-wise differences in pixel values between pixels 14 in the at least a portion 12 of the image and corresponding pixels 24 in the reference image 20.
  • Various functions f(x, y) could be used in step S1 to determine the first map.
  • step S1 could comprise determining the first map representing, for each pixel 14 in the at least a portion 12 of the image 10, an absolute difference in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20, e.g., f(|I1(i,j) - I2(i,j)|).
  • In a particular embodiment, step S1 comprises determining the first map DM(i,j) based on, such as equal to, DM(i,j) = |I1(i,j) - I2(i,j)|^p, wherein p is a positive power parameter.
  • the power parameter p is a positive number larger than zero. This power parameter can be used to enhance small differences in pixel values and/or saturate large differences in pixel values. Generally, a large value of the power parameter enhances large differences in pixel values, while suppressing small differences in pixel values, whereas a small value of the power parameter instead enhances small differences in pixel values while saturating larger such differences in pixel values between pixels 14 in the image 10 and corresponding pixels 24 in the reference image 20. In an embodiment, the power parameter p is within a range of from 0.001 up to 0.50, preferably within a range of from 0.01 up to 0.30, and more preferably within a range of from 0.02 up to 0.20.
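To make the distortion map of step S1 concrete, the following is a minimal numpy sketch assuming the power form reconstructed above; the function name and defaults are illustrative, not part of the patent.

```python
import numpy as np

def distortion_map(img, ref, p=0.1):
    """First map of step S1: DM(i,j) = |I1(i,j) - I2(i,j)|**p.

    img and ref are same-shape arrays holding one color channel of the
    distorted image 10 and the reference image 20; p = 0.1 lies inside
    the preferred range of from 0.02 up to 0.20.
    """
    return np.abs(img.astype(np.float64) - ref.astype(np.float64)) ** p
```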
  • In other embodiments, step S1 determines the first map using another distance measure, such as a Minkowski distance.
  • In an embodiment, the second map is determined as a pixel-wise maximum of the third and fourth maps, i.e., VIM(i,j) = max(VM1(i,j), VM2(i,j)). The second map VIM(i,j) thereby reflects, in each pixel or coordinate (i,j), the maximum local variability in pixel values for the given pixel 14 or coordinate (i,j) in the image 10 and in the reference image 20.
  • the embodiments are, however, not limited to pixel-wise maxima as an example of aggregation of the third and fourth maps.
  • step S2 therefore comprises determining the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, the local variability in pixel values in a pixel neighborhood of the pixel 14 in the image 10 and the fourth map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values in a pixel neighborhood of the corresponding pixel 24 in the reference image 20.
  • step S2 comprises determining the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variance in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, a local variance in pixel values.
  • step S2 comprises determining the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a non-linearly mapped and normalized local variance in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, a non-linearly mapped and normalized local variance in pixel values.
  • Fig. 3 is a flow chart illustrating additional steps of the method in Fig. 2 according to a particular embodiment using non-linearly mapped and normalized local variances.
  • the method continues from step S1 in Fig. 2.
  • a next step S10 comprises determining a first variance map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variance in pixel values.
  • Step S11 comprises determining a first variability map as a non-linearly mapped and normalized version of the first variance map.
  • Step S12 comprises determining a second variance map representing, for each corresponding pixel 24 in the reference image 20, a local variance in pixel values and step S13 comprises determining a second variability map as a non-linearly mapped and normalized version of the second variance map.
  • Steps S10+S11 and steps S12+S13 can be performed serially in any order or at least partly in parallel.
  • the method then continues to step S2 in Fig. 2, which comprises, in this embodiment, determining the second map as the aggregation of the first variability map and the second variability map.
  • In a particular embodiment, step S11 of Fig. 3 comprises determining the first variability map VM1(i,j) based on, such as equal to, VM1(i,j) = var1(i,j)^q / Σ_(i,j) var1(i,j)^q, wherein var1(i,j) represents the first variance map and q is a positive power parameter.
  • Step S13 correspondingly comprises, in this particular embodiment, determining the second variability map VM2(i,j) based on, such as equal to, VM2(i,j) = var2(i,j)^q / Σ_(i,j) var2(i,j)^q, wherein var2(i,j) represents the second variance map.
  • the parameter q defines the non-linearity of the variance maps.
  • q is within a range of from 0.05 up to 1.2, preferably within a range of from 0.10 up to 0.95, and more preferably within a range of from 0.20 up to 0.80, such as selected among the group consisting of 0.20, 0.50 and 0.79.
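As a companion sketch, the non-linear mapping and normalization of a variance map, and the pixel-wise maximum aggregation into the second map, can be written as follows; the epsilon guard against all-flat images is an implementation assumption, not from the patent.

```python
import numpy as np

def variability_map(var, q=0.5):
    """VM(i,j) = var(i,j)**q / sum over (i,j) of var(i,j)**q, i.e., a
    non-linearly mapped variance map normalized to sum to one."""
    v = var ** q
    return v / (v.sum() + 1e-12)  # epsilon avoids division by zero

def visual_importance_map(var1, var2, q=0.5):
    """Second map VIM(i,j): pixel-wise maximum of the variability maps
    of the distorted image and of the reference image."""
    return np.maximum(variability_map(var1, q), variability_map(var2, q))
```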
  • In an embodiment, step S10 of Fig. 3 comprises determining the first variance map var1(i,j) based on, such as equal to, var1(i,j) = (1/N²) Σ_(k=-M..M) Σ_(l=-M..M) (I1(i+k, j+l) - μ1(i,j))², wherein I1(i+k, j+l) represents the pixel value of a pixel 14 at coordinate (i+k, j+l) in the image 10, μ1(i,j) is the mean pixel value over the N×N pixel neighborhood and M = (N - 1)/2.
  • Step S12 correspondingly comprises determining the second variance map var2(i,j) in the same way from the pixel values I2(i+k, j+l) of the reference image 20.
  • the positive odd parameter N defines the size of the pixel neighborhood, within which the local variance is determined.
  • the positive odd parameter N is preferably larger than one and is preferably selected among the group consisting of 3, 5, 7, 9, 11 and 13, more preferably selected among the group consisting of 3, 5, 7 and 9, such as selected among the group consisting of 3, 5 and 7, and more preferably being 3 or 5.
  • the local variance is determined using a box filter, i.e., uniform filter, having a filter size as defined by the positive odd parameter N.
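A sketch of the box-filter local variance over an N×N neighborhood, using the identity var = E[X²] - (E[X])²; scipy's uniform_filter plays the role of the box (uniform) filter, and the defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(channel, N=5):
    """Local variance var(i,j) over an N x N pixel neighborhood (N a
    positive odd integer), computed with a box (uniform) filter."""
    x = channel.astype(np.float64)
    mean = uniform_filter(x, size=N)
    mean_of_sq = uniform_filter(x * x, size=N)
    return np.maximum(mean_of_sq - mean * mean, 0.0)  # clamp rounding noise
```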
  • The second map is also referred to as the visual importance map, the third map as the first variability map, and the fourth map as the second variability map.
  • the first and second variability maps are determined as non-linearly mapped and normalized versions of the first and second variance maps.
  • the variability maps may represent other types of local variability in pixel values, such as contrast f2 as defined under Textural Features 2) Contrast on page 619 in [7], correlation f3 as defined under Textural Features 3) Correlation on page 619 in [7] or information measure of correlation f12 as defined under Textural Features 12) Information Measures of Correlation on page 619 in [7], the teaching of which with regard to calculating contrast f2, correlation f3 and information measure of correlation f12 is hereby incorporated by reference.
  • step S3 in Fig. 2 is performed as illustrated in Fig. 4.
  • a next step S20 comprises determining a fifth map, also referred to as visual distortion map, by weighting pixel-wise the first map by the second map.
  • the image fidelity measure is then determined in step S22 based on the fifth map.
  • "Pixel-wise weighting" the first map by the second map implies that a value at coordinate or pixel (i,j) in the first map is weighted by a weight corresponding to the value at coordinate or pixel (i,j) in the second map.
  • the pixel-wise weighting is a coordinate-wise or position-wise weighting.
  • the visual distortion map is preferably constructed by weighting pixel-wise the distortion map by the visual importance map. This means that distortions in pixel regions that are of high HVS importance can be weighted more heavily than distortions in other, less important pixel regions in the image 10.
  • the method comprises an additional step S21 as indicated in Fig. 4.
  • This step S21 comprises determining a mean error ME(I1, I2) based on a sum of the fifth map, i.e., ME(I1, I2) = Σ_(i,j) VIM(i,j) · DM(i,j), wherein I1, I2 represent pixel values and (i,j) represents a coordinate of a pixel 14 in the image 10 and of a corresponding pixel 24 in the reference image 20.
  • step S22 comprises, in this embodiment, determining the image fidelity measure based on the mean error. In the case of grayscale images 10, or when there is a desire to reduce the computations, only the mean error for one type of pixel value is calculated and is used as image fidelity measure.
  • This type of pixel value is then preferably pixel values for the intensity channel, such as being a luma value (Y’), a luminance value (Y) or an intensity value (I).
  • the above described process of calculating mean errors is preferably performed on all three color channels, such as intensity channel (Y’, Y or I) and chromatic channels, i.e., chroma values (Cb, Cr) or chromaticity values (X, Z).
  • step S21 preferably comprises determining a mean error ME(Y1, Y2) for an intensity channel and mean errors ME(U1, U2), ME(V1, V2) for chromatic channels.
  • Y1/Y2 represent luma (Y’), luminance (Y) or intensity (I) channels for the image 10/reference image 20, and U1/U2, V1/V2 represent chroma (Cb, Cr) or chromaticity (X, Z) channels for the image 10/reference image 20.
  • step S22 comprises determining the image fidelity measure based on a normalized linear combination of the mean error for the intensity channel and the mean errors for the chromatic channels. In a particular embodiment, step S22 comprises determining the image fidelity measure based on, such as equal to, c · ME(Y1, Y2)/n_Y + (1 - c) · (ME(U1, U2) + ME(V1, V2))/(2 · n_C),
  • wherein c is a positive number larger than 0 but smaller than 1 and n_Y, n_C are normalization coefficients defined based on the bit depth of the image 10 and reference image 20.
  • the linearization parameter c, also referred to as convex mixing parameter, is preferably within a range of from 0.10 up to 0.95, preferably within a range of from 0.30 up to 0.90, and more preferably within a range of from 0.50 up to 0.90, such as selected among the group consisting of 0.58, 0.65 and 0.69.
  • the normalization coefficients n_Y, n_C are defined based on the bit depth of the image 10 and reference image 20 and, optionally, also based on the encoding scheme used for encoding the reference images 20.
  • In an embodiment, the normalization coefficients n_Y, n_C are defined based on the bit depth, such as n_Y = n_C = (2^BD - 1)^p, wherein BD is the bit depth of the color channels and p is the previously described positive power parameter.
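Putting the weighting and channel mixing together, here is a sketch under the assumptions stated above (equal normalization n_Y = n_C = (2^BD - 1)^p and an averaged chroma term, both reconstructions rather than verbatim patent text); names and defaults are illustrative.

```python
def mean_error(dm, vim):
    """ME(I1, I2): sum of the fifth map VDM = VIM * DM. Because VIM is
    normalized to sum to one, this is a weighted mean of the distortion."""
    return float((vim * dm).sum())

def picture_quality_rating(me_y, me_u, me_v, c=0.65, bit_depth=8, p=0.1):
    """Convex mix of the intensity mean error and the averaged chromatic
    mean errors, normalized by n = (2**BD - 1)**p (assumed form)."""
    n = (2 ** bit_depth - 1) ** p
    return c * me_y / n + (1.0 - c) * (me_u + me_v) / (2.0 * n)
```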
  • the processing can be simplified by, for instance, avoiding calculating mean errors for the chromatic channels and thereby calculating the mean error only for the intensity channel. This corresponds to setting the parameter c to one.
  • the mean opinion score (MOS) range and the differential MOS (DMOS) range are traditionally used in order to compare various image fidelity and quality metrics or measures.
  • the method comprises an additional step S4 as shown in Fig. 5. The method then continues from step S3 in Fig. 2 (or step S22 in Fig. 4).
  • Step S4 comprises converting the image fidelity measure to a DMOS range or a MOS range.
  • the image quality measure of the embodiments, denoted VIIQA for visually important image quality assessment here below, can be mapped to the DMOS range or scale using a two-parameter mapping with parameters α1, α2.
  • the alpha parameters α1, α2 are the solution of a non-linear least squares (NLS) fitting of scores produced by the VIIQA algorithm to human subjective quality judgements contained in the training image databases.
  • the alpha parameter α1 is selected within a range of from 7 up to 9 and the alpha parameter α2 is selected within a range of from 0.7 up to 1.2.
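The excerpt does not preserve the mapping formula itself, only the parameter ranges; purely for illustration, a two-parameter power-law form is assumed in the sketch below.

```python
def to_dmos(viiqa_score, a1=8.0, a2=1.0):
    """Hypothetical DMOS mapping: the power-law form is an assumption;
    only the ranges a1 in [7, 9] and a2 in [0.7, 1.2] are given in the
    text, so this is not the patented mapping."""
    return a1 * viiqa_score ** a2
```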
  • the proposed VIIQA algorithm used to calculate the image fidelity measure accepts two images 10, 20 as its input.
  • the two input images 10, 20, i.e., a distorted image 10 and its reference or original image 20, preferably have the same dimensions, i.e., height and width in terms of number of pixels; preferably have the same number of color channels, e.g., grayscale (one or three color channels) vs. color (three color channels), and preferably have the same color gamut.
  • the distorted image 10 and the reference image 20, represented as integer RGB data, could be converted to an integer BT.709 4:4:4 Y’CbCr representation with the same bit depth as the reference image 20.
  • the converted color channels of the reference image 20 and the distorted image 10 are then fed to the VIIQA algorithm for the calculation of the image fidelity measure.
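For the color conversion step, one possible sketch using the Rec. BT.709 luma coefficients; the full-range integer mapping is an assumption, as the text only asks for an integer BT.709 4:4:4 Y'CbCr representation.

```python
import numpy as np

def rgb_to_bt709_ycbcr(rgb, bit_depth=8):
    """Convert an (H, W, 3) integer R'G'B' image to Y', Cb, Cr planes
    using BT.709 weights (full-range variant, assumed here)."""
    r, g, b = (rgb[..., k].astype(np.float64) for k in range(3))
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    offset = 2.0 ** (bit_depth - 1)
    cb = (b - y) / 1.8556 + offset
    cr = (r - y) / 1.5748 + offset
    return y, cb, cr
```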
  • mean errors between reference and distorted color channels are calculated separately for the intensity channel and the two chromatic channels, and then aggregated to form the image fidelity measure as a picture quality rating (PQR).
  • the image quality measure may then optionally be converted to a standard MOS/DMOS interval scale.
  • BT.709 is also referred to as ITU-R Recommendation BT.709 or Rec. 709.
  • the distortion map is determined as a representation of pixel value distortions or differences between the reference color channel and the distorted color channel.
  • Respective variability maps are determined for the reference and distorted color channels and aggregated into a visual importance map.
  • the visual importance map is combined with the distortion map into the visual distortion map, which is summed to get the mean error for the particular color channel.
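The per-channel flow just described, expressed with the illustrative helpers from the earlier sketches (which are assumed to be in scope):

```python
def channel_mean_error(dist, ref, p=0.1, q=0.5, N=5):
    """Distortion map -> variance maps -> visual importance map ->
    weighted sum, for one color channel of a distorted/reference pair."""
    dm = distortion_map(dist, ref, p)
    vim = visual_importance_map(local_variance(dist, N),
                                local_variance(ref, N), q)
    return mean_error(dm, vim)
```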
  • the image fidelity measure of the present embodiments can advantageously be used in connection with image or video coding instead of prior art evaluation metrics and measures, such as sum of absolute differences (SAD) and sum of squared errors (SSE).
  • the image fidelity measure can then be used inside the encoding process and encoder in order to select coding modes and/or coding parameters.
  • Fig. 6 is a flow chart illustrating a method of encoding an original image 20.
  • the method comprises encoding, in step S30, at least a portion 22 of the original image 20 according to multiple coding modes to obtain multiple encoded candidate image portions.
  • the multiple encoded candidate image portions are decoded in step S31 to obtain multiple decoded candidate image portions 12.
  • the following step S32 comprises determining, for each of the multiple decoded candidate image portions 12, a respective image fidelity measure according to any of the embodiments using the original image 20 as reference image.
  • a next step S33 comprises selecting, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion 22 of the original image 20 at least partly based on the respective image fidelity measures.
  • At least a portion 22 of the original image 20 is encoded according to multiple coding modes.
  • This at least a portion 22 of the original image 20 could be in the form of a macroblock of pixels 24, a coding block or coding unit of pixels 24 or samples, generally referred to as a block of pixels 24 herein.
  • the various coding modes include, for instance, different intra coding modes, such as planar mode, DC mode and various angular modes; inter coding modes, such as uni-directional (P) or bi-directional (B) inter coding; different intra or inter partitions, such as 32x32 pixels, 16x16 pixels, 8x8 pixels or 4x4 pixels. Each such coding mode results in a respective encoded candidate image portion.
  • the encoded candidate image portion is then decoded to obtain a decoded candidate image portion 12 that corresponds to the at least a portion 12 of the (distorted) image 10 in Fig. 1, whereas the at least a portion 22 of the original image 20 is the corresponding portion 22 with corresponding pixels 24 in the reference image 20.
  • Image coding, such as of a video sequence 1, typically includes transformation into the frequency domain, quantization and then entropy coding.
  • image decoding includes the inverse of these operations, i.e., decoding, inverse quantization and inverse transformation.
  • Correspondingly, encoding as defined herein may include the sub-steps of transformation, quantization and (entropy) encoding, whereas decoding may include the sub-steps of (entropy) decoding, inverse quantization and inverse transformation.
  • the image fidelity measure of the embodiments is calculated for each of the multiple decoded candidate image portions 12 in step S32 and is then used in step S33 to select which of the multiple encoded candidate image portions to use as the encoded representation of the at least a portion 22 of the original image 20.
  • the image fidelity measure is thereby employed to identify the coding mode and the resulting encoded candidate image portion that in some sense is best or optimal in a way defined at least partly based on the image fidelity measure.
  • the image fidelity measure determined in step S32 is used to determine a rate-distortion measure, which in turn is used in step S33 to select encoded candidate image portion. This particular embodiment is illustrated in Fig. 7. The method continues from step S32 in Fig. 6.
  • a next step S40 comprises determining, for each of the multiple decoded candidate image portions, a respective rate-distortion measure based on the respective image fidelity measure and a rate representing a bit cost of representing the at least a portion 22 of the original image 20 with the encoded candidate image portion.
  • the method then continues to step S33, which in this embodiment comprises selecting, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion 22 of the original image 20 based on the respective rate-distortion measures.
  • step S33 comprises selecting the encoded candidate image portion that minimizes the rate-distortion measure.
  • a coding mode and thereby an encoded candidate image portion is selected in a so-called rate-distortion optimization (RDO) using a rate-distortion measure determined based on the image fidelity measure of the embodiments.
  • the target of RDO is to minimize the distortion D for a given rate Rc by appropriate selections of coding modes and parameters, i.e., min D subject to R ≤ Rc, which is typically solved by minimizing a Lagrangian cost function J = D + λ · R, wherein λ is the Lagrange multiplier.
  • the image fidelity measure of the present embodiments is advantageously used as distortion parameter D in the Lagrangian cost function.
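A sketch of RDO mode selection with the fidelity measure as the distortion term D in J = D + λR; the candidate triples and the channel_mean_error helper from the earlier sketches are illustrative assumptions.

```python
def select_coding_mode(original_block, candidates, lam):
    """candidates: iterable of (mode, decoded_block, rate) triples, one
    per coding mode; returns the mode minimizing J = D + lam * R, with
    the image fidelity measure serving as the distortion D."""
    def rd_cost(candidate):
        _, decoded_block, rate = candidate
        return channel_mean_error(decoded_block, original_block) + lam * rate
    return min(candidates, key=rd_cost)[0]
```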
  • the image fidelity measure of the present embodiments can also find other uses in connection with image or video coding in addition to selection of coding mode and encoded candidate image portion.
  • the image fidelity measure can also, or alternatively, be used to select encoder profile of an encoder.
  • An encoder profile is a combination of high-level parameters, such as size of the motion estimation search, number of considered coding unit or block splitting schemes, entropy encoder choice, depth of coding tree unit, etc.
  • Fig. 8 is a flow chart illustrating a method of selecting an encoder profile for an encoder.
  • the method comprises encoding at least one original image 20 of a test set 1 using multiple encoder profiles to obtain multiple encoded images in step S50.
  • a next step S51 comprises decoding the multiple encoded images to obtain multiple decoded images 10.
  • a respective image fidelity measure according to the embodiments is then determined in step S52 for each of the multiple decoded images 10 using the at least one original image 20 as reference image.
  • the following step S53 comprises selecting, among the multiple encoder profiles, an encoder profile for the encoder based at least partly on the respective image fidelity measures.
  • step S53 comprises selecting the encoder profile resulting in the best image fidelity or quality as defined based on the respective image fidelity measures.
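The profile-selection loop of Fig. 8 in the same illustrative style; encode and decode are stand-ins for the codec under test, and a lower mean error is taken to mean better fidelity.

```python
def select_encoder_profile(test_set, profiles, encode, decode):
    """Encode and decode every test image with each profile, score the
    reconstructions against the originals, and keep the profile with
    the lowest average error (i.e., the best fidelity)."""
    def average_error(profile):
        errors = [channel_mean_error(decode(encode(img, profile)), img)
                  for img in test_set]
        return sum(errors) / len(errors)
    return min(profiles, key=average_error)
```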
  • Various test sets 1 of original images 20 can be used in the method of Fig. 8 to select encoder profile.
  • the test set 1 could be a pre-defined test set 1 of various images 20, such as including both generally hard to encode content and easy to encode content, which can be used to test and evaluate various encoders and encoder profiles.
  • In an embodiment, multiple test sets 1 of original images 20 are available, and these could be adapted to different contents. For instance, a first test set 1 comprises sports content, a second test set 1 comprises movie content, a third test set 1 comprises news content, a fourth test set 1 comprises cartoon content, etc.
  • In such a case, an appropriate test set could be selected based on the content of the images or video to be encoded by the encoder, to thereby have the most suitable test set of images when selecting encoder profile for the encoder.
  • the test set 1 could constitute a portion of the images or video sequence to be encoded by an encoder, the encoder profile of which is to be selected. For instance, an initial portion of a video sequence could be used in the method shown in Fig. 8 to select an appropriate encoder profile based on the image fidelity measure of the embodiments.
  • the actual values of the various parameters mentioned herein can be determined in a training phase on various image quality assessment (IQA) databases.
  • the training phase may involve maximizing correlation, such as average Spearman’s rank correlation over the available IQA databases, with the associated subjective scores.
  • the present embodiments are thereby not limited to the actual parameter values presented herein and these values may be adjusted in additional training phases based on access to more IQA databases and/or in the case of using different image encoders or encoder profiles.
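A sketch of such a training phase: a grid search over (p, q, N) maximizing the average Spearman rank correlation with subjective scores across IQA databases. The grids, data layout and reuse of the channel_mean_error helper are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

def train_parameters(databases,
                     p_grid=(0.05, 0.1, 0.2),
                     q_grid=(0.2, 0.5, 0.79),
                     n_grid=(3, 5, 7)):
    """databases: list of IQA databases, each a list of
    (distorted, reference, subjective_score) triples. Returns the
    (p, q, N) combination with the highest average |SROCC|."""
    def average_srocc(p, q, N):
        rhos = []
        for db in databases:
            scores = [channel_mean_error(d, r, p, q, N) for d, r, _ in db]
            subjective = [s for _, _, s in db]
            rhos.append(abs(spearmanr(scores, subjective).correlation))
        return float(np.mean(rhos))
    return max(itertools.product(p_grid, q_grid, n_grid),
               key=lambda params: average_srocc(*params))
```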
  • Another aspect of the embodiments relates to a device for determining an image fidelity measure for an image 10.
  • the device is configured to determine a first map representing, for each pixel 14 in at least a portion 12 of the image 10, a distortion in pixel values between the pixel 14 and a corresponding pixel 24 in a reference image 20.
  • the device is also configured to determine a second map as an aggregation of a third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variability in pixel values and a fourth map representing, for each corresponding pixel 24 in the reference image 20, a local variability in pixel values.
  • the device is further configured to determine the image fidelity measure based on the first map and the second map.
  • the device is configured to determine a distortion map representing, for each pixel 14 in the at least a portion 12 of the image 10, a distortion in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20.
  • the device is also configured to determine a visual importance map as an aggregation of a first variability map representing, for each pixel 14 in the at least a portion 12 of the image 10, the local variability in pixel values and a second variability map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values.
  • the device is further configured to determine the image fidelity measure based on the distortion map and the visual importance map.
  • the device is configured to determine the first map representing, for each pixel 14 in the image 10, a distortion in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20.
  • the device is also configured to determine the second map as an aggregation of the third map representing, for each pixel 14 in the image 10, the local variability in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values.
  • the device is configured to determine the first map representing, for each pixel 14 in the at least a portion 12 of the image 10, an absolute difference in pixel values between the pixel 14 and the corresponding pixel 24 in the reference image 20.
  • In a particular embodiment, the device is configured to determine the first map DM(i,j) based on, such as equal to, DM(i,j) = |I1(i,j) - I2(i,j)|^p, wherein p is a positive power parameter.
  • the device is configured to determine the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, the local variability in pixel values in a pixel neighborhood of the pixel 14 in the image 10 and the fourth map representing, for each corresponding pixel 24 in the reference image 20, the local variability in pixel values in a pixel neighborhood of the corresponding pixel 24 in the reference image 20.
  • the device is configured to determine the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variance in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, a local variance in pixel values.
  • the device is configured to determine the second map as the aggregation of the third map representing, for each pixel 14 in the at least a portion 12 of the image 10, a non-linearly mapped and normalized local variance in pixel values and the fourth map representing, for each corresponding pixel 24 in the reference image 20, a non-linearly mapped and normalized local variance in pixel values.
  • the device is configured to determine a first variance map representing, for each pixel 14 in the at least a portion 12 of the image 10, a local variance in pixel values and determine a first variability map as a non-linearly mapped and normalized version of the first variance map.
  • the device is also configured, in this embodiment, to determine a second variance map representing, for each corresponding pixel 24 in the reference image 20, a local variance in pixel values and determine a second variability map as a non-linearly mapped and normalized version of the second variance map.
  • the device is further configured, in this embodiment, to determine the second map as the aggregation of the first variability map and the second variability map.
  • the device is configured to determine the first variability map VM1(i,j) based on var1(i,j)^q / Σi Σj var1(i,j)^q, wherein var1(i,j) represents the first variance map and q is a positive power.
  • the device is also configured, in this embodiment, to determine the second variability map VM2(i,j) based on var2(i,j)^q / Σi Σj var2(i,j)^q, wherein var2(i,j) represents the second variance map.
  • the device is configured to determine the first variance map var1(i,j) based on var1(i,j) = (1/N²) Σm Σn (I1(i+m, j+n) − μ1(i,j))², with the sums taken over m, n = −M … M, wherein I1(i,j) represents the pixel value of a pixel 14 at coordinate (i,j) in the image 10, μ1(i,j) denotes the average of the pixel values in the N×N pixel neighborhood of the pixel 14, M = (N−1)/2, and N is a positive odd integer.
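By way of illustration only, a minimal Python sketch of the local variance and variability maps follows; the box-filter implementation and the default values N = 3 and q = 0.5 are assumptions for the example, since the embodiments leave N and q as parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(channel: np.ndarray, N: int = 3) -> np.ndarray:
    """Local variance var(i, j) over an N x N pixel neighborhood (N odd),
    computed as E[x^2] - (E[x])^2 using a uniform (box) filter."""
    x = channel.astype(np.float64)
    mean = uniform_filter(x, size=N)
    mean_sq = uniform_filter(x * x, size=N)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp numerical negatives

def variability_map(var: np.ndarray, q: float = 0.5) -> np.ndarray:
    """Variability map VM(i, j) = var(i, j)^q / sum of var(i, j)^q, i.e., a
    non-linearly mapped (power q) and normalized local variance."""
    v = np.power(var, q)
    s = v.sum()
    return v / s if s > 0 else v
```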
  • the device is configured to determine a fifth map by weighting pixel-wise the first map by the second map and determine the image fidelity measure based on the fifth map.
  • the device is configured to determine a mean error ME(I1, I2) based on a sum of the fifth map, i.e., ME(I1, I2) = Σi Σj VM(i,j) · DM(i,j), wherein I1, I2 represent pixel values and (i,j) represents a coordinate of a pixel 14 in the image 10 and of a corresponding pixel 24 in the reference image 20.
  • the device is also configured, in this embodiment, to determine the image fidelity measure based on the mean error.
  • the device is configured to determine a mean error ME(Y1, Y2) for an intensity channel and mean errors ME(U1, U2), ME(V1, V2) for chromatic channels.
  • the device is also configured, in this embodiment, to determine the image fidelity measure based on a normalized linear combination of the mean error for the intensity channel and the mean errors for the chromatic channels.
  • the device is configured to determine the image fidelity measure based on a normalized linear combination of the per-channel mean errors, such as (wY · ME(Y1, Y2) + wU · ME(U1, U2) + wV · ME(V1, V2)) / (wY + wU + wV), wherein wY, wU, wV are non-negative weights.
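By way of illustration only, the sketch below ties the previous pieces together for one channel and for a YUV image, reusing the functions defined above; aggregating the two variability maps as an average and the weights (0.8, 0.1, 0.1) are assumptions for the example, since the embodiments leave the aggregation operator and the weights open:

```python
def mean_error(c1, c2, N: int = 3, q: float = 0.5) -> float:
    """ME(I1, I2): sum of the fifth map, i.e., the distortion map weighted
    pixel-wise by the aggregated visual importance map."""
    dm = distortion_map(c1, c2)
    vm1 = variability_map(local_variance(c1, N), q)
    vm2 = variability_map(local_variance(c2, N), q)
    vm = 0.5 * (vm1 + vm2)  # illustrative aggregation: average of the two maps
    return float((vm * dm).sum())

def image_fidelity_measure(yuv1, yuv2, w=(0.8, 0.1, 0.1)) -> float:
    """Normalized linear combination of the per-channel mean errors."""
    mes = [mean_error(c1, c2) for c1, c2 in zip(yuv1, yuv2)]
    return sum(wi * me for wi, me in zip(w, mes)) / sum(w)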
  • the device is configured to convert the image fidelity measure to a differential mean opinion score (DMOS) range or a mean opinion score (MOS) range.
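The embodiments do not prescribe a particular conversion; as one hypothetical possibility, an error-type measure could be rescaled linearly onto the MOS range [1, 5], with bounds chosen for the content at hand:

```python
def to_mos(measure: float, worst: float = 1.0, best: float = 0.0) -> float:
    """Hypothetical linear mapping of the fidelity measure onto MOS [1, 5];
    the bounds 'worst' and 'best' are assumed, not specified."""
    if worst == best:
        return 5.0
    t = min(max((measure - best) / (worst - best), 0.0), 1.0)
    return 5.0 - 4.0 * t  # lower error maps to a higher opinion score
```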
  • embodiments may be implemented in hardware, or in software for execution by suitable processing circuitry, or a combination thereof.
  • processing circuitry such as one or more processors or processing units.
  • processing circuitry includes, but is not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
  • Fig. 11 is a schematic block diagram illustrating an example of a device 100 for determining an image fidelity measure for an image according to an embodiment.
  • the device 100 comprises a processor 101, such as processing circuitry, and a memory 102.
  • the memory 102 comprises instructions executable by the processor 101.
  • the processor 101 is operative to determine the first map representing, for each pixel in the at least a portion of the image, the distortion in pixel values between the pixel and the corresponding pixel in the reference image.
  • the processor 101 is also operative to determine the second map as the aggregation of the third map representing, for each pixel in the at least a portion of the image, the local variability in pixel values and the fourth map representing, for each corresponding pixel in the reference image, the local variability in pixel values.
  • the processor 101 is further operative to determine the image fidelity measure based on the first map and the second map.
  • the device 100 may also include a communication circuit, represented by a respective input/output (I/O) unit 103 in Fig. 11.
  • the I/O unit 103 may include functions for wired and/or wireless communication with other devices, servers and/or network nodes in a wired or wireless communication network.
  • the I/O unit 103 may be based on radio circuitry for communication with one or more other nodes, including transmitting and/or receiving information.
  • the I/O unit 103 may be interconnected to the processor 101 and/or memory 102.
  • the I/O unit 103 may include any of the following: a receiver, a transmitter, a transceiver, I/O circuitry, input port(s) and/or output port(s).
  • Fig. 12 is a schematic block diagram illustrating a device 110 for determining an image fidelity measure for an image based on a hardware circuitry implementation according to an embodiment.
  • suitable hardware circuitry include one or more suitably configured or possibly reconfigurable electronic circuitry, e.g., Application Specific Integrated Circuits (ASICs), FPGAs, or any other hardware logic such as circuits based on discrete logic gates and/or flip-flops interconnected to perform specialized functions in connection with suitable registers (REG), and/or memory units (MEM).
  • Fig. 13 is a schematic block diagram illustrating yet another example of a device 120 for determining an image fidelity measure for an image based on a combination of both processor(s) 122, 123 and hardware circuitry 124, 125 in connection with suitable memory unit(s) 121.
  • the overall functionality is, thus, partitioned between programmed software for execution on one or more processors 122, 123 and one or more pre-configured or possibly reconfigurable hardware circuits 124, 125.
  • the actual hardware-software partitioning can be decided by a system designer based on a number of factors including processing speed, cost of implementation and other requirements.
  • Fig. 14 is a schematic block diagram illustrating a computer program based implementation of a device 200 for determining an image fidelity measure for an image according to an embodiment.
  • in this embodiment, a computer program 240 is loaded into the memory 220 for execution by processing circuitry including one or more processors 210.
  • the processor(s) 210 and memory 220 are interconnected to each other to enable normal software execution.
  • An optional I/O unit 230 may also be interconnected to the processor(s) 210 and/or the memory 220 to enable input and/or output of relevant data, such as images and image fidelity measures.
  • the term 'processor' should be interpreted in a general sense as any circuitry, system or device capable of executing program code or computer program instructions to perform a particular processing, determining or computing task.
  • the processing circuitry including one or more processors 210 is thus configured to perform, when executing the computer program 240, well-defined processing tasks such as those described herein.
  • the computer program 240 comprises instructions, which when executed by at least one processor 210, cause the at least one processor 210 to determine a first map representing, for each pixel in at least a portion of an image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image.
  • the at least one processor 210 is also caused to determine a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values.
  • the at least one processor 210 is further caused to determine an image fidelity measure for the image based on the first map and the second map.
  • the proposed technology also provides a carrier 250 comprising the computer program 240.
  • the carrier 250 is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
  • the software or computer program 240 may be stored on a computer-readable storage medium, such as the memory 220, in particular a non-volatile medium.
  • the computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device.
  • the computer program 240 may, thus, be loaded into the operating memory 220 for execution by the processing circuitry 210.
  • the flow diagram or diagrams presented herein may be regarded as a computer flow diagram or diagrams, when performed by one or more processors.
  • a corresponding device may be defined as a group of function modules, where each step performed by the processor corresponds to a function module.
  • the function modules are implemented as a computer program running on the processor.
  • Fig. 15 is a block diagram of a device 130 for determining an image fidelity measure for an image.
  • the device 130 comprises a first map determining module 131 for determining a first map representing, for each pixel in at least a portion of an image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image.
  • the device 130 also comprises a second map determining module 132 for determining a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values.
  • the device 130 further comprises a measure determining module 133 for determining an image fidelity measure for the image based on the first map and the second map.
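By way of illustration only, this module decomposition could be mirrored in code as follows, reusing the sketches above; the class and method names are illustrative:

```python
class ImageFidelityDevice:
    """One method per function module 131, 132, 133 (names illustrative)."""

    def determine_first_map(self, img, ref):
        return distortion_map(img, ref)

    def determine_second_map(self, img, ref, N: int = 3, q: float = 0.5):
        return 0.5 * (variability_map(local_variance(img, N), q)
                      + variability_map(local_variance(ref, N), q))

    def determine_measure(self, img, ref) -> float:
        fifth = self.determine_second_map(img, ref) * self.determine_first_map(img, ref)
        return float(fifth.sum())
```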
  • a further aspect of the embodiments relates to an encoder 140 as shown in Fig. 16.
  • the encoder 140 comprises a device 100, 110, 120, 130 for determining an image fidelity measure for an image according to any of the embodiments, such as illustrated in any of Figs. 11-13, 15.
  • the encoder 140 is configured to encode at least a portion of the original image according to multiple coding modes to obtain multiple encoded candidate image portions and decode the multiple encoded candidate image portions to obtain multiple decoded candidate image portions.
  • the encoder 140 is also configured to select, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on respective image fidelity measures determined by the device 100, 110, 120, 130 for each of the multiple decoded candidate image portions using the original image as reference image.
  • the encoder 140 is configured to determine, for each of the multiple decoded candidate image portions, a respective rate-distortion measure based on the respective image fidelity measure and a rate representing a bit cost of representing the at least a portion of the original image with the encoded candidate image portion.
  • the encoder 140 is also configured, in this embodiment, to select, among the multiple encoded candidate image portions, an encoded candidate image portion as encoded representation of the at least a portion of the original image based on the respective rate-distortion measure.
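By way of illustration only, a minimal sketch of this rate-distortion selection follows, assuming each candidate is a (bitstream, decoded portion, bit cost) triple and `fidelity` is the image fidelity measure described above; the Lagrange multiplier `lam` and the tuple layout are assumptions for the example:

```python
def select_candidate(original, candidates, lam, fidelity):
    """Pick the encoded candidate minimizing J = D + lam * R, where D is the
    fidelity measure of the decoded candidate against the original (used as
    reference image) and R is the bit cost of the encoded candidate."""
    best_enc, best_cost = None, float("inf")
    for encoded, decoded, rate in candidates:
        cost = fidelity(decoded, original) + lam * rate
        if cost < best_cost:
            best_enc, best_cost = encoded, cost
    return best_enc
```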
  • Yet another aspect of the embodiments relates to a device 150 for selecting an encoder profile for an encoder as shown in Fig. 17.
  • the device 150 comprises a device 100, 110, 120, 130 for determining an image fidelity measure for an image according to any of the embodiments, such as illustrated in any of Figs. 11-13, 15.
  • the device 150 is configured to encode at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images and decode the multiple encoded images to obtain multiple decoded images.
  • the device 150 is also configured to select, among the multiple encoder profiles, an encoder profile for the encoder based at least partly on respective image fidelity measures determined by the device 100, 110, 120, 130 for determining an image fidelity measure using the at least one original image as reference image.
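By way of illustration only, the profile selection could be sketched as below; the callables `encode`, `decode` and `fidelity` are assumed interfaces, and treating a lower measure as less distortion is an assumption consistent with the error-type measure above:

```python
def select_profile(test_set, profiles, encode, decode, fidelity):
    """Score each encoder profile by its total fidelity measure over the
    test set and return the lowest-scoring (least distorted) profile."""
    def total_error(profile):
        return sum(fidelity(decode(encode(img, profile)), img)
                   for img in test_set)
    return min(profiles, key=total_error)
```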
  • computing services may also be provided in network devices, such as network nodes and/or servers, where the resources are delivered as a service to remote locations over a network.
  • functionality can be distributed or re-located to one or more separate physical nodes or servers.
  • the functionality may be re-located or distributed to one or more jointly acting physical and/or virtual machines that can be positioned in separate physical node(s), i.e., in the so-called cloud.
  • cloud computing is a model for enabling ubiquitous on-demand network access to a pool of configurable computing resources, such as networks, servers, storage, applications and general or customized services.
  • a network device may generally be seen as an electronic device being communicatively connected to other electronic devices in the network.
  • the network device may be implemented in hardware, software or a combination thereof.
  • the network device may be a special-purpose network device or a general purpose network device, or a hybrid thereof.
  • a special-purpose network device may use custom processing circuits and a proprietary operating system (OS), for execution of software to provide one or more of the features or functions disclosed herein.
  • a general purpose network device may use common off-the-shelf (COTS) processors and a standard OS, for execution of software configured to provide one or more of the features or functions disclosed herein.
  • a special-purpose network device may include hardware comprising processing or computing resource(s), which typically include a set of one or more processors, and physical network interfaces (NIs), which sometimes are called physical ports, as well as non-transitory machine readable storage media having stored thereon software.
  • a physical NI may be seen as hardware in a network device through which a network connection is made, e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC).
  • the software may be executed by the hardware to instantiate a set of one or more software instance(s).
  • Each of the software instance(s), and that part of the hardware that executes that software instance may form a separate virtual network element.
  • a general purpose network device may, for example, include hardware comprising a set of one or more processor(s), often COTS processors, and NIC(s), as well as non-transitory machine readable storage media having stored thereon software.
  • the processor(s) executes the software to instantiate one or more sets of one or more applications. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization - for example represented by a virtualization layer and software containers.
  • one such alternative embodiment implements operating system-level virtualization, in which case the virtualization layer represents the kernel of an operating system, or a shim executing on a base operating system, that allows for the creation of multiple software containers that may each be used to execute one of a set of applications.
  • each of the software containers, also called virtualization engines, virtual private servers, or jails, is a user space instance, typically a virtual memory space. These user space instances may be separate from each other and separate from the kernel space in which the operating system is executed. The set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • in another such alternative embodiment, 1) the virtualization layer represents a hypervisor, sometimes referred to as a Virtual Machine Monitor (VMM), or the hypervisor is executed on top of a host operating system; and 2) the software containers each represent a tightly isolated form of software container called a virtual machine that is executed by the hypervisor and may include a guest operating system.
  • a hypervisor is the software/hardware that is responsible for creating and managing the various virtualized instances and in some cases the actual physical hardware.
  • the hypervisor manages the underlying resources and presents them as virtualized instances. What the hypervisor virtualizes to appear as a single processor may actually comprise multiple separate processors. From the perspective of the operating system, the virtualized instances appear to be actual hardware components.
  • a virtual machine is a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine; and applications generally do not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, though some systems provide para-virtualization which allows an operating system or application to be aware of the presence of virtualization for optimization purposes.
  • the instantiation of the one or more sets of one or more applications as well as the virtualization layer and software containers if implemented, are collectively referred to as software instance(s).
  • Each set of applications, corresponding software container if implemented, and that part of the hardware that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers), forms a separate virtual network element(s).
  • the virtual network element(s) may perform functionality similar to that of Virtual Network Elements (VNEs).
  • this virtualization of the hardware is sometimes referred to as Network Function Virtualization (NFV).
  • different embodiments may implement one or more of the software container(s) differently. For example, while embodiments are illustrated with each software container corresponding to a VNE, alternative embodiments may implement this correspondence or mapping between software container and VNE at a finer granularity level. It should be understood that the techniques described herein with reference to a correspondence of software containers to VNEs also apply to embodiments where such a finer level of granularity is used.
  • a hybrid network device which includes both custom processing circuitry/proprietary OS and COTS processors/standard OS in a network device, e.g. in a card or circuit board within a network device.
  • a platform Virtual Machine (VM), such as a VM that implements functionality of a special-purpose network device, could provide for para-virtualization to the hardware present in the hybrid network device.
  • Fig. 18 is a schematic diagram illustrating an example of how functionality can be distributed or partitioned between different network devices in a general case.
  • the network devices 300, 310, 320 may be part of the same wireless or wired communication system, or one or more of the network devices may be so-called cloud-based network devices located outside of the wireless or wired communication system.
  • network device may refer to any device located in connection with a communication network, including but not limited to devices in access networks, core networks and similar network structures.
  • the term network device may also encompass cloud-based network devices.
  • yet another aspect of the embodiments relates to a network device comprising a device for determining an image fidelity measure for an image according to the embodiments, such as illustrated in any of Figs. 11-13, 15; an encoder according to the embodiments, such as illustrated in Fig. 16; and/or a device for selecting an encoder profile for an encoder according to the embodiments, such as illustrated in Fig. 17.
  • Fig. 19 is a schematic diagram illustrating an example of a wireless communication system, including a radio access network (RAN) 31 and a core network 32 and optionally an operations and support system (OSS) 33 in cooperation with one or more cloud-based network devices 300.
  • the figure also illustrates a wireless device 35 connected to the RAN 31 and capable of conducting wireless communication with a RAN node 30, such as a network node, a base station, node B (NB), evolved node B (eNB), next generation node B (gNB), etc.
  • the network device 300 illustrated as a cloud-based network device 300 in Fig. 19 may alternatively be implemented in connection with, such as at, the RAN node 30.
  • the proposed technology may be applied to specific applications and communication scenarios including providing various services within wireless networks, including so-called Over-the-Top (OTT) services.
  • the proposed technology enables and/or includes transfer and/or transmission and/or reception of relevant user data and/or control data in wireless communications.
  • Fig. 20 is a schematic diagram illustrating an example of a wireless network in accordance with some embodiments.
  • a wireless network such as the example wireless network illustrated in Fig. 20.
  • the wireless network of Fig. 20 only depicts network QQ106, network nodes QQ160 and QQ160B, and wireless devices (WDs) QQ110, QQ110B, and QQ110C.
  • a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device.
  • the wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices’ access to and/or use of the services provided by, or via, the wireless network.
  • the wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system.
  • the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures.
  • particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
  • Network QQ106 may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
  • Network node QQ160 and WD QQ110 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network.
  • the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multi-standard radio (MSR) equipment, such as MSR BSs, network controllers, such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs.
  • network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.
  • network node QQ160 includes processing circuitry QQ170, device readable medium QQ180, interface QQ190, auxiliary equipment QQ184, power source QQ186, power circuitry QQ187, and antenna QQ162.
  • although network node QQ160 illustrated in the example wireless network of Fig. 20 may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein.
  • network node QQ160 may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium QQ180 may comprise multiple separate hard drives as well as multiple RAM modules).
  • network node QQ160 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • in certain scenarios in which network node QQ160 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • network node QQ160 may be configured to support multiple radio access technologies (RATs).
  • Network node QQ160 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node QQ160, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node QQ160.
  • Processing circuitry QQ170 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node.
  • these operations performed by processing circuitry QQ170 may include processing information obtained by processing circuitry QQ170 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • Processing circuitry QQ170 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node QQ160 components, such as device readable medium QQ180, network node QQ160 functionality.
  • processing circuitry QQ170 may execute instructions stored in device readable medium QQ180 or in memory within processing circuitry QQ170. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein.
  • processing circuitry QQ170 may include a system on a chip (SOC).
  • processing circuitry QQ170 may include one or more of radio frequency (RF) transceiver circuitry QQ172 and baseband processing circuitry QQ174.
  • radio frequency (RF) transceiver circuitry QQ172 and baseband processing circuitry QQ174 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units.
  • part or all of RF transceiver circuitry QQ172 and baseband processing circuitry QQ174 may be on the same chip or set of chips, boards, or units.
  • some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be performed by processing circuitry QQ170 executing instructions stored on device readable medium QQ180 or memory within processing circuitry QQ170.
  • some or all of the functionality may be provided by processing circuitry QQ170 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner.
  • processing circuitry QQ170 can be configured to perform the described functionality.
  • the benefits provided by such functionality are not limited to processing circuitry QQ170 alone or to other components of network node QQ160, but are enjoyed by network node QQ160 as a whole, and/or by end users and the wireless network generally.
  • Device readable medium QQ180 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry QQ170.
  • Device readable medium QQ180 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry QQ170 and, utilized by network node QQ160.
  • Device readable medium QQ180 may be used to store any calculations made by processing circuitry QQ170 and/or any data received via interface QQ190.
  • processing circuitry QQ170 and device readable medium QQ180 may be considered to be integrated.
  • Interface QQ190 is used in the wired or wireless communication of signalling and/or data between network node QQ160, network QQ106, and/or WDs QQ110. As illustrated, interface QQ190 comprises port(s)/terminal(s) QQ194 to send and receive data, for example to and from network QQ106 over a wired connection. Interface QQ190 also includes radio front end circuitry QQ192 that may be coupled to, or in certain embodiments a part of, antenna QQ162. Radio front end circuitry QQ192 comprises filters QQ198 and amplifiers QQ196. Radio front end circuitry QQ192 may be connected to antenna QQ162 and processing circuitry QQ170.
  • Radio front end circuitry may be configured to condition signals communicated between antenna QQ162 and processing circuitry QQ170.
  • Radio front end circuitry QQ192 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection.
  • Radio front end circuitry QQ192 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters QQ198 and/or amplifiers QQ196. The radio signal may then be transmitted via antenna QQ162.
  • antenna QQ162 may collect radio signals which are then converted into digital data by radio front end circuitry QQ192.
  • the digital data may be passed to processing circuitry QQ170.
  • the interface may comprise different components and/or different combinations of components.
  • network node QQ160 may not include separate radio front end circuitry QQ192; instead, processing circuitry QQ170 may comprise radio front end circuitry and may be connected to antenna QQ162 without separate radio front end circuitry QQ192.
  • all or some of RF transceiver circuitry QQ172 may be considered a part of interface QQ190.
  • interface QQ190 may include one or more ports or terminals QQ194, radio front end circuitry QQ192, and RF transceiver circuitry QQ172, as part of a radio unit (not shown), and interface QQ190 may communicate with baseband processing circuitry QQ174, which is part of a digital unit (not shown).
  • Antenna QQ162 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna QQ162 may be coupled to radio front end circuitry QQ192 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna QQ162 may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz.
  • An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line.
  • the use of more than one antenna may be referred to as multiple-input multiple-output (MIMO).
  • antenna QQ162 may be separate from network node QQ160 and may be connectable to network node QQ160 through an interface or port.
  • Antenna QQ162, interface QQ190, and/or processing circuitry QQ170 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna QQ162, interface QQ190, and/or processing circuitry QQ170 may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment.
  • Power circuitry QQ187 may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node QQ160 with power for performing the functionality described herein. Power circuitry QQ187 may receive power from power source QQ186. Power source QQ186 and/or power circuitry QQ187 may be configured to provide power to the various components of network node QQ160 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source QQ186 may either be included in, or external to, power circuitry QQ187 and/or network node QQ160.
  • network node QQ160 may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry QQ187.
  • power source QQ186 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry QQ187. The battery may provide backup power should the external power source fail.
  • Other types of power sources such as photovoltaic devices, may also be used.
  • Alternative embodiments of network node QQ160 may include additional components beyond those shown in Fig. 20.
  • network node QQ160 may include user interface equipment to allow input of information into network node QQ160 and to allow output of information from network node QQ160. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node QQ160.
  • WD refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices.
  • the term WD may be used interchangeably herein with user equipment (UE).
  • Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air.
  • a WD may be configured to transmit and/or receive information without direct human interaction.
  • a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network.
  • Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc.
  • a WD may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X) and may in this case be referred to as a D2D communication device.
  • a WD may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node.
  • the WD may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device.
  • the WD may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard.
  • examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), or personal wearables (e.g., watches, fitness trackers, etc.).
  • a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal.
  • wireless device QQ110 includes antenna QQ111, interface QQ114, processing circuitry QQ120, device readable medium QQ130, user interface equipment QQ132, auxiliary equipment QQ134, power source QQ136 and power circuitry QQ137.
  • WD QQ110 may include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD QQ110, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within WD QQ110.
  • Antenna QQ111 may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface QQ114.
  • antenna QQ111 may be separate from WD QQ110 and be connectable to WD QQ110 through an interface or port.
  • Antenna QQ111, interface QQ114, and/or processing circuitry QQ120 may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals may be received from a network node and/or another WD.
  • radio front end circuitry and/or antenna QQ111 may be considered an interface.
  • interface QQ114 comprises radio front end circuitry QQ112 and antenna QQ111.
  • Radio front end circuitry QQ112 comprises one or more filters QQ118 and amplifiers QQ116.
  • Radio front end circuitry QQ112 is connected to antenna QQ111 and processing circuitry QQ120, and is configured to condition signals communicated between antenna QQ111 and processing circuitry QQ120.
  • Radio front end circuitry QQ112 may be coupled to or a part of antenna QQ111.
  • WD QQ110 may not include separate radio front end circuitry QQ112; rather, processing circuitry QQ120 may comprise radio front end circuitry and may be connected to antenna QQ111.
  • Radio front end circuitry QQ112 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry QQ112 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters QQ118 and/or amplifiers QQ116. The radio signal may then be transmitted via antenna QQ111. Similarly, when receiving data, antenna QQ111 may collect radio signals which are then converted into digital data by radio front end circuitry QQ112. The digital data may be passed to processing circuitry QQ120.
  • the interface may comprise different components and/or different combinations of components.
  • Processing circuitry QQ120 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD QQ110 components, such as device readable medium QQ130, WD QQ110 functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein.
  • processing circuitry QQ120 may execute instructions stored in device readable medium QQ130 or in memory within processing circuitry QQ120 to provide the functionality disclosed herein.
  • processing circuitry QQ120 includes one or more of RF transceiver circuitry QQ122, baseband processing circuitry QQ124, and application processing circuitry QQ126.
  • the processing circuitry may comprise different components and/or different combinations of components.
  • processing circuitry QQ120 of WD QQ110 may comprise a SOC.
  • RF transceiver circuitry QQ122, baseband processing circuitry QQ124, and application processing circuitry QQ126 may be on separate chips or sets of chips.
  • part or all of baseband processing circuitry QQ124 and application processing circuitry QQ126 may be combined into one chip or set of chips, and RF transceiver circuitry QQ122 may be on a separate chip or set of chips.
  • part or all of RF transceiver circuitry QQ122 and baseband processing circuitry QQ124 may be on the same chip or set of chips, and application processing circuitry QQ126 may be on a separate chip or set of chips.
  • part or all of RF transceiver circuitry QQ122, baseband processing circuitry QQ124, and application processing circuitry QQ126 may be combined in the same chip or set of chips.
  • RF transceiver circuitry QQ122 may be a part of interface QQ114.
  • RF transceiver circuitry QQ122 may condition RF signals for processing circuitry QQ120.
  • some or all of the functionality described herein as being performed by a WD may be provided by processing circuitry QQ120 executing instructions stored on device readable medium QQ130, which in certain embodiments may be a computer-readable storage medium.
  • some or all of the functionality may be provided by processing circuitry QQ120 without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner.
  • processing circuitry QQ120 can be configured to perform the described functionality.
  • the benefits provided by such functionality are not limited to processing circuitry QQ120 alone or to other components of WD QQ110, but are enjoyed by WD QQ110 as a whole, and/or by end users and the wireless network generally.
  • Processing circuitry QQ120 may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry QQ120, may include processing information obtained by processing circuitry QQ120 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD QQ110, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • Device readable medium QQ130 may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry QQ120.
  • Device readable medium QQ130 may include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by processing circuitry QQ120.
  • processing circuitry QQ120 and device readable medium QQ130 may be considered to be integrated.
  • User interface equipment QQ132 may provide components that allow for a human user to interact with WD QQ110. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment QQ132 may be operable to produce output to the user and to allow the user to provide input to WD QQ110. The type of interaction may vary depending on the type of user interface equipment QQ132 installed in WD QQ110. For example, if WD QQ110 is a smart phone, the interaction may be via a touch screen; if WD QQ110 is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected).
  • User interface equipment QQ132 may include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment QQ132 is configured to allow input of information into WD QQ110, and is connected to processing circuitry QQ120 to allow processing circuitry QQ120 to process the input information. User interface equipment QQ132 may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment QQ132 is also configured to allow output of information from WD QQ110, and to allow processing circuitry QQ120 to output information from WD QQ110.
  • User interface equipment QQ132 may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment QQ132, WD QQ110 may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein.
  • Auxiliary equipment QQ134 is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment QQ134 may vary depending on the embodiment and/or scenario.
  • Power source QQ136 may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used.
  • WD QQ110 may further comprise power circuitry QQ137 for delivering power from power source QQ136 to the various parts of WD QQ110 which need power from power source QQ136 to carry out any functionality described or indicated herein.
  • Power circuitry QQ137 may in certain embodiments comprise power management circuitry.
  • Power circuitry QQ137 may additionally or alternatively be operable to receive power from an external power source; in which case WD QQ110 may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable.
  • Power circuitry QQ137 may also in certain embodiments be operable to deliver power from an external power source to power source QQ136. This may be, for example, for the charging of power source QQ136.
  • Power circuitry QQ137 may perform any formatting, converting, or other modification to the power from power source QQ136 to make the power suitable for the respective components of WD QQ110 to which power is supplied.
  • Fig. 21 is a schematic diagram illustrating an example of an embodiment of a UE in accordance with various aspects described herein.
  • a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
  • UE QQ200 may be any UE identified by the 3rd Generation Partnership Project (3GPP), including a NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • UE QQ200, as illustrated in Fig. 21, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rd Generation Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards.
  • UE QQ200 includes processing circuitry QQ201 that is operatively coupled to input/output interface QQ205, radio frequency (RF) interface QQ209, network connection interface QQ211, memory QQ215 including random access memory (RAM) QQ217, read-only memory (ROM) QQ219, and storage medium QQ221 or the like, communication subsystem QQ231, power source QQ213, and/or any other component, or any combination thereof.
  • Storage medium QQ221 includes operating system QQ223, application program QQ225, and data QQ227. In other embodiments, storage medium QQ221 may include other similar types of information. Certain UEs may utilize all of the components shown in Fig. 21, or only a subset of the components. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • processing circuitry QQ201 may be configured to process computer instructions and data.
  • Processing circuitry QQ201 may be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry QQ201 may include two central processing units (CPUs). Data may be information in a form suitable for use by a computer.
  • input/output interface QQ205 may be configured to provide a communication interface to an input device, output device, or input and output device.
  • UE QQ200 may be configured to use an output device via input/output interface QQ205.
  • An output device may use the same type of interface port as an input device.
  • a USB port may be used to provide input to and output from UE QQ200.
  • the output device may be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • UE QQ200 may be configured to use an input device via input/output interface QQ205 to allow a user to capture information into UE QQ200.
  • the input device may include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof.
  • the input device may be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor.
  • RF interface QQ209 may be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna.
  • Network connection interface QQ211 may be configured to provide a communication interface to network QQ243A.
  • Network QQ243A may encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
  • network QQ243A may comprise a Wi-Fi network.
  • Network connection interface QQ211 may be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like.
  • Network connection interface QQ211 may implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like).
  • the transmitter and receiver functions may share circuit components, software or firmware, or alternatively may be implemented separately.
  • RAM QQ217 may be configured to interface via bus QQ202 to processing circuitry QQ201 to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers.
  • ROM QQ219 may be configured to provide computer instructions or data to processing circuitry QQ201.
  • ROM QQ219 may be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory.
  • Storage medium QQ221 may be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives.
  • storage medium QQ221 may be configured to include operating system QQ223, application program QQ225 such as a web browser application, a widget or gadget engine or another application, and data file QQ227.
  • Storage medium QQ221 may store, for use by UE QQ200, any of a variety of various operating systems or combinations of operating systems.
  • Storage medium QQ221 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof.
  • RAID redundant array of independent disks
  • HD-DVD high-density digital versatile disc
  • HDDS holographic digital data storage
  • DIMM external mini-dual in-line memory module
  • SDRAM synchronous dynamic random access memory
  • smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module
  • Storage medium QQ221 may allow UE QQ200 to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system, may be tangibly embodied in storage medium QQ221, which may comprise a device readable medium.
  • processing circuitry QQ201 may be configured to communicate with network QQ243B using communication subsystem QQ231.
  • Network QQ243A and network QQ243B may be the same network or networks, or different networks.
  • Communication subsystem QQ231 may be configured to include one or more transceivers used to communicate with network QQ243B.
  • communication subsystem QQ231 may be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like.
  • RAN radio access network
  • Each transceiver may include transmitter QQ233 and/or receiver QQ235 to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter QQ233 and receiver QQ235 of each transceiver may share circuit components, software or firmware, or alternatively may be implemented separately.
  • the communication functions of communication subsystem QQ231 may include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • communication subsystem QQ231 may include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication.
  • Network QQ243B may encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof.
  • network QQ243B may be a cellular network, a Wi-Fi network, and/or a near-field network.
  • Power source QQ213 may be configured to provide alternating current (AC) or direct current (DC) power to components of UE QQ200.
  • AC alternating current
  • DC direct current
  • the features, benefits and/or functions described herein may be implemented in one of the components of UE QQ200 or partitioned across multiple components of UE QQ200. Further, the features, benefits, and/or functions described herein may be implemented in any combination of hardware, software or firmware.
  • communication subsystem QQ231 may be configured to include any of the components described herein.
  • processing circuitry QQ201 may be configured to communicate with any of such components over bus QQ202.
  • any of such components may be represented by program instructions stored in memory that when executed by processing circuitry QQ201 perform the corresponding functions described herein.
  • the functionality of any of such components may be partitioned between processing circuitry QQ201 and communication subsystem QQ231 .
  • the non-computationally intensive functions of any of such components may be implemented in software or firmware and the computationally intensive functions may be implemented in hardware.
  • Fig. 22 is a schematic block diagram illustrating an example of a virtualization environment QQ300 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks).
  • a node e.g., a virtualized base station or a virtualized radio access node
  • a device e.g., a UE, a wireless device or any other type of communication device
  • some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments QQ300 hosted by one or more of hardware nodes QQ330. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node may be entirely virtualized.
  • the functions may be implemented by one or more applications QQ320 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Virtualization environment QQ300 provides hardware QQ330 comprising processing circuitry QQ360 and memory QQ390.
  • Memory QQ390 contains instructions QQ395 executable by processing circuitry QQ360 whereby application QQ320 is operative to provide one or more of the features, benefits, and/or functions disclosed herein.
  • Virtualization environment QQ300 comprises general-purpose or special-purpose network hardware devices QQ330 comprising a set of one or more processors or processing circuitry QQ360, which may be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analogue hardware components or special purpose processors.
  • COTS commercial off-the-shelf
  • ASICs Application Specific Integrated Circuits
  • Each hardware device may comprise memory QQ390-1 which may be non-persistent memory for temporarily storing instructions QQ395 or software executed by processing circuitry QQ360.
  • Each hardware device may comprise one or more network interface controllers (NICs) QQ370, also known as network interface cards, which include physical network interface QQ380.
  • NICs network interface controllers
  • Each hardware device may also include non-transitory, persistent, machine-readable storage media QQ390-2 having stored therein software QQ395 and/or instructions executable by processing circuitry QQ360.
  • Software QQ395 may include any type of software including software for instantiating one or more virtualization layers QQ350 (also referred to as hypervisors), software to execute virtual machines QQ340 as well as software allowing it to execute functions, features and/or benefits described in relation with some embodiments described herein.
  • Virtual machines QQ340 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer QQ350 or hypervisor. Different embodiments of the instance of virtual appliance QQ320 may be implemented on one or more of virtual machines QQ340, and the implementations may be made in different ways.
  • processing circuitry QQ360 executes software QQ395 to instantiate the hypervisor or virtualization layer QQ350, which may sometimes be referred to as a virtual machine monitor (VMM).
  • Virtualization layer QQ350 may present a virtual operating platform that appears like networking hardware to virtual machine QQ340.
  • hardware QQ330 may be a standalone network node with generic or specific components.
  • Hardware QQ330 may comprise antenna QQ3225 and may implement some functions via virtualization.
  • hardware QQ330 may be part of a larger cluster of hardware (e.g. such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO) QQ3100, which, among others, oversees lifecycle management of applications QQ320.
  • CPE customer premise equipment
  • MANO management and orchestration
  • NFV network function virtualization
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • virtual machine QQ340 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of virtual machines QQ340, and that part of hardware QQ330 that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines QQ340, forms a separate virtual network element (VNE).
  • VNE virtual network elements
  • A virtual network function (VNF) is responsible for handling specific network functions that run in one or more virtual machines QQ340 on top of hardware networking infrastructure QQ330, and corresponds to application QQ320 in Fig. 22.
  • one or more radio units QQ3200 that each include one or more transmitters QQ3220 and one or more receivers QQ3210 may be coupled to one or more antennas QQ3225.
  • Radio units QQ3200 may communicate directly with hardware nodes QQ330 via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be effected with the use of control system QQ3230 which may alternatively be used for communication between the hardware nodes QQ330 and radio units QQ3200.
  • Fig. 23 is a schematic diagram illustrating an example of a telecommunication network connected via an intermediate network to a host computer in accordance with some embodiments.
  • a communication system includes telecommunication network QQ410, such as a 3GPP-type cellular network, which comprises access network QQ411, such as a radio access network, and core network QQ414.
  • Access network QQ411 comprises a plurality of base stations QQ412a, QQ412b, QQ412c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area QQ413a, QQ413b, QQ413c.
  • Each base station QQ412a, QQ412b, QQ412c is connectable to core network QQ414 over a wired or wireless connection QQ415.
  • a first UE QQ491 located in coverage area QQ413c is configured to wirelessly connect to, or be paged by, the corresponding base station QQ412c.
  • a second UE QQ492 in coverage area QQ413a is wirelessly connectable to the corresponding base station QQ412a. While a plurality of UEs QQ491, QQ492 are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station QQ412.
  • Telecommunication network QQ410 is itself connected to host computer QQ430, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm.
  • Host computer QQ430 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
  • Connections QQ421 and QQ422 between telecommunication network QQ410 and host computer QQ430 may extend directly from core network QQ414 to host computer QQ430 or may go via an optional intermediate network QQ420.
  • Intermediate network QQ420 may be one of, or a combination of more than one of, a public, private or hosted network; intermediate network QQ420, if any, may be a backbone network or the Internet; in particular, intermediate network QQ420 may comprise two or more sub-networks (not shown).
  • the communication system of Fig. 23 as a whole enables connectivity between the connected UEs QQ491, QQ492 and host computer QQ430.
  • the connectivity may be described as an over-the-top (OTT) connection QQ450.
  • Host computer QQ430 and the connected UEs QQ491, QQ492 are configured to communicate data and/or signaling via OTT connection QQ450, using access network QQ411, core network QQ414, any intermediate network QQ420 and possible further infrastructure (not shown) as intermediaries.
  • OTT connection QQ450 may be transparent in the sense that the participating communication devices through which OTT connection QQ450 passes are unaware of routing of uplink and downlink communications.
  • base station QQ412 may not or need not be informed about the past routing of an incoming downlink communication with data originating from host computer QQ430 to be forwarded (e.g., handed over) to a connected UE QQ491.
  • base station QQ412 need not be aware of the future routing of an outgoing uplink communication originating from the UE QQ491 towards the host computer QQ430.
  • Fig. 24 is a schematic diagram illustrating an example of a host computer communicating via a base station with a user equipment over a partially wireless connection in accordance with some embodiments.
  • host computer QQ510 comprises hardware QQ515 including communication interface QQ516 configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system QQ500.
  • Host computer QQ510 further comprises processing circuitry QQ518, which may have storage and/or processing capabilities.
  • processing circuitry QQ518 may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • Host computer QQ510 further comprises software QQ511, which is stored in or accessible by host computer QQ510 and executable by processing circuitry QQ518.
  • Software QQ511 includes host application QQ512.
  • Host application QQ512 may be operable to provide a service to a remote user, such as UE QQ530 connecting via OTT connection QQ550 terminating at UE QQ530 and host computer QQ510. In providing the service to the remote user, host application QQ512 may provide user data which is transmitted using OTT connection QQ550.
  • Communication system QQ500 further includes base station QQ520 provided in a telecommunication system and comprising hardware QQ525 enabling it to communicate with host computer QQ510 and with UE QQ530.
  • Hardware QQ525 may include communication interface QQ526 for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system QQ500, as well as radio interface QQ527 for setting up and maintaining at least wireless connection QQ570 with UE QQ530 located in a coverage area (not shown in Fig. 24) served by base station QQ520.
  • Communication interface QQ526 may be configured to facilitate connection QQ560 to host computer QQ510.
  • Connection QQ560 may be direct or it may pass through a core network (not shown in Fig. 24).
  • hardware QQ525 of base station QQ520 further includes processing circuitry QQ528, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • Base station QQ520 further has software QQ521 stored internally or accessible via an external connection.
  • Communication system QQ500 further includes UE QQ530 already referred to.
  • the hardware QQ535 may include radio interface QQ537 configured to set up and maintain wireless connection QQ570 with a base station serving a coverage area in which UE QQ530 is currently located.
  • Hardware QQ535 of UE QQ530 further includes processing circuitry QQ538, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions.
  • UE QQ530 further comprises software QQ531 , which is stored in or accessible by UE QQ530 and executable by processing circuitry QQ538.
  • Software QQ531 includes client application QQ532.
  • Client application QQ532 may be operable to provide a service to a human or non-human user via UE QQ530, with the support of host computer QQ510.
  • at host computer QQ510, an executing host application QQ512 may communicate with the executing client application QQ532 via OTT connection QQ550 terminating at UE QQ530 and host computer QQ510.
  • client application QQ532 may receive request data from host application QQ512 and provide user data in response to the request data.
  • OTT connection QQ550 may transfer both the request data and the user data.
  • Client application QQ532 may interact with the user to generate the user data that it provides. It is noted that host computer QQ510, base station QQ520 and UE QQ530 illustrated in Fig. 24 may be similar or identical to host computer QQ430, one of base stations QQ412a, QQ412b, QQ412c and one of UEs QQ491, QQ492 of Fig. 23, respectively.
  • OTT connection QQ550 has been drawn abstractly to illustrate the communication between host computer QQ510 and UE QQ530 via base station QQ520, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • Network infrastructure may determine the routing, which it may be configured to hide from UE QQ530 or from the service provider operating host computer QQ510, or both.
  • While OTT connection QQ550 is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network).
  • Wireless connection QQ570 between UE QQ530 and base station QQ520 is in accordance with the teachings of the embodiments described throughout this disclosure.
  • One or more of the various embodiments improve the performance of OTT services provided to UE QQ530 using OTT connection QQ550, in which wireless connection QQ570 forms the last segment.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring OTT connection QQ550 may be implemented in software QQ511 and hardware QQ515 of host computer QQ510 or in software QQ531 and hardware QQ535 of UE QQ530, or both.
  • sensors may be deployed in or in association with communication devices through which OTT connection QQ550 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software QQ511, QQ531 may compute or estimate the monitored quantities.
  • the reconfiguring of OTT connection QQ550 may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station QQ520, and it may be unknown or imperceptible to base station QQ520. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling facilitating host computer QQ510’s measurements of throughput, propagation times, latency and the like.
  • Figs. 25 and 26 are schematic flow diagrams illustrating examples of methods implemented in a communication system including, e.g. a host computer, and optionally also a base station and a user equipment in accordance with some embodiments.
  • Fig. 25 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figs. 20 to 24. For simplicity of the present disclosure, only drawing references to Fig. 25 will be included in this section.
  • the host computer provides user data.
  • substep QQ611 (which may be optional) of step QQ610, the host computer provides the user data by executing a host application.
  • step QQ620 the host computer initiates a transmission carrying the user data to the UE.
  • step QQ630 the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure.
  • step QQ640 the UE executes a client application associated with the host application executed by the host computer.
  • Fig. 26 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figs. 20 to 24. For simplicity of the present disclosure, only drawing references to Fig. 26 will be included in this section.
  • the host computer provides user data.
  • the host computer provides the user data by executing a host application.
  • the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure.
  • step QQ730 (which may be optional), the UE receives the user data carried in the transmission.
  • Figs. 27 and 28 are schematic diagrams illustrating examples of methods implemented in a communication system including a host computer, a base station and a user equipment in accordance with some embodiments.
  • Fig. 27 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figs. 20 to 24. For simplicity of the present disclosure, only drawing references to Fig. 27 will be included in this section.
  • step QQ810 (which may be optional) the UE receives input data provided by the host computer. Additionally or alternatively, in step QQ820, the UE provides user data.
  • substep QQ821 (which may be optional) of step QQ820, the UE provides the user data by executing a client application.
  • substep QQ811 (which may be optional) of step QQ810, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep QQ830 (which may be optional), transmission of the user data to the host computer. In step QQ840 of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.
  • Fig. 28 is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment.
  • the communication system includes a host computer, a base station and a UE which may be those described with reference to Figs. 20 to 24. For simplicity of the present disclosure, only drawing references to Fig. 28 will be included in this section.
  • the base station receives user data from the UE.
  • the base station initiates transmission of the received user data to the host computer.
  • step QQ930 (which may be optional)
  • the host computer receives the user data carried in the transmission initiated by the base station.
  • a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values;
  • a method performed by a wireless device for encoding an original image comprising: encoding at least a portion of the original image according to multiple coding modes to obtain multiple encoded candidate image portions;
  • a method performed by a wireless device for selecting an encoder profile for an encoder comprising:
  • a method performed by a network node or device for image fidelity measure determination comprising:
  • a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values;
  • a method performed by a network node or device for encoding an original image comprising:
  • an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on the respective image fidelity measures.
  • a method performed by a network node or device for selecting an encoder profile for an encoder comprising:
  • a wireless device comprising processing circuitry configured to perform any of the steps of any of the Group A embodiments.
  • a network node or device, such as a base station, comprising processing circuitry configured to perform any of the steps of any of the Group B embodiments.
  • a user equipment (UE) comprising:
  • an antenna configured to send and receive wireless signals
  • radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the Group A embodiments;
  • an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry
  • a battery connected to the processing circuitry and configured to supply power to the UE.
  • a communication system including a host computer comprising:
  • a communication interface configured to forward the user data to a cellular network for transmission to a user equipment (UE),
  • UE user equipment
  • the cellular network comprises a base station having a radio interface and processing circuitry, the base station’s processing circuitry configured to perform any of the steps of any of the Group B embodiments.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data
  • the UE comprises processing circuitry configured to execute a client application associated with the host application.
  • the host computer initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the base station performs any of the steps of any of the Group B embodiments.
  • the method of embodiment 16 further comprising, at the base station, transmitting the user data.
  • a user equipment configured to communicate with a base station, the UE comprising a radio interface and processing circuitry configured to perform any of the steps of any of the Group A embodiments.
  • a communication system including a host computer comprising:
  • processing circuitry configured to provide user data
  • a communication interface configured to forward user data to a cellular network for transmission to a user equipment (UE),
  • UE user equipment
  • the UE comprises a radio interface and processing circuitry, the UE’s components configured to perform any of the steps of any of the Group A embodiments.
  • the cellular network further includes a base station configured to communicate with the UE.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data
  • the UE’s processing circuitry is configured to execute a client application associated with the host application.
  • a communication system including a host computer comprising:
  • UE user equipment
  • the UE comprises a radio interface and processing circuitry, the UE’s processing circuitry configured to perform any of the steps of any of the Group A embodiments.
  • the processing circuitry of the host computer is configured to execute a host application; and - the UE’s processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing request data
  • the UE’s processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data.
  • the host computer receiving user data transmitted to the base station from the UE, wherein the UE performs any of the steps of any of the Group A embodiments.
  • a communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station, wherein the base station comprises a radio interface and processing circuitry, the base station’s processing circuitry configured to perform any of the steps of any of the Group B embodiments.
  • the communication system of embodiment 34 further including the base station.
  • the processing circuitry of the host computer is configured to execute a host application
  • the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.
  • a method for determining an image fidelity measure for an image comprising: determining a first map representing, for each pixel in at least a portion of the image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image;
  • a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values;
  • a method for encoding an original image comprising:
  • an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on the respective image fidelity measures.
  • a method for selecting an encoder profile for an encoder comprising:
  • a device configured to determine an image fidelity measure for an image.
  • the device is configured to
  • a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values;
  • a device configured to encode an original image.
  • the device is configured to
  • a device configured to select an encoder profile for an encoder.
  • the device is configured to encode at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images;
  • a wireless device comprising a device according to any one of the embodiments 44 to 46.
  • a network node comprising a device according to any one of the embodiments 44 to 46.
  • a network device comprising a device according to any one of the embodiments 44 to 46.
  • a computer program comprising instructions, which when executed by at least one processor, cause the at least one processor to
  • a second map as an aggregation of a third map representing, for each pixel in the at least a portion of the image, a local variability in pixel values and a fourth map representing, for each corresponding pixel in the reference image, a local variability in pixel values;
  • a computer program comprising instructions, which when executed by at least one processor, cause the at least one processor to
  • an encoded candidate image portion as encoded representation of the at least a portion of the original image at least partly based on the respective image fidelity measures.
  • a computer program comprising instructions, which when executed by at least one processor, cause the at least one processor to encode at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images;
  • a computer-program product comprising a computer-readable medium having stored thereon a computer program of any one of the embodiments 50 to 52.
  • An apparatus for determining an image fidelity measure for an image comprises: a module for determining a first map representing, for each pixel in at least a portion of the image, a distortion in pixel values between the pixel and a corresponding pixel in a reference image;
  • An apparatus for encoding an original image comprises:
  • a module for determining, for each of the multiple decoded candidate image portions, a respective image fidelity measure using the original image as reference image;
  • An apparatus for selecting an encoder profile for an encoder comprises: a module for encoding at least one original image of a test set using multiple encoder profiles to obtain multiple encoded images;
  • a module for determining, for each of the multiple decoded images, a respective image fidelity measure using the at least one original image as reference image;
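  • To make the encoding and profile-selection embodiments above concrete, the following is a minimal sketch of candidate selection driven by an image fidelity measure. It is illustrative only: encode, decode, and fidelity are hypothetical placeholders for whatever codec and measure implementation is used, and treating a lower mean error as better fidelity is an assumption rather than something stated in the embodiments.

        def select_best_candidate(original, coding_modes, encode, decode, fidelity):
            # Encode the image portion under each coding mode, decode each
            # candidate, and keep the candidate whose decoded version scores
            # best against the original, which serves as the reference image.
            best_mode, best_bits, best_err = None, None, float("inf")
            for mode in coding_modes:
                bits = encode(original, mode)           # encoded candidate portion
                err = fidelity(original, decode(bits))  # image fidelity measure
                if err < best_err:
                    best_mode, best_bits, best_err = mode, bits, err
            return best_mode, best_bits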
  • PSNR peak signal to noise ratio
  • MS-SSIM multiscale SSIM
  • IW-PSNR information content weighted PSNR
  • PSNR-HVS PSNR adapted to the human visual system
  • PSNR-HVS-M PSNR-HVS with contrast masking
  • a mean error ME(I_1, I_2) was determined as a normalized sum over the entire image of a visual distortion map (VDM), which itself was constructed by weighting (pixel-wise) a distortion map (DM) by a visual importance map (VIM):

    Equation 1:  $\mathrm{ME}(I_1, I_2) = \frac{1}{H \cdot W} \sum_{i=1}^{H} \sum_{j=1}^{W} \mathrm{VDM}(i, j)$, with $\mathrm{VDM}(i, j) = \mathrm{DM}(i, j) \cdot \mathrm{VIM}(i, j)$

    where H is the picture height and W is the picture width.
  • DM was determined from the two images as:

    Equation 2:  $\mathrm{DM}(i, j) = \left| I_1(i, j) - I_2(i, j) \right|^{p}$

    wherein $I_1(i, j)$ and $I_2(i, j)$ denote the pixel values of the pixels at coordinate $(i, j)$ of, respectively, the first (reference) and second (distorted) images, and the parameter $p$ is a power parameter.
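  • as a minimal sketch of Equation 2 (illustrative only; NumPy is assumed, and the default power p = 2 is an assumption rather than a value taken from this disclosure):

        import numpy as np

        def distortion_map(ref: np.ndarray, dist: np.ndarray, p: float = 2.0) -> np.ndarray:
            # Equation 2: pixel-wise distortion |I1(i, j) - I2(i, j)|^p.
            return np.abs(ref.astype(np.float64) - dist.astype(np.float64)) ** p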
  • the VIM was the result of an aggregation (pixel-wise maxima) of two variability maps (VMs):

    Equation 3:  $\mathrm{VIM}(i, j) = \max\bigl(\mathrm{VM}_1(i, j), \mathrm{VM}_2(i, j)\bigr)$

    the variability maps themselves being calculated as non-linearly mapped and normalized versions of the following variance maps:
  • the variance map $\mathrm{var}_x$ represents the variance calculated in a small neighbourhood around the current pixel at location $(i, j)$. This local variance was calculated using a box filter (uniform filter) and can be explicitly written as:

    $\mathrm{var}_x(i, j) = \frac{1}{N^2} \sum_{(m, n) \in W_N} \bigl( I_x(i+m, j+n) - \mu_x(i, j) \bigr)^2$, with $\mu_x(i, j) = \frac{1}{N^2} \sum_{(m, n) \in W_N} I_x(i+m, j+n)$

    where $W_N$ is the $N \times N$ window of offsets centred on the current pixel, N specifies the size of the local analysis window, and $x \in \{1, 2\}$ indexes the reference and distorted images.
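  • a compact sketch of the whole measure (Equations 1-3) follows. SciPy's uniform_filter implements the box filter; the power-law mapping with exponent gamma, scaled to [0, 1], stands in for the non-linear mapping and normalization of the variance maps, whose exact form is not reproduced above, so gamma and the window size n are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(img: np.ndarray, n: int = 7) -> np.ndarray:
            # Local variance in an N x N window via a box (uniform) filter:
            # var = E[x^2] - (E[x])^2, evaluated per pixel.
            img = img.astype(np.float64)
            mean = uniform_filter(img, size=n)
            mean_sq = uniform_filter(img * img, size=n)
            return np.maximum(mean_sq - mean * mean, 0.0)  # clamp rounding noise

        def visual_importance_map(ref, dist, n=7, gamma=0.5):
            # Equation 3: pixel-wise maximum of the two variability maps.
            vms = []
            for img in (ref, dist):
                vm = local_variance(img, n) ** gamma  # assumed non-linear mapping
                vms.append(vm / vm.max() if vm.max() > 0 else vm)
            return np.maximum(vms[0], vms[1])

        def mean_error(ref, dist, p=2.0, n=7):
            # Equation 1: mean over the picture of DM weighted pixel-wise by VIM.
            dm = np.abs(ref.astype(np.float64) - dist.astype(np.float64)) ** p
            vdm = dm * visual_importance_map(ref, dist, n)
            return vdm.mean()  # equals (1 / (H * W)) * sum over all pixels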
  • $\mathrm{ME}(Y_1, Y_2)$, $\mathrm{ME}(U_1, U_2)$, and $\mathrm{ME}(V_1, V_2)$ are, respectively, the errors of the luma channel and of the two chroma channels
  • c is the linear mixing coefficient
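  • the exact luma/chroma combination formula is not reproduced above; a minimal sketch, assuming the two chroma errors are averaged and then linearly mixed with the luma error via the coefficient c:

        def combined_error(me_y: float, me_u: float, me_v: float, c: float = 0.25) -> float:
            # Hypothetical combination: luma error linearly mixed with the mean
            # chroma error via c (the exact formula is not given above).
            return (1.0 - c) * me_y + c * 0.5 * (me_u + me_v)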
  • VIIQA-JPG had parameters adjusted for JPEG distortion
  • VIIQA-J2K had parameters adjusted for JPEG 2000 distortion
  • VIIQA had parameters adjusted for combined JPEG and JPEG 2000 distortion.
  • Sigmoidal mapping according to equations 3 and 4 in [8] was applied before computing Pearson’s linear correlation coefficient (PLCC) on the linearized scores produced by the mapping.
  • the beta parameters $\beta_1$–$\beta_5$ were found by minimizing the root mean square error (RMSE) between the sigmoidally mapped image fidelity measures and the MOS or DMOS values assigned to each image pair by human viewers, for all databases.
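  • a sketch of this evaluation protocol, assuming the five-parameter logistic of [8]; the initial guesses are illustrative, and SciPy's curve_fit performs the least-squares fit, which is equivalent to minimizing the RMSE:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import pearsonr

        def logistic5(x, b1, b2, b3, b4, b5):
            # Five-parameter mapping of [8]: a logistic term plus a linear term.
            return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

        def evaluate(scores, mos):
            # Fit the sigmoid to the subjective scores, then report PLCC and
            # RMSE on the linearized (mapped) objective scores.
            scores, mos = np.asarray(scores, float), np.asarray(mos, float)
            p0 = [mos.max(), 0.1, scores.mean(), 0.1, mos.mean()]  # illustrative
            params, _ = curve_fit(logistic5, scores, mos, p0=p0, maxfev=20000)
            mapped = logistic5(scores, *params)
            plcc, _ = pearsonr(mapped, mos)
            rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
            return plcc, rmse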
  • RMSE root mean square error
  • the results obtained on the different IQA databases confirmed very good correlation of the image fidelity measures of the present invention with the human subjective scores, as well as their robustness and consistent behavior across databases.
  • the image fidelity measures of the present invention outperformed the prior-art image fidelity measures when assessing both the PLCC and the standard deviation of the PLCC, the latter being a measure of the consistency of the image fidelity measures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image fidelity measure is determined for an image (10) by determining a first map representing, for each pixel (14) in at least a portion (12) of the image (10), a distortion in pixel values between the pixel (14) and a corresponding pixel (24) in a reference image (20). A second map is determined as an aggregation of a third map representing, for each pixel (14) in the at least a portion (12) of the image (10), a local variability in pixel values, and of a fourth map representing, for each corresponding pixel (24) in the reference image (20), a local variability in pixel values. The image fidelity measure is then determined based on the first map and the second map.
PCT/EP2018/073239 2018-08-29 2018-08-29 Mesure de fidélité d'image WO2020043280A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2018/073239 WO2020043280A1 (fr) 2018-08-29 2018-08-29 Mesure de fidélité d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2018/073239 WO2020043280A1 (fr) 2018-08-29 2018-08-29 Mesure de fidélité d'image

Publications (1)

Publication Number Publication Date
WO2020043280A1 true WO2020043280A1 (fr) 2020-03-05

Family

ID=63517858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/073239 WO2020043280A1 (fr) 2018-08-29 2018-08-29 Mesure de fidélité d'image

Country Status (1)

Country Link
WO (1) WO2020043280A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4068779A1 (fr) * 2021-03-31 2022-10-05 Hulu, LLC Validation croisée de codage vidéo

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1727088A1 (fr) * 2005-05-25 2006-11-29 Thomson Licensing Procédé de déterminer la qualité d'un image.
US20170070745A1 (en) * 2014-03-10 2017-03-09 Euclid Discoveries, Llc Perceptual Optimization for Model-Based Video Encoding
CN104361593B (zh) * 2014-11-14 2017-09-19 南京大学 一种基于hvs和四元数的彩色图像质量评价方法
WO2018140158A1 (fr) * 2017-01-30 2018-08-02 Euclid Discoveries, Llc Caractérisation vidéo pour codage intelligent sur la base d'une optimisation de qualité perceptuelle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1727088A1 (fr) * 2005-05-25 2006-11-29 Thomson Licensing Procédé de déterminer la qualité d'un image.
US20170070745A1 (en) * 2014-03-10 2017-03-09 Euclid Discoveries, Llc Perceptual Optimization for Model-Based Video Encoding
CN104361593B (zh) * 2014-11-14 2017-09-19 南京大学 一种基于hvs和四元数的彩色图像质量评价方法
WO2018140158A1 (fr) * 2017-01-30 2018-08-02 Euclid Discoveries, Llc Caractérisation vidéo pour codage intelligent sur la base d'une optimisation de qualité perceptuelle

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
BOVIK A C ET AL: "Image Quality Assessment: From Error Visibility to Structural Similarity", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 13, no. 4, 1 April 2004 (2004-04-01), pages 600 - 612, XP011110418, ISSN: 1057-7149, DOI: 10.1109/TIP.2003.819861 *
CALLET; AUTRUSSEAU, SUBJECTIVE QUALITY ASSESSMENT IRCCYN/IVC DATABASE, 2005, Retrieved from the Internet <URL:http://ivc.univ-nantes.fr/en/databases/Subjective_Database>
HARALOCK ET AL.: "Textural Features for Image Classification", IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, vol. 3, no. 6, 1973, pages 610 - 621
LARSON ET AL.: "Can visual fixation patterns improve image fidelity assessment?", 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2008, 2008, pages 2572 - 2575, XP031374566
LARSON; CHANDLER: "Most apparent distortion: full-reference image quality assessment and the role of strategy", JOURNAL OF ELECTRONIC IMAGING, vol. 19, no. 1, 2010, pages 011006, Retrieved from the Internet <URL:https://computervisiononline.com/dataset/1105138666>
LIU ET AL.: "CID:IQ - A New Image Quality Database", International Conference on Image and Signal Processing (ICISP), Lecture Notes in Computer Science, vol. 8509, Springer International Publishing, 2014, pages 193 - 202
PONOMARENKO ET AL., IMAGE DATABASE TID2013: PECULIARITIES, RESULTS AND PERSPECTIVES, SIGNAL PROCESSING: IMAGE COMMUNICATION, vol. 30, 2015, pages 57 - 77
SHEIKH ET AL., LIVE IMAGE QUALITY ASSESSMENT DATABASE RELEASE, vol. 2, 2004, Retrieved from the Internet <URL:http://live.ece.utexas.edu/research/quality>
SHEIKH ET AL.: "A Statistical Evaluation of Recent Full Reference Image quality Assessment Algorithms", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 15, no. 11, 2006, pages 3441 - 3452, XP055170592, DOI: doi:10.1109/TIP.2006.881959
ZHANG BO ET AL: "Gradient magnitude similarity deviation on multiple scales for color image quality assessment", 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 5 March 2017 (2017-03-05), pages 1253 - 1257, XP033258619, DOI: 10.1109/ICASSP.2017.7952357 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4068779A1 (fr) * 2021-03-31 2022-10-05 Hulu, LLC Validation croisée de codage vidéo
US11622116B2 (en) 2021-03-31 2023-04-04 Hulu, LLC Cross-validation of video encoding

Similar Documents

Publication Publication Date Title
US11394978B2 (en) Video fidelity measure
CN112106370B (zh) 基于优先化排序的变换而优化动态点云的系统和方法
WO2019137749A1 (fr) Détermination de longueur de filtre en vue d'un dégroupage pendant le codage et/ou le décodage d'une vidéo
WO2021227833A1 (fr) Procédé et appareil de fourniture de service périphérique
WO2020159430A1 (fr) Nœuds de réseau et procédés exécutés dans ceux-ci et conçus pour prendre en charge un transfert intercellulaire d&#39;un dispositif sans fil
US20240195994A1 (en) Method to determine encoder parameters
US20230146433A1 (en) Evaluating overall network resource congestion before scaling a network slice
US20200245213A1 (en) Method for evaluating cell quality of cells using beamforming
US11694346B2 (en) Object tracking in real-time applications
US20240163471A1 (en) Generating a motion vector predictor list
WO2023274149A1 (fr) Procédé, appareil d'assurance d'accord sur le niveau de service dans un réseau mobile
EP3704803B1 (fr) Attribution de porteuse de secteur sensible à mu-mimo
WO2020043280A1 (fr) Mesure de fidélité d'image
US12089246B2 (en) Methods for separating reference symbols and user data in a lower layer split
WO2020139178A1 (fr) Appareils et procédés destinés à une double connectivité
WO2020094229A1 (fr) Gestion de segments vidéo
WO2020141123A1 (fr) Dérivation de mode intra le plus probable basée sur un historique
US11727602B2 (en) Resolution of a picture
US20240171241A1 (en) Joint beamforming weights and iq data scaling approach for improved fronthaul
US20230403548A1 (en) Method and apparatus for terminal device behavior classification
US11791959B2 (en) Methods, apparatus and machine-readable mediums for signalling in a base station
WO2023211347A1 (fr) États de déclenchement apériodiques inactifs pour économie d'énergie

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18765589

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18765589

Country of ref document: EP

Kind code of ref document: A1