EP3999946A1 - Image transformation - Google Patents

Image transformation

Info

Publication number
EP3999946A1
Authority
EP
European Patent Office
Prior art keywords
image
transformations
combinations
user
processing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19753007.4A
Other languages
German (de)
English (en)
Inventor
Milos KOMARCEVIC
Stephen Busch
Thanh Than AN
Maurizio FILIPPONE
Joseph Meehan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP3999946A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/24: Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • This invention relates to visually transforming images: for example still photographs or frames or other sub-units of a video stream.
  • a second approach is to provide the user with a set of predefined effects each of which involves applying a combination of image adjustments.
  • These effects are commonly known as filters.
  • As an example, a single filter might involve applying a low-radius blur, emphasising skin-tone colours and increasing brightness.
  • By applying a filter, an inexperienced user can quickly enhance a photograph with a sophisticated effect that involves multiple image adjustments.
  • However, filters only give the ability to apply a limited set of predefined effects. This limits their use in many situations, and the limited choice can cause popular filters to be viewed as clichéd.
  • the transformations involved in a filter could be defined manually or by an artificial intelligence (AI) algorithm.
  • a typical current UI might be provided with an element such as a slider by means of which a single picture quality (PQ) metric, such as sharpness or noise, can be adjusted.
  • altering one UI element might actually change the other metrics: for example increasing sharpness might affect noise. It would be desirable to avoid having an improvement in one metric result in a deterioration in another metric.
  • Images are conventionally processed through a series of successive transformations.
  • the means for collectively implementing these transformations may be known as an image signalling processing pipeline (ISP).
  • Machine learning techniques can be used to tune a PQ metric in an ISP.
  • Objective targets can be used to tune the PQ metrics. If there are N PQ metrics, this results in a Pareto front in N dimensions, comprising multiple points in the space of PQ metrics, each point corresponding to a respective set of PQ metric values.
  • the Pareto front defines the value of each PQ metric that corresponds to a best picture quality according to the objective targets.
  • Figure 1 shows a block diagram of a current system.
  • a user 2 selects a parameter with a UI element such as a slider 3. This invokes the changing of a single image parameter 1 specifically associated with the slider.
  • the ISP transforms the image in accordance with the adjustment commanded by the user.
  • Figure 2 shows the corresponding process flow.
  • According to one aspect, there is provided an image processing device comprising: a processor equipped to apply any of a set of transformations to an image, each transformation transforming a respective visual characteristic of the image, the processor being configured so as to be capable of applying each transformation with any of a range of strengths; the processor being configured to access a memory storing a series of combinations, each combination comprising a plurality of the transformations and a representation of a strength for each of those transformations, and to cause a user interface to present to a user a user interface element whereby a user can select a value in a range; the processor being further configured to, in response to a value being selected through the user interface element, select one of the combinations in dependence on that value and apply to an image the transformations comprised in that combination with the respective strength indicated in the selected combination.
  • According to another aspect, there is provided a method for forming a series of combinations, each combination comprising a plurality of image transformations and a representation of a strength for each of those transformations, the method comprising: selecting a quality metric; for each of a plurality of trial combinations of image transformations and respective associated strengths, transforming an image according to that set of transformations and respective associated strengths to form a transformed image; estimating the quality of the transformed image on the quality metric; grouping the plurality of trial combinations into multiple groups, the trial combinations of each group having one or more common transformation characteristics; selecting from each group the combination having the highest estimated quality; and assigning the selected combinations to the said series of combinations.
  • the combinations mentioned in the preceding paragraph may be determined by this method.
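  • Purely as an illustrative sketch of the selection procedure described above (not code from the patent), the following Python fragment assumes hypothetical helpers apply_transformations(image, combo) and estimate_quality(image), and groups the trial combinations by the strength of one chosen transformation before keeping the best-scoring member of each group:

```python
from collections import defaultdict

def build_series(image, trial_combinations, apply_transformations, estimate_quality, group_key):
    """Form a series of good combinations from a set of trial combinations.

    trial_combinations: list of dicts mapping transformation name -> strength.
    group_key: function returning the common characteristic used for grouping,
               e.g. lambda combo: combo["sharpness"].
    """
    groups = defaultdict(list)
    for combo in trial_combinations:
        transformed = apply_transformations(image, combo)  # transform the image with this trial combination
        score = estimate_quality(transformed)              # score the result on the chosen quality metric
        groups[group_key(combo)].append((score, combo))

    # From each group, keep the combination with the highest estimated quality.
    return [max(members, key=lambda m: m[0])[1] for members in groups.values()]
```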
  • the transformations may include two or more transformations in the following list: altering white balance, altering colours, applying noise filtering, applying sharpness filtering, adding texture, altering brightness, and another image signal processing transformation not listed above. These can be helpful for improving the quality of an image.
  • the strength may include two or more of the parameters in the following list: white colour temperature, colour mapping(s) from one or more input colours to one or more output colours, a noise filtering strength, a sharpness filtering strength, a texture addition strength, a black level and another image signal processing parameter not listed above. These can be helpful for improving the quality of an image.
  • Each combination may include two or more transformations in the said list. This can help to further improve image quality.
  • the user interface may be configured to present, simultaneously with presenting the user interface element, the image as transformed by the transformations comprised in any selected combination. This can assist a user in improving an image.
  • the user interface element may comprise a displayed region and a marker moveable by a user within the region, the location of the marker in the region representing the selected value. This can be a convenient user interface configuration.
  • each of the combinations may be substantially optimal for combinations of transformations having at least one transformation and associated strength in common with the respective combination. This can allow a user to readily achieve a good level of quality.
  • the memory may contain the combinations from manufacture of the image processing device. Alternatively, it may be loaded with the combinations later, for example by downloading them. These can provide convenient configuration options.
  • the image processing device may comprise an image sensor.
  • the said image may be an image captured by the image sensor.
  • the user may capture the image and then edit it locally.
  • a processor of the device may be configured to cause the user interface to present to a user the user interface element.
  • the device may comprise memory storing code executable by the processor to apply any of the set of transformations to an image.
  • the said quality metric may be a weighted combination of estimations of two or more of the following: edge acutance, noise, SSIM (structural similarity index), PSNR (peak signal to noise ratio), MOS (mean opinion score), colour uniformity, texture blur, chroma level, local geometry distortion and lateral chromatic displacement.
  • one or more other metrics may also be input to such a weighted combination.
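  • Purely as a sketch (the estimator functions and weights below are illustrative assumptions, not values from the patent), such a weighted combination could be computed as follows:

```python
def combined_quality(image, estimators, weights):
    """Weighted combination of per-metric quality estimates.

    estimators: dict of name -> function(image) returning a float, e.g.
                {"acutance": estimate_acutance, "noise": estimate_noise}.
    weights:    dict of name -> weight; a negative weight can penalise metrics
                such as noise where a lower value means better quality.
    """
    return sum(weights[name] * estimate(image) for name, estimate in estimators.items())
```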
  • Figure 1 shows an example of a system for processing an image.
  • Figure 2 shows an example of a process flow in processing an image.
  • Figure 3 shows an example of an image processing device.
  • Figures 4 and 5 show example arrangements of PQ metric points.
  • Figures 6 and 7 show an example of a data flow for determining PQ metric weight combinations and implementing them to process a user’s image.
  • FIG. 3 shows a device suitable for implementing the present system.
  • the device comprises a housing 1. Inside the housing 1 are a camera 2, a processor 3, a memory 4, a display 5, a keypad 6 (which may be integrated with the display) and a communication interface 7.
  • the device could, for example, be a cellular telephone or a dedicated camera.
  • the camera 2 is capable of capturing images of the device’s surroundings and communicating them to the processor 3. Alternatively, the camera may be omitted and the processor could receive images from the communication interface 7.
  • the memory 4 stores in a non-transient form program code that is executable by the processor to cause it to perform the steps described of it herein.
  • the memory may also store images before, during and after their processing.
  • the display and keypad implement a user interface by which information can be presented to a user and input can be obtained from a user.
  • the communication interface may be a wired or a wireless interface.
  • the memory stores program code for allowing the processor to perform transformations on images.
  • a user may capture an image using the camera. That image may pass to the processor which stores it in the memory. The user can cause the processor to present the image on the display. The user may then wish to modify the image. To do that the user can cause the processor to present on the display and/or implement by the keypad a user interface by which the user can provide to the processor instructions for transforming the image.
  • the significance of those instructions and the manner in which the processor responds to them will be described in more detail below, but in summary the processor transforms the image in accordance with the user’s input and an algorithm pre-stored in the memory.
  • the transformed image can be stored in the memory and presented to the user. The user can then cause the processor to pass the transformed image to the communication interface so that it can be transmitted to a remote location for storage or for display to other people, e.g. on a website.
  • the image could be a still image or a frame or sub-frame of a video stream.
  • the image could be a representative frame of a video stream.
  • In response to the user’s command to transform the image, the processor could transform not just the individual image but the entirety, or a time-bounded segment, of the video stream.
  • the memory stores code for enabling the processor to implement transformations in each of multiple picture quality metrics.
  • the PQ metrics that the process may be able to adjust include sharpness, noise, brightness (across all brightness levels or only a subset such as highlights, lowlights or mid-tones), emphasis of one or more colours such as red, green and blue, white balance, blur, contrast, and saturation.
  • the processor may be capable of implementing the respective transformation with a defined strength.
  • the transformation may involve increasing the brightness of each pixel.
  • the original brightness of a given pixel may be designated B.
  • the relevant algorithm may cause the transformed brightness of that pixel to be kB or B+k, where k is a value representing the strength of the transformation.
  • the transformed brightness may be subject to a thresholding step to avoid it being outside the maximum available bounds.
  • Other suitable mechanisms for providing a variable transformation strength may be used.
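  • For instance, a minimal sketch of such a strength-controlled brightness transformation, assuming 8-bit pixel values and using NumPy (the multiplicative kB and additive B+k forms and the thresholding step follow the passage above):

```python
import numpy as np

def adjust_brightness(image, k, mode="multiply"):
    """Apply a brightness transformation of strength k to an 8-bit image."""
    img = image.astype(np.float32)
    out = img * k if mode == "multiply" else img + k
    # Thresholding step: keep the result inside the maximum available bounds.
    return np.clip(out, 0, 255).astype(np.uint8)
```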
  • the memory may be programmed to store combinations of transformation strengths that have previously been assessed as providing, when applied together, a subjectively or objectively good level of picture quality. Each of these combinations will be referred to as a good combination.
  • One way in which good combinations may be identified is as follows.
  • One or more measures are defined by which the quality of an image may be determined. These measures may be established in any convenient way: for example they could be by human voting (e.g. the average score when a number of suitably selected human subjects are asked to rate the quality of an image according to a defined subjective scale), by running a pre-programmed AI algorithm on an image to generate an output value, or according to a non-AI algorithm (e.g. comparing the image’s brightness histogram to a preferred format and outputting a value indicating the degree of correlation).
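  • As a concrete, hedged illustration of the non-AI measure mentioned above (the preferred target histogram is an assumed input), the image’s brightness histogram could be compared with a preferred histogram using a correlation coefficient:

```python
import numpy as np

def histogram_quality(image, preferred_hist, bins=64):
    """Return a value in [-1, 1] indicating how strongly the image's brightness
    histogram correlates with a preferred target histogram of the same length."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255), density=True)
    return float(np.corrcoef(hist, preferred_hist)[0, 1])
```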
  • One or more base images are selected. These images may be captured specifically for the purpose of training the system or may be identified from a library of suitable images.
  • When a PQ metric is not applied, it may be considered to be applied with a strength of zero. Some of the PQ metrics may be capable of being applied at a range of strengths. Others of the PQ metrics may only be capable of being applied (strength 1) or not applied (strength 0).
  • the set of transformed images may be formed from one of the base images or from multiple ones of the base images.
  • the PQ metric strengths with which each transformed image is formed may be selected randomly, may be evenly distributed over the entire space of possible strengths or may be concentrated on combinations of the strengths that are expected to work well.
  • Each transformed image is assessed by means of the selected quality measure to give a value representing the quality of the image.
  • figure 4 shows a two dimensional space in which PQ metric 1 is represented on the Y axis and PQ metric 2 is represented on the X axis.
  • the triangular points at 1 and 2 on the graph represent combinations of those PQ metrics with which a transformation has been applied to a base image.
  • a line or surface is defined connecting combinations of the determined points that have the highest quality measures. This is illustrated in figure 4 by the line joining points 2.
  • the line or surface may be defined in any convenient way.
  • the line or surface joins points in the PQ metric strength space that have been found to have locally maximal quality measures. For example, one option is to scan over the available values for one of the PQ metrics, to select at each of those available values the point having the greatest quality measure, and to join those points to form a line.
  • the line may be considered as a Pareto front.
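  • A hedged sketch of the scanning approach just described (names are illustrative): group the scored trial points by the value of one PQ metric and keep, at each value, the point with the greatest quality measure:

```python
def pareto_line(scored_points, scan_metric):
    """scored_points: list of (strengths_dict, quality) pairs for the trial images.
    Returns the locally best points joined in order of the scanned metric."""
    best_at = {}
    for strengths, quality in scored_points:
        key = strengths[scan_metric]
        if key not in best_at or quality > best_at[key][1]:
            best_at[key] = (strengths, quality)
    # Join the selected points in order of the scanned metric to form the line.
    return [best_at[key] for key in sorted(best_at)]
```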
  • Other more sophisticated numerical analysis processes such as Bayesian optimisation (BO) may be used, e.g. to infer the presence of maxima based on nearby points or to select between points of equal quality value.
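  • If Bayesian optimisation is chosen, one possible sketch with the scikit-optimize library (the search space, parameter names and the helper functions render and quality are all illustrative assumptions) holds one PQ metric strength fixed and searches the remaining strengths for the highest quality score:

```python
from skopt import gp_minimize
from skopt.space import Real

def best_strengths_for(fixed_sharpness, base_image, render, quality):
    """Find noise-filter and brightness strengths that maximise the quality
    measure for a fixed sharpness strength (gp_minimize minimises, so negate)."""
    space = [Real(0.0, 1.0, name="noise_filter"), Real(0.0, 2.0, name="brightness")]

    def objective(x):
        strengths = {"sharpness": fixed_sharpness, "noise_filter": x[0], "brightness": x[1]}
        return -quality(render(base_image, strengths))

    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    return {"sharpness": fixed_sharpness, "noise_filter": result.x[0], "brightness": result.x[1]}
```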
  • a line on that surface can subsequently be selected by any convenient method. Since the surface is defined by a set of good combinations, any line on that surface may be expected to yield good image quality. Optionally, the surface may be retained and used as will be described below.
  • When the user wishes to modify an original image, the processor provides the user with the facility to select a value from a range of values.
  • One option is a slider control on the display which can be moved by input from the user and whose position represents a selected value in the range.
  • Another option is to permit the user to enter a number representing a position in the available range. In each case, the significance of the entered value is that it represents a position along the line joining good points. For example, if there are 100 points along the line, ranked from 0 to 99 in order along the line, the position of the slider may represent a number in that range and signify the selection of a corresponding point on the line.
  • the processor looks up the set of PQ metric transformations in the memory, including their respective strengths, associated with that point. It then applies those transformations with their respective strengths to the original image to form a transformed image. These transformations can optionally be applied directly to image signal processing hardware in the processor 3 (see figure 3). That transformed image can be stored and/or displayed to the user. If required, the user can then choose a different point on the line, the transformations associated with that point will be applied to the original image and the resulting image displayed. In this way the user can, in effect, scroll along the line of good points, applying any of a set of locally good combinations of PQ metric transformations. This provides the user with a convenient way of applying and choosing between multiple combination transformations, each of which has been assessed as achieving a locally good level of quality.
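  • A minimal sketch of this lookup (the ordered list of good combinations and the transformation-applying helper are illustrative assumptions):

```python
def apply_slider_selection(original_image, slider_value, line_points, apply_transformations):
    """slider_value: float in [0, 1] reported by the UI element.
    line_points: ordered list of good combinations (dicts of transformation -> strength)."""
    index = round(slider_value * (len(line_points) - 1))   # e.g. 0..99 for a 100-point line
    combination = line_points[index]
    return apply_transformations(original_image, combination)
```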
  • This approach can combine objective (i.e. image-based) and subjective (i.e. user-based) tuning without reducing overall picture quality.
  • Multiple candidate combinations of PQ metrics can be defined, the quality measurement function generates so-called good points, and the user can scan, slide or scroll with relative simplicity between the transformations associated with those good points, keeping overall picture quality at a good level since each of the points is associated with a local maximum in one or more dimensions of the PQ metric strength space.
  • Bayesian optimization may be used to help determine the set of good points.
  • BO of the ISP tuning parameters for PQ metrics will generate a Pareto front that comprises multiple combination PQ metric points that all have so-called good picture quality.
  • the Pareto front could indicate a domain of PQ metric points that have so-called good picture quality.
  • the Pareto front could consist or substantially consist of such points.
  • the Pareto front represents a set of PQ metric points giving the best overall picture quality that has been found for a given subset of the available tuning parameters. PQ metric points not on the Pareto front are considered suboptimal in terms of overall picture quality.
  • the user is provided with the option to progress (e.g. with a slider UI element) along the Pareto front.
  • each Pareto front may be defined by selecting a PQ metric and then, for each of a range of possible values of that PQ metric, finding the combination of strengths of the other PQ metrics that gives the highest quality value. There may then be one Pareto front for each of the selected PQ metrics. The data defining each of those fronts may then be stored.
  • the user may select a PQ metric (e.g. sharpness), and then be provided with the ability to scan through a range of the line corresponding to that metric. This can provide the user with the ability to adjust the selected metric whilst a level of picture quality is automatically maintained. That level is preferably one that has a relatively high level of quality, most preferably the highest level of quality that the system has the ability to achieve for that setting of the selected metric.
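  • Continuing the hedged sketch from above (and reusing the hypothetical pareto_line helper), one front could be precomputed per selectable metric and stored, so that choosing a metric in the UI simply switches which stored line the slider traverses:

```python
def build_fronts(scored_points, selectable_metrics):
    """scored_points: list of (strengths_dict, quality) pairs from the trial images.
    Returns one stored Pareto line per PQ metric the user may select in the UI."""
    return {metric: pareto_line(scored_points, metric) for metric in selectable_metrics}

# Example: fronts = build_fronts(points, ["sharpness", "noise", "brightness"])
# If the user selects "sharpness", the slider scans along fronts["sharpness"],
# and the other strengths are taken from the stored combination at each step.
```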
  • the X axis indicates sharpness and the Y axis indicates noise.
  • one line shows a path that increases sharpness but also increases noise, and the other line is the Pareto front along which sharpness is increased with minimum noise (in this simplified example) at each step.
  • the user can change a PQ metric to his preference while maintaining good or even the best possible overall objective picture quality. This can help the user to find the best possible trade-off of conflicting PQ metrics. It can give some control to the user who can then adjust the system to his or her personal preferences.
  • this method can allow a good level of picture quality even when increasing one PQ metric (best sharpness for best noise, for example).
  • the metric weights for each point on the preferred line can be stored individually in a list at the operating device. Alternatively, they may be encoded more efficiently, for example so that the weights at some points can be determined by interpolation. Alternatively, the weights may be stored remotely and downloaded to the device when required.
  • the selection of the metric weights can be performed in advance, which can reduce processing power needed on the device of figure 3.
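  • As a hedged sketch of the more compact encoding mentioned above (the anchor points and parameter names are illustrative assumptions), only a few anchor parameter sets along the preferred line might be stored, with intermediate sets reconstructed by linear interpolation:

```python
import numpy as np

def interpolated_parameters(position, anchor_positions, anchor_params):
    """position: value in [0, 1] along the preferred line.
    anchor_positions: sorted 1-D array of positions at which parameter sets are stored.
    anchor_params: dict mapping parameter name -> array of values at those positions."""
    return {name: float(np.interp(position, anchor_positions, values))
            for name, values in anchor_params.items()}
```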
  • Examples of PQ metrics include those mentioned above and additionally SSIM, PSNR, MOS, colour uniformity, texture blur, chroma level, local geometry distortion and lateral chromatic displacement.
  • FIG. 6 illustrates a set of steps for implementing the invention at the device of figure 3.
  • the PQ metric weights are defined as described above (step 2).
  • a user chooses a PQ parameter that he or she wants to alter and sets the slider or other UI element to a desired value. This could be done in an image adjustment or a camera application or app.
  • the software reads from the array/list of tuning parameters that was defined in step 2.
  • the sets of tuning parameters (per PQ metric point) are chosen so that some or all the PQ metrics are in some way optimised, for example by means of a Bayesian optimisation framework.
  • For each available PQ metric point (e.g. on the Pareto front), the selected tuning parameters are read into software or into ISP hardware which can perform the image transformation.
  • the read parameters are used to program the ISP (step 4) and the ISP generates the output (step 5).
  • the image may be transformed in software.
  • the image is output in any suitable format, for example JPEG.
  • Figure 7 shows a block diagram of the present solution.
  • Block 4/5 represents a selected set of picture adjustment parameters.
  • the parameters are selected by iterating through the following steps: a candidate set of parameters is provided to an ISP 2, which modifies a training image in accordance with those parameters; the image is assessed according to a set of PQ measures at 1; and through a technique such as BO 3 the parameter set is evolved. These steps can be performed in advance.
  • a user 6 selects a parameter set using a UI slider 7. This triggers the choice of a parameter set from the block 4/5 which is provided to the real-time ISP 8 to transform a selected image accordingly.
  • When the Pareto front has more than two dimensions, it may, as indicated above, have more than one degree of freedom.
  • the dimensionality of the Pareto front may be up to N-1, where N is the number of PQ metrics being used. Where N is 2, the user may be presented with a single slider UI element which navigates the linear Pareto front. Where N is more than 2, the user may be presented with other options such as (i) additional sliders (up to N-1 sliders) which allow the entire Pareto front to be navigated, (ii) fewer than N-1 sliders which allow a previously selected subset of the Pareto front to be navigated, for example a subset in the form of a line that has been defined using a principal component analysis dimensionality reduction (PCADR) technique, or (iii) a hybrid of (i) and (ii). A sketch of such a reduction is given below.
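  • A hedged sketch of that dimensionality reduction using scikit-learn (the matrix of Pareto-front strength vectors is an assumed input): project the front onto its first principal component so that a single slider can traverse it:

```python
import numpy as np
from sklearn.decomposition import PCA

def slider_line_from_front(front_points, n_samples=100):
    """front_points: (n_points, n_metrics) array of strength vectors on the Pareto front.
    Returns n_samples strength vectors evenly spaced along the first principal component."""
    pca = PCA(n_components=1)
    coords = pca.fit_transform(front_points)                  # 1-D coordinate of each front point
    line = np.linspace(coords.min(), coords.max(), n_samples).reshape(-1, 1)
    return pca.inverse_transform(line)                        # map back to PQ-metric strength space
```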

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns an image processing device comprising: a processor equipped to apply any of a set of transformations to an image, each transformation transforming a respective visual characteristic of the image, the processor being configured so as to be capable of applying each transformation with any of a range of strengths; the processor being configured to access a memory storing a series of combinations, each combination comprising a plurality of transformations and a representation of a strength for each of those transformations, and to cause a user interface to present to a user a user interface element whereby a user can select a value in a range; the processor being further configured to, in response to a value being selected through the user interface element, select one of the combinations in dependence on that value and apply to an image the transformations comprised in that combination with the respective strength indicated in the selected combination. The user can thus conveniently choose the strength of an image transformation.
EP19753007.4A 2019-08-06 2019-08-06 Transformation d'image Pending EP3999946A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/071123 WO2021023378A1 (fr) 2019-08-06 2019-08-06 Transformation d'image

Publications (1)

Publication Number Publication Date
EP3999946A1 true EP3999946A1 (fr) 2022-05-25

Family

ID=67620424

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19753007.4A Pending EP3999946A1 (fr) 2019-08-06 2019-08-06 Transformation d'image

Country Status (2)

Country Link
EP (1) EP3999946A1 (fr)
WO (1) WO2021023378A1 (fr)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1614059A1 (fr) * 2003-03-19 2006-01-11 Nik Multimedia Inc. Selective enhancement of digital images
JP2012027687A (ja) * 2010-07-23 2012-02-09 Casio Comput Co Ltd Image processing apparatus and program
WO2018047293A1 (fr) * 2016-09-09 2018-03-15 Olympus Corporation Image acquisition device, image processing device, image processing method, image processing program, and image acquisition system

Also Published As

Publication number Publication date
WO2021023378A1 (fr) 2021-02-11

Similar Documents

Publication Publication Date Title
US11030722B2 (en) System and method for estimating optimal parameters
JP5479591B2 (ja) Image processing apparatus and method
JP2022519469A (ja) Image quality evaluation method and apparatus
CN107918929B (zh) Image fusion method, apparatus and system
US20130114894A1 (en) Blending of Exposure-Bracketed Images Using Weight Distribution Functions
EP1667066A1 (fr) Visual processing device, method, visual processing program, integrated circuit, display, imaging device, and mobile information terminal
CN104811684B (zh) Three-dimensional beautification method and apparatus for an image
US20090027732A1 (en) Image processing apparatus, image processing method, and computer program
JP4687673B2 (ja) Monotone conversion processing of a colour image
US7046400B2 (en) Adjusting the color, brightness, and tone scale of rendered digital images
CN113610723B (zh) Image processing method and related apparatus
US20170154437A1 (en) Image processing apparatus for performing smoothing on human face area
CN109242794A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN106815803A (zh) Picture processing method and apparatus
EP2398226A2 (fr) Image processing apparatus and image capturing apparatus
CN113658091A (zh) Image evaluation method, storage medium and terminal device
JP4001079B2 (ja) Monotone conversion processing of a colour image
CN110192388A (zh) Image processing device, digital camera, image processing program, and recording medium
US10218880B2 (en) Method for assisted image improvement
EP1206124A2 (fr) Method and apparatus for enhancing digital images using multiple selected images
Fischer et al. NICER: Aesthetic image enhancement with humans in the loop
US20120301042A1 (en) Image processing apparatus and program
EP3999946A1 (fr) Image transformation
US9094581B2 (en) Imaging device and distance information detecting method
JP7005215B2 (ja) Image processing apparatus, image processing method, and computer program

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220218

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
PUAG Search results despatched under rule 164(2) epc together with communication from examining division

Free format text: ORIGINAL CODE: 0009017

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240506

B565 Issuance of search results under rule 164(2) epc

Effective date: 20240506

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 11/60 20060101ALI20240430BHEP

Ipc: G06F 3/0484 20130101AFI20240430BHEP