EP4086744B1 - Gesture stroke recognition in touch-based user interface input - Google Patents

Gesture stroke recognition in touch-based user interface input

Info

Publication number
EP4086744B1
EP4086744B1 (application EP21305574.2A)
Authority
EP
European Patent Office
Prior art keywords
stroke
sub
features
gesture
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP21305574.2A
Other languages
German (de)
English (en)
Other versions
EP4086744A1 (fr)
Inventor
Udit ROY
Nibal NAYEF
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MyScript SAS
Original Assignee
MyScript SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MyScript SAS filed Critical MyScript SAS
Priority to EP21305574.2A
Priority to PCT/EP2022/060926 (WO2022233628A1)
Publication of EP4086744A1
Application granted
Publication of EP4086744B1
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/171Editing, e.g. inserting or deleting by use of digital ink
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments
    • G06V30/1423Image acquisition using hand-held instruments; Constructional details of the instruments the instrument generating sequences of position coordinates corresponding to handwriting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/333Preprocessing; Feature extraction
    • G06V30/347Sampling; Contour coding; Stroke extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/36Matching; Classification

Definitions

  • the present invention relates to the field of gesture recognition in touch-based user interfaces.
  • strokes input by a user fall into two categories: gesture strokes, i.e., strokes which are associated with realizing a defined action on the content, and non-gesture strokes, such as actual content (e.g., text, math, shape, etc.) being added by the user.
  • Existing gesture recognition techniques are rules-based. More specifically, they rely on a manual definition of a set of heuristics for recognizing a defined set of gesture types. While the performance of these techniques is generally acceptable, they typically perform poorly for more elaborate gesture types or atypical gesture strokes. In addition, updating these techniques to add new gesture types is difficult, because new heuristics must be developed for each new gesture type.
  • Prior-art patent document US2004/054701 A1 discloses a pen-based editing system for manipulating mathematical expressions.
  • the system allows a user to make conventional changes to expressions, such as copy and move, and also to work with the expressions in ways peculiar to the problem domain, including, for example, handling ambiguity, expression fragments and alternate recognitions.
  • the system is a generalization of an online recognizer for mathematical expressions.
  • the system uses the same basic recognition techniques as the online recognizer, however the input information available to the editor is more varied, including mixtures of known and unknown characters and positional relations.
  • Prior-art patent document EP 1 141 941 A2 discloses a method and system for recognizing user input information including cursive handwriting and spoken words.
  • a time-delayed neural network having an improved architecture is trained at the word level with an improved method, which, along with preprocessing improvements, results in a recognizer with greater recognition accuracy.
  • Preprocessing is performed on the input data and, for example, may include resampling the data with sample points based on the second derivative to focus the recognizer on areas of the input data where the slope change per time is greatest.
  • the input data is segmented, featurized and fed to the time-delayed neural network which outputs a matrix of character scores per segment.
  • the neural network architecture outputs a separate score for the start and the continuation of a character.
  • a dynamic time warp is run against dictionary words to find the most probable path through the output matrix for that word, and each word is assigned a score based on the least costly path that can be traversed through the output matrix.
  • the word (or words) with the overall lowest score (or scores) are returned.
  • a DTW is similarly used in training, whereby the sample ink only need be labeled at the word level.
  • the present invention addresses some of the recognized deficiencies of the prior art. Specifically, the present invention proposes a method for recognizing gesture strokes in user input applied onto an electronic document via a touch-based user interface, comprising: receiving data generated based on the user input, the data representing a stroke comprising a plurality of ink points and a plurality of associated timestamps; segmenting the plurality of ink points into a plurality of segments each corresponding to a respective sub-stroke of the stroke; determining at least one scale of the electronic document; generating a plurality of feature vectors based respectively on the plurality of segments; normalizing a subset of the features of the feature vectors according to the at least one scale; and applying the plurality of feature vectors as an input sequence representing the stroke to a trained stroke classifier to generate a vector of probabilities.
  • the stroke classifier may be implemented as a neural network.
  • the use of a neural network means that a new gesture type can be added easily with simple retraining of the stroke classifier on data including the new gesture type.
  • the electronic document may include handwritten content and/or typeset content.
  • sub-stroke segmentation allows a sequential representation that follows the path of the stroke to be obtained. Each segment corresponds as such to a local description of the stroke. Compared to representing the stroke as a mere sequence of points, sub-stroke segmentation makes it possible to maintain path information (i.e., the relationships between points within each segment) and results in a reduction in computation time.
  • the stroke classifier is implemented as a recurrent Bidirectional Long Short-Term Memory (BLSTM) neural network.
  • the use of a recurrent BLSTM neural network means that the network includes memory blocks which enable it to learn long-term dependencies and to remember information over time. This type of network permits the stroke classifier to handle a sequence of vectors (an entire stroke) and to account for temporal dependencies between successive sub-strokes (i.e., to remember the details of the path of the stroke).
  • the method further comprises generating a plurality of corrected timestamps based on the plurality of timestamps.
  • the correction of the plurality of timestamps is advantageous to remove artifacts related to device capture and to improve gesture stroke recognition. Indeed, due to device capture issues, it is common that certain timestamps do not correspond to the exact instants at which their respective ink points are drawn. For example, in certain devices, the timestamps assigned to ink points correspond to the time at which an event log containing the ink points is sent to a processor unit, not the precise instants at which the ink points are captured. As such, different successive ink points can have the same timestamp value in the received data. Correction of the plurality of timestamps ensures that the timestamps better reflect the exact instants at which the respective ink points are drawn by the user. Improved gesture recognition is thereby achieved.
  • generating the plurality of corrected timestamps based on the plurality of timestamps comprises: determining a function that approximates an original timestamp curve of the plurality of ink points; and obtaining each corrected timestamp by projecting the corresponding ink point onto the approximating function.
  • the method further comprises resampling the plurality of ink points to generate a second plurality of ink points and an associated second plurality of timestamps, which are then used in place of the original ink points and timestamps for the subsequent steps of the method.
  • Resampling the plurality of ink points is advantageous to ensure uniform performance across different devices. Indeed, as devices typically use different sampling techniques, the data received may differ in terms of sampling characteristics between devices.
  • the second plurality of timestamps are characterized by a fixed duration between consecutive timestamps.
  • the resampling comprises interpolating the plurality of ink points and associated plurality of timestamps to generate the second plurality of ink points and the associated second plurality of timestamps.
  • the segmenting of the plurality of ink points comprises segmenting the plurality of ink points such that the plurality of segments have equal duration.
  • alternatively or additionally, the plurality of segments may have an equal number of ink points. Improved recognition accuracy was shown to result from using one or more of these segmentation techniques.
  • generating the plurality of feature vectors based respectively on the plurality of segments comprises, for each segment of the plurality of segments corresponding to a respective sub-stroke: generating intrinsic geometric features that represent the shape of the respective sub-stroke; and generating neighborhood features that represent spatial relationships between the sub-stroke and content that neighbors the sub-stroke.
  • the content that neighbors the sub-stroke may be defined as content that intersects a window centered with respect to the sub-stroke.
  • the feature vector associated with a sub-stroke describes both the shape of the sub-stroke and the content in the neighborhood of the sub-stroke. These two types of information are complementary and allow for a highly accurate recognition of the stroke as a gesture stroke or a non-gesture stroke.
  • determining at least one scale of the electronic document comprises determining a predefined scale based on a document structure.
  • determining at least one scale of the electronic document comprises calculating the at least one scale independently of the document structure.
  • the calculated scale is calculated subsequent to receiving the data generated based on the user input, according to dimensions of the stroke.
  • generating the intrinsic geometric features comprises generating statistical sub-stroke geometric features and/or global sub-stroke geometric features for the sub-stroke.
  • the statistical sub-stroke geometric features are features derived from statistical analysis performed on individual ink point geometric features.
  • the global sub-stroke geometric features are features that represent the overall sub-stroke path (e.g., length, curvature, etc.).
  • generating the statistical sub-stroke geometric features comprises, for each intrinsic geometric feature of a set of intrinsic geometric features: determining respective values for the ink points of the segment corresponding to the respective sub-stroke; and calculating one or more statistical measures based on the determined respective values.
  • generating the global sub-stroke geometric features for the sub-stroke comprises computing one or more of: a sub-stroke length, a count of singular ink points within the sub-stroke, and a ratio between the sub-stroke length and a distance between a first and a last ink point of the sub-stroke.
  • generating the neighborhood features comprises generating one or more of: textual neighborhood features representing spatial relationships between the sub-stroke and textual content that neighbors the sub-stroke; mathematical neighborhood features representing spatial relationships between the sub-stroke and mathematical content that neighbors the sub-stroke; and non-textual neighborhood features representing spatial relationships between the sub-stroke and non-textual content that neighbors the sub-stroke.
  • normalizing a subset of the features of the feature vectors according to the at least one scale comprises defining a first subset of the intrinsic geometric features and a first subset of the neighborhood features as a first group of features which are constrained by the document structure.
  • the first group of features are normalized according to the predefined scale.
  • normalizing a subset of the features of the feature vectors according to the at least one scale comprises defining a second subset of the intrinsic geometric features and a second subset of the neighborhood features as a second group of features which are independent of the document structure.
  • the second group of features are normalized according to the calculated scale.
  • the present invention provides a computing device, comprising: a processor; and memory storing instructions that, when executed by the processor, configure the processor to carry out a method according to any of the above-described method embodiments.
  • any of the above-described method embodiments may be implemented as instructions of a computer program.
  • the present disclosure provides a computer program including instructions that when executed by a processor cause the processor to execute a method according to any of the above-described method embodiments.
  • the computer program can use any programming language and may take the form of a source code, an object code, or a code intermediate between a source code and an object code, such as a partially compiled code, or any other desirable form.
  • the computer program may be recorded on a computer-readable medium.
  • the present disclosure is also directed to a computer-readable medium having recorded thereon a computer program as described above.
  • the computer-readable medium can be any entity or device capable of storing the computer program.
  • FIG. 1 illustrates an example process 100 for recognizing gesture strokes in user input applied onto an electronic document via a touch-based user interface according to an embodiment of the present invention.
  • a gesture stroke is a stroke having particular characteristics or attributes and which is intended to realize a corresponding action on content.
  • six gesture types are defined and used. These gesture types correspond to the following actions: Scratch-out (an erase gesture having a zigzag or a scribble shape), Strike-through (an erase gesture performed with a line segment; the line segment can be horizontal, vertical, or slanted), Split (a gesture to split a single sequence of characters into two sequences of characters, or a single line into two lines, or a single paragraph into two paragraphs), Join (a gesture to join two sequences of characters into a single sequence of characters, or two lines into a single line, or two paragraphs into a single paragraph), Surround (a gesture to surround content), and Underline.
  • FIG. 2 illustrates a Split gesture stroke and a Join gesture stroke according to an example embodiment.
  • embodiments are not limited to six gesture types; more or fewer gesture types may be defined and used.
  • an add-stroke is any stroke that is not one of the defined gesture types.
  • a non-gesture stroke may correspond to content being added by the user.
  • gesture strokes are recognized in user input applied onto an electronic document via a touch-based user interface.
  • the user input may be applied by a fingertip or a stylus pen, for example, onto the touch-based user interface.
  • the electronic document may include handwritten content and/or typeset content.
  • the touch-based user interface may be of any type (e.g., resistive, capacitive, etc.) and may be an interface to a computer, a mobile device, a tablet, a game console, etc.
  • example process 100 includes steps 102, 104, 106, 108, 110, and 112. However, as further described below, in other embodiments, process 100 may include additional steps intervening between or following steps 102 to 112.
  • process 100 begins in step 102, which includes receiving data generated based on the user input applied onto the electronic document via the touch-based user interface.
  • the received data represents a stroke applied by the user and comprises a plurality of ink points and a plurality of timestamps associated respectively with the plurality of ink points.
  • the plurality of ink points are localized in a rectangular coordinate space (defined based on a screen of the touch-based user interface) with each ink point being associated with (X,Y) coordinates in the rectangular coordinate space. Therefore, each ink point is defined by a position (X,Y) and an associated timestamp (t).
  • the received data corresponds to data generated by the touch-based user interface and associated circuitry in response to capture of the stroke applied by the user. Different touch-based user interfaces may capture the stroke differently, including using different input sampling techniques, different data representation techniques, etc.
  • the received data is converted so as to generate a plurality of ink points and a respective plurality of timestamps therefrom.
  • process 100 may further include correcting the plurality of timestamps contained in the received data to generate a plurality of corrected timestamps.
  • the plurality of corrected timestamps are then associated with the plurality of ink points and used instead of the original timestamps for the remainder of process 100.
  • as explained above, correction of the plurality of timestamps removes artifacts related to device capture (e.g., successive ink points sharing the same timestamp value) and ensures that the timestamps better reflect the instants at which the ink points were drawn, thereby improving gesture recognition.
  • the correction of the plurality of timestamps is done by using a function that approximates an original timestamp curve of the plurality of ink points.
  • the approximating function may be a linear function, though embodiments are not limited as such.
  • FIG. 3 illustrates a linear function 302 which approximates an original timestamp curve 304 according to an example.
  • the original timestamp curve 304 provides for each of a plurality of ink points (numbered 1 to 163 as given by the X-axis) a corresponding timestamp (between 0 and 600 as given by the Y-axis).
  • the original timestamp curve 304 is a step function, reflecting that multiple successive ink points have the same timestamp value. As discussed before, this may be due to device capture issues.
  • the linear function 302 is a linear approximation of the original timestamp curve 304.
  • the linear function 302 is the best-fitting function to the original timestamp curve 304.
  • the linear function 302 is obtained by Least Squares fitting to the original timestamp curve 304.
  • the correction of a timestamp associated with an ink point includes modifying the timestamp associated with the ink point as provided by the original timestamp curve 304 to a corresponding value obtained by projecting the ink point onto the linear function 302.
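  • by way of illustration, this least-squares correction could be sketched as follows in Python (a minimal sketch assuming numpy; function and variable names are illustrative, as the patent does not prescribe an implementation):

```python
import numpy as np

def correct_timestamps(raw_ts: np.ndarray) -> np.ndarray:
    """Correct step-like timestamps by least-squares linear fitting.

    raw_ts holds one timestamp per ink point, in capture order; runs of
    equal values correspond to points logged in the same event batch.
    A degree-1 polynomial (a line) is fitted to the original timestamp
    curve, and each ink point is projected onto that line.
    """
    indices = np.arange(len(raw_ts))
    slope, intercept = np.polyfit(indices, raw_ts, deg=1)  # least squares
    return slope * indices + intercept

# Example: timestamps arriving in batches of 4 points every 15 ms.
raw = np.repeat(np.arange(0.0, 600.0, 15.0), 4)[:163]
corrected = correct_timestamps(raw)  # strictly increasing values
```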
  • process 100 may further include resampling the plurality of ink points to generate a second plurality of ink points and an associated second plurality of timestamps.
  • the resampling may be performed based on the original or the corrected timestamps.
  • the second plurality of ink points and the second plurality of timestamps are then used for the remainder of process 100.
  • as noted above, resampling the plurality of ink points received in step 102 ensures uniform performance across devices whose captured data differs in sampling characteristics.
  • different resampling techniques may be used: temporal, spatial, or both.
  • resampling according to a temporal frequency is used, resulting in the second plurality of timestamps being characterized by a fixed duration between consecutive timestamps.
  • the resampling comprises interpolating the plurality of ink points and associated plurality of timestamps to generate the second plurality of ink points and the associated second plurality of timestamps.
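  • as an illustration, temporal resampling by linear interpolation might look as follows (a sketch assuming numpy and corrected, strictly increasing timestamps; the 10 ms period is an assumed value):

```python
import numpy as np

def resample_temporal(x, y, t, period_ms: float = 10.0):
    """Resample ink points at a fixed temporal frequency.

    Interpolates the X and Y coordinates at uniformly spaced timestamps,
    so that consecutive resampled points are exactly period_ms apart.
    """
    t = np.asarray(t, dtype=float)
    t_new = np.arange(t[0], t[-1], period_ms)
    return np.interp(t_new, t, x), np.interp(t_new, t, y), t_new
```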
  • process 100 includes segmenting the plurality of ink points into a plurality of segments each corresponding to a respective sub-stroke of the stroke represented by the received data.
  • Each sub-stroke comprises a respective subset of the plurality of ink points representing the stroke.
  • the insight behind sub-stroke segmentation is to obtain a sequential representation that follows the path of the stroke. Each segment corresponds as such to a local description of the stroke. Compared to representing the stroke as a mere sequence of ink points, sub-stroke segmentation makes it possible to maintain path information (i.e., the relationships between ink points within each segment) and results in a reduction in computation time.
  • different sub-stroke segmentation techniques may be used according to embodiments.
  • sub-stroke segmentation based on temporal information is used, resulting in the plurality of segments having equal duration.
  • the same segment duration is used for all strokes. Further, the segment duration may be device independent.
  • FIG. 4A illustrates an example stroke 402 corresponding to an Underline gesture stroke.
  • the data corresponding to stroke 402 is resampled according to a temporal frequency resulting in ink points 404 with a fixed duration between consecutive timestamps.
  • the resampled ink points 404 are then split into sub-strokes of equal segment duration, defined by ink points 406.
  • alternatively, the stroke 402 may be split into segments having an equal number of ink points, as shown in FIG. 4A.
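  • a minimal sketch of equal-duration segmentation (assuming numpy and temporally resampled input; the 50 ms segment duration is an assumed, device-independent value):

```python
import numpy as np

def segment_by_duration(t: np.ndarray, segment_ms: float = 50.0):
    """Split ink point indices into sub-strokes of equal duration.

    Returns a list of index arrays, one per segment. With input resampled
    at a fixed temporal frequency, equal duration also implies an equal
    number of ink points per segment.
    """
    bins = ((t - t[0]) // segment_ms).astype(int)  # segment id per ink point
    return [np.flatnonzero(bins == b) for b in range(int(bins.max()) + 1)]
```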
  • process 100 includes determining at least one scale of the electronic document.
  • the at least one scale is determined by a document structure for guidance of the user input.
  • the document structure may be defined by guides and/or constraints set in the electronic document.
  • the guides and/or constraints may be set by the user and may have associated default values.
  • the at least one scale may be defined, independently from the received data, by predetermined parameters, for example: dimensions of a line pattern for inputting text, including a line gap distance and a column width; predetermined positions for inputting mathematical equation components, including a line-gap distance and subscript-gap and superscript-gap distances; or a constrained canvas for inputting shapes, in which users are guided to adhere to an alignment structure, such as a grid pattern background or the like.
  • a scale based on a predetermined parameter of the document structure is referred to as a predefined scale.
  • FIG. 4B shows a schematic view of an example visual rendering of an ink input or capture area 10 on a portion of the input surface 40 of an example computing device.
  • the input area 40 is provided such that an alignment structure in the form of a line pattern background 41 provides a document structure for guidance of user input and for the alignment of digital and typeset ink objects.
  • An example alignment structure is described in United States Patent Application No. 14/886,195, titled "System and Method of Digital Note Taking", filed in the name of the present Applicant and Assignee, the entire content of which is incorporated by reference herein.
  • the line pattern has horizontal lines 41 separated by a vertical distance defined by a line pattern unit (LPU).
  • a vertical rhythm height unit provides a graduated measure of the LPU on a particular device.
  • the vertical rhythm height unit may be based on the density independent pixel (dp).
  • the LPU may be set at about one centimeter for any device, one centimeter being a certain multiple of the vertical rhythm height unit depending on the device.
  • a user may be allowed to customize the LPU to a different multiple of the vertical rhythm height unit according to their writing style.
  • the vertical rhythm height unit may be based on a typeset text size (e.g., the minimum text size), and the LPU is provided as a multiple of this typeset text size.
  • All lines 41 are displayed with the same light and subtle color, e.g., grey, that is visible but faded with respect to the rendering of the content itself. In this way the line pattern is noticeable but unobtrusive so as to guide the handwriting input without distracting from the content entry.
  • determining the at least one scale of the electronic document comprises determining the at least one scale based on parameters independent from the document structure.
  • the parameters may be based on dimensions of a stroke applied by the user.
  • a bounding box 408 of the stroke 402 may be used to determine the at least one scale.
  • the bounding box 408 is defined by a vertical distance y between the top and bottom horizontal sides of the bounding box and by a horizontal distance x between the left and right vertical sides of the bounding box.
  • the x distance and/or the y distance could be used for scale calculation according to embodiments.
  • a scale based on a dimension of an input stroke, independent from the document structure, is referred to as a calculated scale.
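  • for example, a calculated scale could be derived from the stroke's bounding box as sketched below (assuming numpy; the preference for the y distance is illustrative, since embodiments may use the x distance, the y distance, or both):

```python
import numpy as np

def calculated_scale(xs: np.ndarray, ys: np.ndarray) -> float:
    """Derive a scale from the dimensions of the stroke's bounding box."""
    x_dist = xs.max() - xs.min()   # horizontal side of the bounding box
    y_dist = ys.max() - ys.min()   # vertical side of the bounding box
    return float(y_dist if y_dist > 0 else max(x_dist, 1e-9))
```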
  • a user may handwrite in free-mode, i.e. without any constraints of lines to follow or input size to comply with (e.g. on a blank page).
  • users may input handwriting that is not closely aligned to the line pattern, or may desire to ignore the line pattern and write in an unconstrained manner, such as diagonally or haphazardly; in such cases, the recognition of the handwriting input is performed by the HWR system 114 without regard to the line pattern.
  • the input area 40 may be provided as a constraint-free canvas that allows users to create object blocks (blocks of text, drawings, etc.) anywhere without worrying about sizing or alignment.
  • the user input may be diagrams or any other content of text, non-text, or mixed content of text and non-text.
  • process 100 includes generating a plurality of feature vectors based respectively on the plurality of segments.
  • each feature vector comprises features of a respective segment of the plurality of segments.
  • step 108 includes, for each segment of the plurality of segments which corresponds to a respective sub-stroke of the stroke, generating features of the segments including: generating intrinsic geometric features that represent the shape of the respective sub-stroke, the intrinsic geometric features computed from the respective subset of the plurality of ink points associated with the sub-stroke; and generating neighborhood features that represent spatial relationships between the sub-stroke and content that neighbors the sub-stroke, the neighborhood features computed from a relationship between the respective subset of the plurality of ink points associated with the sub-stroke and content that neighbors the sub-stroke.
  • the content that neighbors the sub-stroke is content that intersects a window centered with respect to the sub-stroke.
  • the window size may be configured in various ways. In one embodiment, the window size is set proportionally to the mean height of characters and/or symbols in the electronic document. In another embodiment, if the electronic document contains no characters or symbols, the window size is set proportionally to the size of the touch-based user interface (which may correspond to the screen size of the device).
  • generating the intrinsic geometric features associated with a segment or sub-stroke includes generating statistical sub-stroke geometric features and/or global sub-stroke geometric features.
  • the statistical sub-stroke geometric features are features derived from statistical analysis performed on individual ink point geometric features.
  • a set of individual geometric features of interest, to be computed per ink point of the segment, is defined. This set may include stroke relative geometric features.
  • the set of stroke relative geometric features may describe, for example, geometric relationships between the (current) ink point in the segment and any other ink point of the stroke, for example the first ink point in the stroke and/or a center of gravity of the stroke (obtained by averaging the X and Y coordinates of the ink points of the stroke). These relationships may be represented by the projections "dx_s" and "dy_s" on the X and Y axes, respectively, of the distance between the current ink point and the first ink point in the stroke (shown in FIG. 5C), and by the projections "dx_g" and "dy_g" on the X and Y axes, respectively, of the distance between the current ink point and the center of gravity of the stroke (shown in FIG. 5D).
  • the set of individual geometric features may include local geometric features.
  • the set of local geometric features may describe, for example, geometric relationships between the (current) ink point in the segment and any other ink point of the segment. These relationships may be represented by the absolute distance "ds" between the current ink point and the previous ink point in the segment (shown in FIG. 5A); the projections "dx" and "dy" on the X and Y axes, respectively, of the distance "ds" (shown in FIG. 5A); and a measure of the curvature at the current ink point, represented in an embodiment illustrated in FIG. 5B by the values cos θ, sin θ, and θ, where θ is the angle formed between the line connecting the previous ink point to the current ink point and the line connecting the current ink point to the next ink point.
  • each such feature is evaluated over all ink points of the segment (where appropriate) to determine respective values for the ink points of the segment. Then, one or more statistical measures are calculated, for each feature, based on the determined respective values corresponding to the feature. In an embodiment, for each feature, the minimum value, the maximum value, and the median value are obtained based on the determined respective values corresponding to the feature.
  • the one or more statistical measures, computed over all features of the set of individual geometric features correspond to the statistical sub-stroke geometric features for the sub-stroke.
  • the global sub-stroke geometric features are features based on the overall sub-stroke path (e.g., length as a summation of consecutive ink point distances, curvature as an average of curvature computed from consecutive ink points, etc.).
  • generating the global sub-stroke geometric features for a sub-stroke comprises computing one or more of: a sub-stroke length, a count of singular ink points (such as inflection points and/or crossing points (a crossing point being a point where the stroke intersects itself)), and a count of maximum or minimum curvature ink points determined according to a maximum or a minimum curvature value within the sub-stroke.
  • generating the global sub-stroke geometric features may also comprise computing a ratio between the sub-stroke length and the distance between the first and last ink points of the sub-stroke.
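  • the computation of intrinsic geometric features for one sub-stroke could be sketched as follows (assuming numpy; the per-point features and statistics shown are a subset of those described above):

```python
import numpy as np

def intrinsic_features(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Statistical and global intrinsic geometric features of a sub-stroke.

    Statistical part: min, max, and median of the per-point values dx, dy,
    and ds. Global part: the sub-stroke length and its ratio to the
    distance between the first and last ink points.
    """
    dx, dy = np.diff(xs), np.diff(ys)
    ds = np.hypot(dx, dy)                       # per-point step distances
    stats = []
    for values in (dx, dy, ds):                 # statistical features
        stats += [values.min(), values.max(), np.median(values)]
    length = ds.sum()                           # global: sub-stroke length
    chord = max(np.hypot(xs[-1] - xs[0], ys[-1] - ys[0]), 1e-9)
    return np.array(stats + [length, length / chord])
```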
  • the intrinsic geometric features associated with a segment or sub-stroke includes both statistical sub-stroke geometric features and global sub-stroke geometric features determined based on the sub-stroke.
  • Intrinsic geometric features are features computed from the respective subset of the plurality of ink points associated with the sub-stroke, i.e., they describe a property inherent to the sub-stroke itself.
  • the neighborhood features associated with a segment or sub-stroke represent spatial relationships between the sub-stroke and content that neighbors the sub-stroke. This information is useful to eliminate ambiguity between different gesture types. For example, as shown in FIG. 6, a Strike-through gesture stroke and an Underline gesture stroke can have similar shapes and, as such, similar intrinsic geometric features. However, when the position of the stroke relative to its neighboring content is considered (i.e., whether or not the stroke is below the baseline of the characters or words), distinguishing between the two gesture types becomes much easier.
  • generating the neighborhood features comprises generating one or more of: textual neighborhood features, representing spatial relationships between the sub-stroke and neighboring textual content; mathematical neighborhood features, representing spatial relationships between the sub-stroke and neighboring mathematical content; and non-textual neighborhood features, representing spatial relationships between the sub-stroke and neighboring non-textual content.
  • these types of neighborhood features are independent of one another. Each type may have its own fixed number of features.
  • FIG. 7 illustrates an example approach for generating textual neighborhood features for a sub-stroke according to an embodiment of the present invention.
  • the approach includes selecting a neighborhood window centered at the sub-stroke and then dividing the neighborhood window into four regions around the sub-stroke center. The four regions may be determined by the intersecting diagonals of the neighborhood window.
  • the four closest characters and/or the four closest words located at the left, right, top, and bottom of the sub-stroke are identified.
  • a text recognizer, for example as described in US 9,875,254 B2, may be used to identify the closest characters and/or words.
  • in the example of FIG. 7, the selected neighborhood window contains characters only, and as such only characters are identified: specifically, a left character, a top character, and a right character.
  • for each identified character or word, the group of features includes: the distance between the center of the sub-stroke and the center of the identified character or word (the center of the identified character or word being the center of its bounding box); the projections of said distance on the X and Y axes, respectively; the distance between the center of the sub-stroke and a baseline of the identified characters or words; and the distance between the center of the sub-stroke and a midline of the identified characters or words.
  • the baseline is the imaginary line upon which a line of text rests.
  • the midline is the imaginary line at which all non-ascending letters stop.
  • the baseline and the midline are determined and provided by a text recognizer to the gesture recognizer.
  • when no character or word is identified in a given region (e.g., no bottom character or word in the example of FIG. 7), default values are used for the textual neighborhood features corresponding to that region.
  • the neighborhood window is not limited to a square window as shown in FIG. 7 and may be rectangular. Further, the neighborhood window may be divided into more or fewer than four regions in other embodiments. As such, more or fewer than four closest characters and/or four closest words may be identified.
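  • the assignment of neighboring content to the four regions, and the corresponding distance features, could be sketched as follows (assuming numpy; baseline and midline distances are omitted for brevity, and the default value of -1.0 is an assumption):

```python
import numpy as np

def region_of(center, point) -> str:
    """Assign a neighboring element to a region around the sub-stroke.

    The window's intersecting diagonals split the plane around the
    sub-stroke center into left, right, top, and bottom regions.
    """
    dx, dy = point[0] - center[0], point[1] - center[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "bottom" if dy > 0 else "top"        # screen Y grows downward

def textual_features(center, char_centers, default: float = -1.0) -> dict:
    """Per-region distance features to the closest character center."""
    feats = {r: [default, default, default]
             for r in ("left", "right", "top", "bottom")}
    best = {r: np.inf for r in feats}
    for cx, cy in char_centers:                 # bounding-box centers
        r = region_of(center, (cx, cy))
        d = float(np.hypot(cx - center[0], cy - center[1]))
        if d < best[r]:
            best[r] = d
            feats[r] = [d, cx - center[0], cy - center[1]]
    return feats
```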
  • Mathematical neighborhood features and non-textual neighborhood features for a sub-stroke may also be generated according to the above-described approach, with mathematical or non-textual content identified instead of textual content.
  • the closest mathematical symbols to the sub-stroke are identified.
  • a math symbol recognizer, for example as described in WO 2017/008896 A1, may be used to identify the closest mathematical symbols.
  • the features determined per identified symbol may include the projections on the X and Y axes of the distance between the center of the sub-stroke and the center of the symbol. As above, when a region does not include a mathematical symbol, the corresponding features are set to default values.
  • the closest shapes and primitives (parts of shapes) to the sub-stroke are identified.
  • a shape recognizer, for example as described in WO 2017/067652 A1 or WO 2017/067653 A1, may be used to identify the closest shapes and primitives.
  • the features determined per identified shape or primitive may include the distance between the center of the sub-stroke and the center of the shape or primitive. As above, when a region does not include a shape or primitive, the corresponding features are set to default values.
  • the feature vector associated with a segment or sub-stroke includes both intrinsic geometric features and neighborhood features as described above.
  • the feature vector thus describes both the shape of the sub-stroke and the content in the neighborhood of the sub-stroke.
  • the entire stroke is represented by a plurality of successive feature vectors (each vector corresponding to a respective sub-stroke of the stroke).
  • process 100 includes normalizing at least one group of the features of the feature vectors according to the at least one scale.
  • Each feature of the feature vectors described above may be normalized by at least one scale, which may be a predefined scale or a calculated scale.
  • a feature of the at least one group of features of the feature vectors is normalized by dividing its feature value by a scale value of the at least one scale.
  • the features of the feature vectors may be part of a first group G1 or a second group G2.
  • the features of the feature vectors belonging to G1 are normalized according to a first scale.
  • a first subset of the intrinsic geometric features and/or a first subset of the neighborhood features are included in the first group G1.
  • the first subset of the intrinsic geometric features may be individual geometric features.
  • the first subset of the neighborhood features may be character relative geometric features.
  • the first group of features G1 may comprise features which are constrained by the document structure (e.g., line gap distance, column width, grid pattern canvas), which is defined uniformly throughout the document and influences the user's handwritten input. The features of the first group G1 are therefore normalized according to the predefined scale. As illustrated in FIG. 8A, the features of the feature vector defining the second group of features, referred to as G2, are normalized according to a second scale. A second subset of the intrinsic geometric features and a second subset of the neighborhood features are included in the second group G2: the second subset of the intrinsic geometric features are stroke relative geometric features, and the second subset of the neighborhood features are the point relative geometric features.
  • the second group G2 may comprise features describing characteristics inherent to user input style and independent of the document structure.
  • User input style is not constrained by the document structure, because it reflects the variability of the user's handwriting and the individual contours of characters or symbols. The features of the second group G2 are therefore normalized according to the calculated scale.
  • features of the feature vectors included in the first group G1 or the second group G2 that have been set to default values, as described above, are likewise normalized according to the group to which they belong.
  • some features of the feature vectors may be excluded from both the first group G1 and the second group G2, such as global sub-stroke geometric features, symbol relative geometric features, and shape relative geometric features.
  • the features of the feature vectors excluded from a defined group are not normalized and are used with their initial calculated values.
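  • the group-wise normalization could be sketched as follows (assuming numpy; the index lists and scale values are illustrative):

```python
import numpy as np

def normalize_features(vec, g1_idx, g2_idx,
                       predefined_scale: float, calculated_scale: float):
    """Normalize feature groups by their respective scales.

    Features in G1 (constrained by the document structure) are divided by
    a predefined scale such as the line gap distance; features in G2
    (dependent on user input style) are divided by a calculated scale such
    as a stroke bounding-box dimension. Features in neither group keep
    their initially calculated values.
    """
    out = np.asarray(vec, dtype=float).copy()
    out[g1_idx] /= predefined_scale
    out[g2_idx] /= calculated_scale
    return out

# Example: an LPU of 10 mm as the predefined scale and a bounding-box
# height of 20 mm as the calculated scale.
print(normalize_features([12.0, 3.5, 40.0, 7.0], [0, 1], [2], 10.0, 20.0))
```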
  • process 100 includes applying the plurality of feature vectors as an input sequence representing the stroke to a trained stroke classifier to generate a vector of probabilities, comprising a probability that the stroke is a non-gesture stroke and a probability that the stroke is a given gesture type of a set of gesture types.
  • the set of gesture types includes pre-defined gesture types such as Scratch-out, Strike-through, Split, Join, Surround, and Underline.
  • step 112 may include determining the respective probabilities that the stroke is a gesture stroke for all gesture types of the set of gesture types (e.g., the probability that the stroke is a Scratch-out gesture stroke, the probability that the stroke is a Strike-through gesture stroke, etc.).
  • FIG. 8B illustrates an example stroke classifier 800 according to an embodiment of the present invention.
  • the stroke classifier is trained before use for inference.
  • An example approach which can be used to train the stroke classifier is described further below.
  • example stroke classifier 800 includes a recurrent Bidirectional Long Short-Term Memory (BLSTM) neural network 802.
  • Neural network 802 includes backward layers 804 and forward layers 806.
  • Detailed descriptions of the functions that may be used for backward layers 804 and forward layers 806 can be found in: Graves, A., & Schmidhuber, J. (2005), "Framewise phoneme classification with bidirectional LSTM and other neural network architectures", Neural Networks, 18(5-6), 602-610; Hochreiter, S., & Schmidhuber, J. (1997), "Long Short-Term Memory", Neural Computation, 9(8), 1735-1780; and Gers, F., Schraudolph, N., & Schmidhuber, J. (2002), "Learning Precise Timing with LSTM Recurrent Networks", Journal of Machine Learning Research, 3, 115-143.
  • the use of a recurrent BLSTM neural network means that the network includes memory blocks which enable it to learn long-term dependencies and to remember information over time.
  • this network permits the stroke classifier to handle a sequence of vectors (an entire stroke) and to account for the temporal dependencies between successive sub-strokes (i.e., to remember the details of the path of the stroke).
  • example stroke classifier 800 includes an output layer 808 configured to generate a set of probabilities 810-1, 810-2, ..., 810-k based on the outputs of backward layers 804 and forward layers 806.
  • output layer 808 may be implemented using a cross-entropy objective function and a softmax activation function, which is a standard implementation for 1-of-K classification tasks. A detailed description of such an implementation can be found, for example, in C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Inc., 1995.
  • each feature vector ti (representing a sub-stroke) includes a geometric descriptor (corresponding to the intrinsic geometric features described above) and a neighborhood descriptor (corresponding to the neighborhood features, including the textual, mathematical, and non-textual neighborhood features described above).
  • the input sequence is fed into neural network 802 both forwards and backwards by virtue of the bi-directionality of network 802.
  • the input sequence is fed in its original order (i.e., t0, then t1, then t2, etc.) to forward layers 806, and in the reverse order (i.e., tn, then tn-1, then tn-2, etc.) to backward layers 804.
  • This permits the network 802 to process the stroke data both by considering previous information (information relating to past sub-strokes) and by considering following information (information relating to next sub-strokes).
  • Output layer 808 receives the outputs of backward layers 804 and forward layers 806 and generates the set of probabilities 810-1, 810-2, ..., 810-k.
  • output layer 808 sums up the activation levels from both layers 804 and 806 to obtain the activation levels of nodes of output layer 808.
  • the activation levels of the nodes of output layer 808 are then normalized to add up to 1. As such, they provide a vector with the set of probabilities 810-1, 810-2, ..., 810-k.
  • probability 810-1 corresponds to the probability that the stroke is an add-stroke, i.e., a non-gesture stroke.
  • Probabilities 810-2, ..., 810-k each corresponds to a respective probability that the stroke is a respective gesture stroke of the set of gesture types.
  • the gesture is recognized as being a particular gesture stroke (e.g., Underline) if the probability associated with the particular gesture stroke represents the maximum probability among the set of probabilities 810-1, 810-2, ..., 810-k. Otherwise, if the probability associated with a non-gesture stroke is the maximum, the stroke will be considered a non-gesture stroke or an add-stroke.
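  • a minimal sketch of such a classifier and decision rule, written here with PyTorch (an assumption, as the patent does not specify a framework; hyperparameters are illustrative, and the sequence is summarized by the final time step rather than by the per-node activation summing described above):

```python
import torch
import torch.nn as nn

class StrokeClassifier(nn.Module):
    """BLSTM stroke classifier sketch.

    Input: a (batch, sub-strokes, features) sequence of feature vectors.
    Output: a probability vector whose first entry is the non-gesture
    (add-stroke) class, followed by one entry per gesture type.
    """
    def __init__(self, n_features: int, n_gesture_types: int = 6,
                 hidden: int = 64):
        super().__init__()
        self.blstm = nn.LSTM(n_features, hidden,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1 + n_gesture_types)

    def forward(self, sub_strokes: torch.Tensor) -> torch.Tensor:
        h, _ = self.blstm(sub_strokes)        # (batch, seq, 2 * hidden)
        logits = self.out(h[:, -1, :])        # summarize the sequence
        return torch.softmax(logits, dim=-1)  # probabilities summing to 1

model = StrokeClassifier(n_features=32)
probs = model(torch.randn(1, 20, 32))         # 20 sub-strokes, 32 features
predicted = probs.argmax(dim=-1)              # 0 means non-gesture stroke
```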
  • the stroke classifier is trained based on a set of training data specifically tailored for the stroke recognition task.
  • the training data includes both gesture strokes of a set of gesture types (e.g., Underlines, Strike-throughs, etc.) and non-gesture strokes (e.g., text, math symbols, non-text strokes).
  • the training data includes gesture strokes and non-gesture strokes further classified according to a specific neighborhood content, as described above, including textual neighborhood, mathematical neighborhood and non-textual neighborhood.
  • specific neighborhood contents may be collected from real note samples or may be simulated content such as, for example, simulated typeset text in the neighborhood of a gesture stroke.
  • the training data is built by imitating real use cases. Specifically, using a dedicated protocol for data collection, users are asked to copy notes (the original notes can be handwritten or typeset) to generate handwritten electronic notes.
  • An example original note and a handwritten electronic copy thereof created by a user are shown in FIGs. 9A and 9B respectively.
  • the user is shown another version of the original note with additional strokes applied (the additional strokes may be applied to different types of content in the note) and is asked to reproduce this version.
  • FIG. 9C illustrates another version of the original note of FIG. 9A with some content highlighted.
  • in FIG. 9D, the user reproduces this version by double-underlining the highlighted content.
  • the stroke data is captured as the user reproduces the modified content to be used in training.
  • to cover a wide variety of use cases, the training data may include notes with various layouts (simple, multi-column, with or without separators, with or without title, etc.) and various content types (text, tables, diagrams, equations, geometry, etc.).
  • various languages and scripts may also be used. For example, users of different countries may be invited to copy notes in their native languages and to perform strokes on these notes.
  • the data may further be collected on different touch-based devices (e.g., iPad, Surface, etc.) having different ink capture characteristics (e.g., different sampling rates, different timestamp generation methods, different pressure levels applied, etc.).
  • the training data may also include notes generated in order to train the stroke classifier to perform on typeset documents.
  • these notes are generated by converting the produced handwritten notes into typeset versions by replacing each ink element (character, symbol, shape, or primitive) in the handwritten note with a respective typeset model that corresponds to the path of the ink element.
  • each typeset model is rescaled to fit into a bounding box of the original ink element, and is then positioned with respect to the baseline and the center of the corresponding ink element.
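  • the rescaling of a typeset model into the bounding box of the original ink element could be sketched as follows (assuming numpy; representing the model as an (N, 2) array of outline points is an illustrative choice):

```python
import numpy as np

def fit_to_bbox(model_pts: np.ndarray, bbox) -> np.ndarray:
    """Rescale a typeset model's points into an ink element's bounding box.

    bbox is (x_min, y_min, x_max, y_max) of the original handwritten
    element; the model is normalized to the unit square and then mapped
    affinely into the bounding box.
    """
    x_min, y_min, x_max, y_max = bbox
    lo, hi = model_pts.min(axis=0), model_pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # avoid division by zero
    unit = (model_pts - lo) / span              # normalize to [0, 1]^2
    size = np.array([x_max - x_min, y_max - y_min])
    return unit * size + np.array([x_min, y_min])
```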
  • FIG. 10 illustrates an example handwritten note and a corresponding typeset version generated according to this approach.
  • the stroke data captured for the handwritten notes is then applied onto the respective typeset versions.
  • FIG. 11 illustrates a computer device 1100 which may be used to implement embodiments of the present invention.
  • computer device 1100 includes a processor 1102, a read-only memory (ROM) 1104, a random-access memory (RAM) 1106, a non-volatile memory 1108, and communication means 1110.
  • the ROM 1104 of the computer device 1100 may store a computer program including instructions that when executed by processor 1102 cause processor 1102 to perform a method of the present invention. The method may include one or more of the steps described above in FIG. 1 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Character Discrimination (AREA)

Claims (15)

  1. Procédé pour la reconnaissance de traits gestuels dans une entrée d'utilisateur appliquée sur un document électronique via une interface utilisateur tactile, comprenant :
    réception (102) de données générées sur la base de l'entrée d'utilisateur, les données représentant un trait comprenant une pluralité de points d'encre dans un espace de coordonnées orthogonales et une pluralité d'horodatages associés respectivement à la pluralité de points d'encre ;
    segmentation (104) de la pluralité de points d'encre en une pluralité de segments correspondant chacun à un sous-trait respectif du trait et comprenant un sous-ensemble respectif de la pluralité de points d'encre ;
    détermination (106) d'au moins une échelle du document électronique ;
    génération (108) d'une pluralité de vecteurs caractéristiques basés respectivement sur la pluralité de segments, les vecteurs caractéristiques comprenant des caractéristiques des segments ;
    normalisation (110) d'un sous-ensemble des caractéristiques des vecteurs caractéristiques selon l'au moins une échelle ;
    et
    application (112) de la pluralité de vecteurs caractéristiques comme une séquence d'entrée représentant le trait sur un classificateur de trait entraîné pour générer un vecteur de probabilités comportant une probabilité que le trait soit un trait non gestuel et une probabilité que le trait soit un trait gestuel donné d'un ensemble de types gestuels.
  2. Procédé selon la revendication 1, dans lequel l'au moins une échelle est prédéfinie sur la base d'une structure de document.
  3. Procédé selon la revendication 1, dans lequel l'au moins une échelle est calculée indépendamment d'une structure de document.
  4. Procédé selon la revendication 3, dans lequel l'au moins une échelle est calculée, à la suite de la réception desdites données générées sur la base de l'entrée d'utilisateur, selon les dimensions du trait.
  5. Procédé selon l'une quelconque des revendications 1 à 4, dans lequel la génération de la pluralité de vecteurs caractéristiques basés respectivement sur la pluralité de segments comprend, pour chaque segment de la pluralité de segments correspondant à un sous-trait respectif :
    génération de caractéristiques géométriques intrinsèques qui représentent la forme du sous-trait respectif, les caractéristiques géométriques intrinsèques étant calculées à partir du sous-ensemble respectif de la pluralité de points d'encre associés au sous-trait ; et
    génération de caractéristiques de voisinage qui représentent les relations spatiales entre le sous-trait et un contenu qui est voisin du sous-trait, les caractéristiques de voisinage étant calculées à partir d'une relation entre le sous-ensemble respectif de la pluralité de points d'encre associés au sous-trait et un contenu qui est voisin du sous-trait,
    dans lequel le contenu qui est voisin du sous-trait fait intersection avec une fenêtre centrée par rapport au sous-trait.
  6. Procédé selon la revendication 5, dans lequel la génération des caractéristiques géométriques intrinsèques comprend la génération de caractéristiques géométriques de sous-trait statistiques et/ou de caractéristiques géométriques de sous-trait globales pour le sous-trait.
  7. Procédé selon la revendication 6, dans lequel la génération des caractéristiques géométriques de sous-trait statistiques comprend, pour chaque caractéristique géométrique d'un ensemble de caractéristiques géométriques intrinsèques :
    détermination de valeurs respectives pour les points d'encre du segment correspondant au sous-trait respectif ; et
    calcul d'une ou de plusieurs mesures statistiques basées sur les valeurs respectives déterminées.
  8. Procédé selon l'une quelconque des revendications 6 à 7, dans lequel la génération des caractéristiques géométriques de sous-trait globales pour le sous-trait comprend le calcul de l'un ou de plusieurs parmi : une longueur de sous-trait, un comptage de points d'encre uniques au sein du sous-trait, et un rapport entre la longueur de sous-trait et une distance entre un premier et un dernier point d'encre du sous-trait.
  9. Procédé selon l'une quelconque des revendications 5 à 8, dans lequel la génération des caractéristiques de voisinage comprend la génération de l'une ou de plusieurs parmi :
    des caractéristiques de voisinage textuelles représentant les relations spatiales entre le sous-trait et un contenu textuel qui est voisin du sous-trait ;
    des caractéristiques de voisinage mathématiques représentant les relations spatiales entre le sous-trait et un contenu mathématique qui est voisin du sous-trait ; et
    des caractéristiques de voisinage non textuelles représentant les relations spatiales entre le sous-trait et un contenu non textuel qui est voisin du sous-trait.
  10. The method according to any one of claims 5 to 9, wherein a first subset of the intrinsic geometric features and a first subset of the neighborhood features define a first feature group comprising features of the feature vectors that are constrained by the document structure.
  11. The method according to claim 10, wherein the features of the first feature group are normalized according to a predefined scale.
  12. The method according to any one of claims 10 to 11, wherein a second subset of the intrinsic geometric features and a second subset of the neighborhood features define a second feature group comprising features of the feature vectors that are independent of the document structure.
  13. The method according to claim 12, wherein the features of the second feature group are normalized according to a computed scale that is independent of the document structure.
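Claims 10 to 13 split the feature vector into a structure-constrained group normalized by a predefined scale and a structure-independent group normalized by a scale computed from the ink itself. In the sketch below, the predefined scale is taken to be the document's guide-line height and the computed scale the sub-stroke's bounding-box diagonal; both choices are assumptions for illustration.

```python
import math

def scaled(features, scale):
    """Normalize one feature group by dividing every feature by its scale."""
    return [f / scale for f in features] if scale > 0.0 else list(features)

def normalized_vector(structure_constrained, structure_independent,
                      line_height, points):
    """Claims 10-13 shaped normalization: the structure-constrained group
    uses a predefined scale (here the guide-line height), while the
    structure-independent group uses a scale computed from the ink itself
    (here the sub-stroke's bounding-box diagonal)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    computed_scale = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return scaled(structure_constrained, line_height) + \
           scaled(structure_independent, computed_scale)
```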
  14. A computing device, comprising:
    a processor (1102); and
    memory (1104) storing instructions which, when executed by the processor (1102), configure the processor (1102) to carry out a method according to any one of claims 1 to 13.
  15. A computer program comprising instructions which, when executed by a processor (1102), cause the processor (1102) to execute a method according to any one of claims 1 to 13.
EP21305574.2A 2021-05-04 2021-05-04 Gesture stroke recognition in touch-based user interface input Active EP4086744B1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21305574.2A EP4086744B1 (fr) 2021-05-04 2021-05-04 Gesture stroke recognition in touch-based user interface input
PCT/EP2022/060926 WO2022233628A1 (fr) 2021-05-04 2022-04-25 Gesture stroke recognition in touch-based user interface input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21305574.2A EP4086744B1 (fr) 2021-05-04 2021-05-04 Gesture stroke recognition in touch-based user interface input

Publications (2)

Publication Number Publication Date
EP4086744A1 (fr) 2022-11-09
EP4086744B1 (fr) 2024-02-21

Family

ID=75904853

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21305574.2A Active EP4086744B1 (fr) 2021-05-04 2021-05-04 Gesture stroke recognition in touch-based user interface input

Country Status (2)

Country Link
EP (1) EP4086744B1 (fr)
WO (1) WO2022233628A1 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393395B1 (en) * 1999-01-07 2002-05-21 Microsoft Corporation Handwriting and speech recognizer using neural network with separate start and continuation output scores
US20040054701A1 (en) * 2002-03-01 2004-03-18 Garst Peter F. Modeless gesture driven editor for handwritten mathematical expressions
FR2880709B1 (fr) 2005-01-11 2014-04-25 Vision Objects Method for searching, recognizing and locating in ink, and corresponding device, program and language
US9904847B2 (en) 2015-07-10 2018-02-27 Myscript System for recognizing multiple object input and method and product for same
US10643067B2 (en) 2015-10-19 2020-05-05 Myscript System and method of handwriting recognition in diagrams
US10417491B2 (en) 2015-10-19 2019-09-17 Myscript System and method for recognition of handwritten diagram connectors

Also Published As

Publication number Publication date
EP4086744A1 (fr) 2022-11-09
WO2022233628A1 (fr) 2022-11-10

Similar Documents

Publication Publication Date Title
US10664695B2 (en) System and method for managing digital ink typesetting
JP4745758B2 (ja) Spatial recognition and grouping of text and graphics
Kara et al. Hierarchical parsing and recognition of hand-sketched diagrams
US7369702B2 (en) Template-based cursive handwriting recognition
US5396566A (en) Estimation of baseline, line spacing and character height for handwriting recognition
US5454046A (en) Universal symbolic handwriting recognition system
KR102677200B1 (ko) Gesture stroke recognition in touch-based user interface input
Kumar et al. A lexicon-free approach for 3D handwriting recognition using classifier combination
JPH06348904A (ja) Handwritten character recognition system and recognition method
US10579868B2 (en) System and method for recognition of objects from ink elements
CN108369637A (zh) System and method for beautifying digital ink
JPH06301781A (ja) Image transformation method and apparatus for pattern recognition by computer
EP3491580B1 (fr) Système et procédé pour embellir une encre numérique superposée
Singh et al. Online handwritten Gurmukhi words recognition: An inclusive study
EP4086744B1 (fr) Gesture stroke recognition in touch-based user interface input
US20240231582A9 (en) Modifying digital content including typed and handwritten text
CN115311674A (zh) Handwriting processing method and apparatus, electronic device, and readable storage medium
Nouboud et al. A structural approach to on-line character recognition: System design and applications
KR101667910B1 (ko) Method and apparatus for generating digital artificial handwriting data, and computer program stored in a computer-readable medium
Ford On-line recognition of connected handwriting
Korovai et al. Handwriting Enhancement: Recognition-Based and Recognition-Independent Approaches for On-device Online Handwritten Text Alignment
WO2024110354A1 (fr) Defining font size in an unconstrained canvas
Alkholy Arabic optical character recognition using local invariant features
Kara Sketch Understanding for Engineering Software
Stria Online Handwritten Mathematical Formulae Recognition

Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
STAA: Status: the application has been published
AK: Designated contracting states, kind code A1: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
STAA: Status: request for examination was made
17P: Request for examination filed (effective 2023-05-03)
RBV: Designated contracting states (corrected): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
GRAP: Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
STAA: Status: grant of patent is intended
RIC1: IPC codes assigned before grant: G06F 3/0488 (2022.01) AFI; G06F 40/171 (2020.01) ALI; G06V 30/142 (2022.01) ALI
INTG: Intention to grant announced (effective 2023-09-21)
RIN1: Information on inventors provided before grant (corrected): NAYEF, NIBAL; ROY, UDIT
GRAS: Grant fee paid (original code: EPIDOSNIGR3)
GRAA: (Expected) grant (original code: 0009210)
STAA: Status: the patent has been granted
AK: Designated contracting states, kind code B1: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG: References to national codes: GB FG4D; CH EP; IE FG4D; DE R096 (ref. document 602021009592); LT MG9D; NL MP (effective 2024-02-21); AT MK05 (ref. document 1659676, kind code T, effective 2024-02-21)
PG25: Lapsed in a contracting state for failure to submit a translation of the description or to pay the fee within the prescribed time limit (announced via postgrant information from national office to the EPO): AT, BG, CZ, DK, EE, ES, FI, HR, LT, LV, NL, PL, SE, SK, SM (effective 2024-02-21); NO, RS (effective 2024-05-21); GR (effective 2024-05-22); IS, PT (effective 2024-06-21)
PGFP: Annual fee paid to national office: DE, payment date 2024-05-21 (year of fee payment: 4); FR, payment date 2024-05-27 (year of fee payment: 4)