EP4287937A1 - Quantifying and visualizing changes over time to health and wellness - Google Patents
Quantifying and visualizing changes over time to health and wellness
- Publication number
- EP4287937A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- images
- image
- data
- aesthetic
- computing system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
- A61B5/442—Evaluating skin mechanical properties, e.g. elasticity, hardness, texture, wrinkle assessment
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6887—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
- A61B5/6898—Portable consumer electronic devices, e.g. music players, telephones, tablet computers
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
Definitions
- the present disclosure relates to methods, techniques, and systems for quantifying and visualizing changes to aesthetic health and wellness and, in particular, to methods, techniques, and systems for scoring and visualizing changes to aesthetic appearance over time.
- Aesthetic medicine has been plagued by a difficulty in objectively assessing the effectiveness of procedures such as plastic surgery, chemical injections, and the like to improve aesthetic health and wellness.
- Visual assessments generally are both difficult to quantify and difficult to view over a period of time.
- general and specific group population data is lacking and there is no concept of a “norm” to compare an individual’s results to larger and/or specific populations, such as based upon geolocation, ethnicity, etc.
- an individual can appreciate visual changes in his/her own body and there are criteria that can be used to evaluate such change, such as different scales used for facial aging (e.g., wrinkles, sagging skin, etc.).
- visual works of art generally involve a subjective assessment as to whether one is “good” or “bad.” Although there may be objective metrics as to the production of the art piece that can be used to characterize it (e.g., quality of brushstrokes, light, realism, balance of color, negative space in a painting), ultimately a judgment of whether a particular person likes or dislikes a particular piece of art is highly subjective and involves both a logical and emotional decision.
- Aesthetic health and wellness is treated similarly. Specifically, visual knowledge of procedure outcomes is scarce and thus there is little way to gain more statistically significant objective data both before and after aesthetic procedures are performed.
- Figure 1 is an example block diagram of an environment for practicing the Aesthetic Delta Measurement System.
- Figure 2 illustrates screen displays of several available front end user interfaces for interacting with an example Aesthetic Delta Measurement System.
- Figures 3A-3I are example screen displays and other illustrations that show a web based portal, an aesthetic visualization portal, for interacting with an example Aesthetic Delta Measurement System to generate crowd sourced labeled visual image data and to generate physician assisted ground truth data.
- Figures 4A-4J are example screen displays that illustrate an administration portal to the web based aesthetic visualization portal used for crowd sourced data
- Figures 5A-5Q are screen displays from an example Aesthetic Delta Measurement System illustrative of a personal analysis application for viewing an individual’s aesthetic changes over time.
- Figure 6 illustrates the different bounding rectangles provided in Aesthetic Delta Measurement System client applications.
- Figure 7 illustrates an example ranking of forehead lines on a discrete scale as produced by the trained Aesthetic Delta Measurement System ML model.
- Figure 8 is an example chart illustrating the quintiles and results used for ELO scoring of facial aging in women.
- Figure 9 illustrates a power curve used to adjust the ELO ranking algorithm K-coefficient.
- Figure 10 is an example block diagram of a computing system for practicing embodiments of an example Aesthetic Delta Measurement System including example components.
- Embodiments described herein provide enhanced computer- and network-based methods, techniques, and systems for providing visual expertise to objectively measure, evaluate, and visualize aesthetic change.
- Example embodiments provide an Aesthetic Delta Measurement System (“ADMS”), which enables users to objectively measure and visualize aesthetic health and wellness and treatment outcomes and to continuously supplement a knowledge repository of objective aesthetic data based upon a combination of automated machine learning and surveyed human input data. In this manner aesthetic “knowledge” is garnered and accumulated at both an individual and at larger population levels.
- ADMS provides a labeling platform for labeling aesthetic health and wellness over large populations of individuals and a personal analysis application for viewing an individual’s aesthetic changes over time.
- the labeling platform, currently implemented as software-as-a-service accessible from a web portal, allows objective scoring and ranking of a wide swath of images reflecting aesthetic health and wellness over vast populations using a combination of machine learning and crowd sourcing.
- the accumulated, scored, and/or ranked visual image data are forwarded to a backend computing system for further use in machine learning and analysis.
- This data can be used, for example, as training, validation, or test data to a machine learning model to classify and predict aesthetic outcomes.
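As an illustrative sketch of the kind of partitioning this implies (the function name, split ratios, and record shape are assumptions for illustration, not details from the disclosure):

```python
import random

def split_dataset(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle labeled image records and partition them into
    training, validation, and test sets (remainder goes to test)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

A fixed seed keeps the partition stable across runs, so newly accumulated labeled data can be appended without reshuffling previously assigned records into a different split.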
- the personal analysis application allows objective assessment and evaluation of an individual’s aesthetic health and wellness at an instant (e.g., current time) and over a period of time.
- both static and dynamic image capture of poses of facial features are collected and visualizations presented and labeled with associated objective assessments.
- the visual knowledge (collected, assessed, annotated/labeled data) can be forwarded to the backend computing system configured to apply machine learning to classify, objectively assess the data and/or to supply further training, validation, or test data.
- the ADMS accommodates dynamic acquisition of images and is able to adjust scoring as appropriate to accommodate this dynamic data using a dynamic ranking algorithm to generate data sets for machine learning purposes (e.g., training, test, and validation data).
- This system is also able to remove garbage responses and maintain HIPAA compliance.
- Data acquisition may come from a variety of sources, including physicians, aesthetic wellness and health providers, individuals, and crowd sourced survey data.
- the data automatically and anonymously collected may be used to adjust manually curated ground truth data for machine learning purposes.
- the data whether crowd sourced or based upon personal data from procedures can be used for active learning of the ADMS; that is, to further train the machine learning models as more data is accumulated.
- the machine learning is enhanced through a combination of data sourcing and guiding annotation to provide model improvements over time.
- data may be acquired by a combination of pairwise comparison and guided scale visual recognition. Other techniques are contemplated.
- although the techniques of the ADMS are being described relative to aesthetic health and wellness, they are generally applicable to the objective measurement, assessment, evaluation, and visualization of any type of image, regardless of content, and provide a way to objectively assess and evaluate visual content in a statistically significant manner. Such images may be still images or videos.
- an objective metric such as a scale of severity, presence or not of certain characteristics or features, use of different colors, and the like.
- Example embodiments described herein provide applications, tools, data structures and other support to implement an Aesthetic Delta Measurement System to provide statistically significant objective assessment of aesthetic health and wellness data and a knowledge repository for same both at an individual level and a larger group level.
- Other embodiments of the described techniques may be used for other purposes, including for example, for determining a valuation of artwork, success/failure of application of procedures that result in visual change, and the like.
- numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the described techniques.
- the embodiments described also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the logic, different logic, etc.
- the scope of the techniques and/or functions described are not limited by the particular order, selection, or decomposition of aspects described with reference to any particular routine, module, component, and the like.
- ADMS Aesthetic Delta Measurement System
- With respect to aesthetic health and wellness data, an example ADMS is described to address objective assessment and visualization of two examples relative to human aging facial features, namely the assessment of skin aging and wrinkles in the glabellar and forehead regions of the face. Similar techniques may be applied to other bodily areas such as the lip, chin and jowl area, thighs, buttocks, and the like. Aesthetic measurement and evaluation of human beings is particularly difficult because each individual is unique.
- FIG. 1 is an example block diagram of an environment for practicing the Aesthetic Delta Measurement System.
- an Aesthetic Delta Measurement System (ADMS) server 101 which communicates to one or more data repositories 102 for storing machine learning and other image data and to one or more front end user interfaces 120 and 130 through communications network 110.
- the user interfaces 120 and 130 may be mobile or wired applications and may communicate with one or more user participants using a variety of devices including phones, smart personal devices, laptops, computers, and the like.
- One example ADMS includes a web portal 130a and 130b for facilitating the acquisition of population level labeling of images using aesthetic providers (such as physicians) and using crowd sourced survey techniques.
- It also includes a consumer application 120a and 120b (e.g., a phone application) that allows an individual to visualize and measure his/her own aesthetic features and provides for the acquisition of individual level labeling of images over time.
- the consumer application 120a-b also facilitates the acquisition of population level labeling.
- Figure 2 illustrates screen displays of several available front end user interfaces for interacting with an example Aesthetic Delta Measurement System.
- the ADMS provides a labeling platform for labeling aesthetic health and wellness over large populations of individuals, a client application (web portal interface) 201, and a personal analysis application for viewing an individual’s aesthetic changes over time, a phone application (consumer app) 202.
- Web portal 201 is targeted for physicians to obtain ground truth data for the ADMS machine learning capabilities and for obtaining other training, validation, and test data from physicians and from “surveys” available from crowd sourcing technology such as through AMAZON’S Mechanical Turk Application Programming Interfaces (API).
- Phone user interface 202 is directed to providing analysis tools for information regarding an individual’s aesthetic features currently and over time. It also provides another manner for obtaining objective labeling of aesthetic images which are forwarded to the ADMS server 101 for machine learning purposes.
- Figures 3A-3I are example screen displays and other illustrations that show a web based portal, an aesthetic visualization portal, for interacting with Aesthetic Delta Measurement System to generate crowd sourced labeled visual image data and to generate physician assisted ground truth data.
- Figure 3A illustrates a portion of the web based aesthetic visualization portal after successful login. This is the physician portal, which is currently separate from the crowd-based labeling platform portal. In other example ADMS interfaces, these access interfaces may be combined or arranged differently.
- Figure 3B is a screen display showing the different scales or characteristics/features 305 that the ADMS is interested in evaluating.
- the first scale of interest measures facial aging.
- the second scale of interest is used to measure neck volume.
- the user evaluates a series of images (photos or videos) to complete a survey.
- the aesthetic visualization portal provides a set of instructions for how to complete the survey (e.g., scoring of the images).
- the severity or scale of facial aging is represented by an evaluation guide 306 having respective scalar values 307.
- a user is instructed to move slider 308 along scale 307 (corresponding to guide 306) to indicate where the user evaluates an image 309 (shown as a placeholder) along the scale 307.
- Each scalar position (which is a number or other indicator such as a color or other discrete value) of scale 307 corresponds to a guide image in guide 306. Any form of facial scale may be incorporated.
- the user can evaluate the image 309 using three different facial views: frontal, oblique and lateral views (not shown).
- the user can place the image at a scalar position using the slider 308 as illustrated.
- Figure 3C shows the slider 308 method applied to a frontal view shown in image 309. The user can drag the slider 308 left and right to adjust where the user wants to “place” the image 309 on scale 307. Placement of the image 309 on the scale 307 assigns the corresponding scalar value to the image (data structure that corresponds to the image).
- Figure 3D illustrates a result of a user moving the slider 308 further to the right to a scalar position 311 (having a rating of 7.78) to indicate that more facial aging severity is present in the image 309.
- Figure 3E illustrates guides and corresponding scales for an oblique view 312 and a lateral view 313. These icons are placeholders for icons/thumbnails that are more representative of oblique and lateral views.
- the user interface works as described relative to the frontal view to assign objective values to image 309 (not shown).
- Figure 3F is an example screen display for an interface for assigning objective measures of neck volume to an image.
- the interface for assigning such values to a frontal view of a person in image 315 along scale 322 according to guide 321 using slider 323 operates similarly to that described with respect to Figures 3C-3E.
- Instructions 320 are given first.
- the interface in response to the user using slider 323 to move image 315 along scale 322, assigns the corresponding scalar value to the image.
- scale 322 and guide 321 have only 5 major positions.
- different scales and guides may present different numbers of primary differentiation positions.
- photo-numeric scores are used (1-9, 1-5, etc.)
- Other ADMS examples may use different scoring or measurements.
- Figures 3G-3I illustrate another portion of the web based portal.
- a different approach to assigning objective measurable criteria to images is used, and the scoring is performed by the back end, the ADMS, to rank each image according to an entire population of images.
- measurements are collected by presenting a series of image pairs to be compared (pairwise) only relative to each other, as one having more of the surveyed feature than the other, or as equal to the other. In this manner, the results of each pairwise comparison are used to dynamically rank the current image being labeled in the entire population of labeled images.
- Figure 3G is a screen display showing the characteristics/features the ADMS wants the “crowd” to use to evaluate and provide objective scoring for a plurality of images.
- the first feature of interest measures forehead lines 331 .
- the second feature of interest is used to measure glabellar lines (glabellar lines are the vertical lines between the eyebrows and above the nasal bone that are positioned over the glabellar bone).
- the URL displayed to the user determines which measurements (which aesthetic feature review) is desired. For example, a different URL may point to a glabellar line image review than the URL used to access a forehead line image review.
- the aesthetic visualization portal provides a set of instructions 333 for how to complete the survey (e.g., scoring of the images).
- the participant is asked to select which photo image of a pair of images, e.g., image 334 and image 335, contains more prominent wrinkles or wrinkles of a deeper grade in the glabellar lines. If the participant believes the images depict wrinkles in the glabellar lines that are equivalent, the participant is to choose block 336 to indicate equality. In the interface shown, the image/block selected is highlighted (here shown in red).
- Figure 3I illustrates a screen display of a similar interface to Figure 3H for forehead lines.
- the participant is asked to select which photo image of a pair of images, e.g., image 337 and image 338, contains more prominent wrinkles or wrinkles of a deeper grade in the forehead lines.
- the forehead lines are the horizontal lines above the eyebrows. If the participant believes the images depict wrinkles in the forehead lines that are equivalent, the participant is to choose block 339 to indicate equality.
- the image/block selected is highlighted (here shown in red).
- the corresponding information regarding the selection is sent to the ADMS server 101 (or ADMS ranking service, not shown) where the ranking is computed (this may be also performed by other software ranking servers or services distributed throughout the ADMS environment).
- the images (photos) to be analyzed pairwise are divided and put into bins of presumably similar images. For example, a set of 1000 images may be divided into 5 bins where a scale of 1-5 scalar values is at play. Then, a participant is tasked to compare a set of images from the same bin, in an effort to allow the system to rank all of the images within the bin using an ELO style rating system.
- ELO rating systems are typically used to stack rank players in zero-sum two-player games, when not all players can play all other players in the system (such as in chess). Other rating systems can be used by the ADMS ranking service.
- each participant is requested to analyze in a pairwise fashion a batch of photos comprising: 5 photos (images) against another single photo in a same single bin and then 1 photo in each of the other bins (9 total analysis comparisons in each survey).
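The Elo-style update for a single pairwise comparison could be sketched as follows (a simplified illustration: the fixed K of 32 is an assumption, whereas the described system adjusts the K-coefficient along a power curve, per Figure 9):

```python
def expected_score(rating_a, rating_b):
    """Expected outcome for image A against image B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, outcome, k=32.0):
    """Update both image ratings after one pairwise comparison.
    outcome: 1.0 if A was judged more severe, 0.0 if B was, 0.5 if equal."""
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (outcome - e_a)
    new_b = rating_b + k * ((1.0 - outcome) - (1.0 - e_a))
    return new_a, new_b
```

Because each update is zero-sum, the total rating mass across the population is conserved, which keeps rankings comparable as comparisons accumulate even though no image is ever compared against every other image.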
- Figures 4A-4J are example screen displays that illustrate an administration portal to the web based aesthetic visualization portal used for crowd sourced data in an example Aesthetic Delta Measurement System.
- Figure 4A illustrates the various functions available from the administrative view of the ADMS web portal. This view is used to manage Mechanical Turk engine support for administering and collecting data from the participant surveys, to provide an interface for visualizing the resultant ELO rankings, a console for managing, uploading and administering the photo images, and other administrative functions.
- Figure 4B is a screen display illustrating the various functions available for controlling the Mechanical Turk (MT) engine through the MT API. In the first function 411, “create a HIT,” a crowd job is defined and initiated.
- MT Mechanical Turk
- HIT Human Intelligence Task
- Figure 4C is a detail screen display showing components of a HIT when it is created and details the assignment duration, reward, number of workers, description, when it expires, etc.
- Figure 4D is a detail screen display showing the results of selecting a batch from a screen (not shown), when the “view hit results” function 412 is selected in Figure 4B.
- the results shown in Figure 4D show the distribution of the average scores that each photo in the batch 413 received as well as the maximum, minimum, and standard deviation of these ratings. Further, the administrator can view the photos of each person in photo area 415 as well as the number of evaluations they received, and the average and standard deviation of their scores. For example, the score 416 indicates an average value of 1.09, a standard deviation of +/-0.901, and 39 evaluations performed. These results give a visual representation of the objective scoring performed using crowd evaluated data.
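Computing such per-photo summary statistics is straightforward; a minimal sketch (the data shapes and names are illustrative assumptions):

```python
from statistics import mean, stdev

def summarize_ratings(ratings_by_photo):
    """For each photo, compute the average rating, sample standard
    deviation, and number of evaluations received."""
    summary = {}
    for photo_id, ratings in ratings_by_photo.items():
        summary[photo_id] = {
            "avg": mean(ratings),
            "std": stdev(ratings) if len(ratings) > 1 else 0.0,
            "n": len(ratings),
        }
    return summary
```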
- Figure 4E is a detail screen display showing the results of selecting a batch from a screen (not shown), when the “compare GT and crowd data” function 410 is selected in Figure 4B on a particular batch of captured data. A batch is selected (not shown) for comparison.
- This function enables a comparison of manually input ground truth data and data resulting from crowd performed evaluations.
- a portion of the resultant comparison display is shown in Figure 4F.
- the interface shows similar information to that shown in Figure 4D.
- metric 426 represents the average value minus the standard deviation value to the average value plus the standard deviation value (full range).
- Figure 4H is a screen display showing the results when the “check for Turker scammers” function 409 is selected in Figure 4B.
- the administrator portal checks each worker’s average rating and their maximum and minimum ratings, and displays a chart which shows their ratings for each photo.
- the interface automatically flags a worker if the standard deviation of the worker’s ratings is so small that the worker is essentially repeating the same score every time.
- Chart 430 illustrates a worker who appears to be fraudulently misusing the system. As shown in a close up snapshot of the display screen in Figure 4I, this worker appears to be scoring all photos in his/her batch with the same number - a 2.
- the administrator is then given an opportunity to remove such results (using the MT API), thereby preventing any skew of otherwise objective results.
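A low-variance check of this kind might look like the following sketch (the threshold and minimum sample size are assumed values, not from the disclosure):

```python
from statistics import stdev

def flag_low_variance_workers(ratings_by_worker, min_std=0.25, min_ratings=10):
    """Return workers whose ratings barely vary, e.g. a worker who
    assigns the same score to every photo in a batch."""
    flagged = []
    for worker_id, ratings in ratings_by_worker.items():
        if len(ratings) >= min_ratings and stdev(ratings) < min_std:
            flagged.append(worker_id)
    return flagged
```

Requiring a minimum number of ratings avoids flagging a worker who has simply not completed enough comparisons for the variance to be meaningful.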
- Figure 4J is a screen display illustrating a visualization and use of the ELO ranking data being accumulated.
- a bar chart visualization 443 of all of the ELO rankings for forehead line scores 440 is shown.
- the buckets/bins are differentiated by the red lines in graph 443.
- the number of photos that differ from the original ground truth data are shown in list 441 relative to the number of classes they differed (list 442) from their corresponding ground truth data. For example, this table shows that only 27 photos were “off” from their respective ground truth data by 2 classes (scores), whereas 399 photos were “off” by 1 class.
- Such analysis provides an effective check on ground truth data when such data is being curated and not already known.
- the techniques presented here may be used to determine an initial set of ground truth data. This analysis also illustrates the power of combining manually curated ground truth data with crowd sourced data to improve accuracy and reliability of a machine learning system.
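The class-offset tally shown in lists 441 and 442 could be computed along these lines (a sketch; the names and data shapes are illustrative):

```python
from collections import Counter

def class_offsets(ground_truth, crowd_classes):
    """Count how many photos differ from their ground truth class
    by 0, 1, 2, ... classes (scores)."""
    offsets = Counter()
    for photo_id, gt_class in ground_truth.items():
        offsets[abs(crowd_classes[photo_id] - gt_class)] += 1
    return dict(offsets)
```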
- the administrator can review each of the individual images (close ups) in each scalar bin for further review (not shown).
- Figures 5A-5Q are screen displays from an example Aesthetic Delta Measurement System illustrative of a personal analysis application for viewing an individual’s aesthetic changes over time.
- This application corresponds to phone application (consumer app) 202 in Figure 2.
- Phone user interface 202 is directed to providing analysis tools for information regarding an individual’s aesthetic features currently and over time. It also provides another manner for obtaining objective labeling of aesthetic images which are forwarded to the ADMS server 101 for machine learning purposes.
- after logging in and agreeing to any presented terms of use, privacy policy, etc., the user is greeted by the main display for navigating the application as shown in Figure 5A.
- the main display 500 shows which area of the image (in this case facial features), the user is interested in analyzing.
- facial region 501 comprises two different types of dots (here in different colors) to illustrate that the user can choose to work with forehead lines or glabellar lines.
- the user can either select a dot (such as dot 502) or one of the appropriate links 503 and 504 to navigate to the appropriate collection screens.
- the main display 500 offers four navigation user interface (UI) controls 505-508 at the bottom.
- the first UI control (target) 505 is for capturing images of the individual operating the phone.
- the second “people” UI control 506 is for navigating to a history of images taken for that individual, meant to show differences (the delta) of an aesthetic view over time.
- the abacus UI control 507 is for navigating to a timeline of images, again to show differences over time, and allows various filtering control.
- the “hamburger” menu UI control 508 is for access to other information such as privacy policy, terms of use, etc.
- Figure 5B shows the results of user selection of forehead line capture and analysis, for example by selecting a yellow dot in region 501 or by selecting forehead line link 503.
- The user is navigated through a set of instruction screens 510 and 511 regarding the type of aesthetic capture to be performed.
- The user is prompted in the next screens to perform a capture within a bounding rectangle, shown as green rectangle 513 in Figure 5C.
- The green rectangle signifies where the user is supposed to frame his/her forehead in the image.
- The user is instructed to take a photo by pressing UI control (button) 514.
- After the photo is taken, the user interface changes in some way, such as by making the color of the bounding rectangle less vivid or by changing the color to yellow instead of green.
- Other feedback mechanisms such as audio or haptic feedback are also possible.
- The blue bounding rectangles also presented in Figure 5C are debugging rectangles, which are used to calculate the location of the forehead and the glabella region using internal feature and image recognition software present in the phone itself. Other visual prompts (not shown) may also be present.
- In Figure 5D, the user is prompted to retake or accept the photo in display 516.
- FIG. 5E illustrates a screen display for instructions on taking dynamic photos of forehead lines.
- Figure 5F illustrates a screen display for reviewing and submitting a captured image of forehead lines resulting from movement.
- The forehead lines captured as a result of movement in Figure 5F are more pronounced than those captured statically in Figure 5D.
- These images are forwarded to the ADMS server 101 and data storage 102 in Figure 1 to be scored using machine learning engines/models, as described above.
- The scores are presented to the user; in some examples they are used to enhance machine learning datasets, for example to acquire more training, test, and validation data.
- Figures 5G-5L are screen displays used for glabellar line capture and work similarly to the capture of forehead lines described with reference to Figures 5B-5F. These screen displays result from an individual selecting the glabellar line link 504 (Figure 5A) to navigate to glabellar line capture.
- The sequence shown in Figures 5G-5L operates in the same manner as that described with reference to Figures 5B-5F, including both static and dynamic capture.
- The bounding rectangle shown in Figures 5I and 5L is appropriately relocated and redrawn by the capture software.
- The confirmation and submit screens (e.g., for scoring or retaking the photos) are not shown.
- Figures 5M-5O are screen displays resulting from user selection of the people UI control 506 in Figure 5A to navigate to an image history showing evaluations of aesthetic features captured for that individual over time.
- Figure 5M shows a series of images of static forehead captures, each annotated with a label regarding when it was captured. Similar images are displayed (not shown) for the other captures taken dynamically and for glabellar lines, e.g., as displayed in Figure 5O.
- A particular image may be selected, and in some example ADMS applications, for example as shown in Figure 5W, the score and other metadata and/or annotations may be displayed (e.g., body part, date, score, and other data, such as treatment data).
- The individual can thereby observe objective measurements of his/her aesthetic history, for example as a result of a series of aesthetic wellness and health treatments over time.
- Figures 5P-5Q are screen displays resulting from user selection of the abacus (timeline) UI control 507 in Figure 5A to navigate to a timeline visual of objective scores of the aesthetic features captured for that individual over time.
- Each “dot” 540 (or other indicator) corresponds to a photo and is associated with an objective measurement determined by the ADMS server (e.g., ADMS server 101 in Figure 1).
- A score displayed as in Figure 5P may be different or modified from that of the ADMS server, for example so as to convey more or different information.
- The different types of captures can be indicated using different visuals, such as different colored dots.
- The display can be filtered to show only certain designated types of aesthetic features (e.g., by selecting filtering link 541).
- An individual can also navigate to the scoring of a particular image by selecting one of the dots, such as dot 540, which results in Figure 5Q, similar to Figure 5N.
- Some personal analysis applications for an example ADMS also include an ability for the individual to contribute to the ADMS aesthetic data repository.
- This device annotation capability operates similarly to the interface described for crowd workers/participants in the web-based aesthetic visualization portal used for crowd-sourced data described with reference to Figures 3G-3I. In this scenario, the individual would sign up and agree to answer “surveys” regarding aesthetic appearance data of other persons.
- The interface can operate using the pairwise comparisons described with reference to Figures 3G-3I or the guide/scale indications described with reference to Figures 3A-3F. In either case, the obtained evaluation and objective measurement data is forwarded to the ADMS and its associated data repository for further use in machine learning and analysis. Other annotations and labeling could also be performed.
- The ADMS provides bounding rectangles that allow the user to position the phone camera correctly to acquire data.
- The application uses facial landmark detection algorithms to help the user position the picture correctly and to determine the regions of interest (patches) within the image that should be analyzed by the remainder of the machine learning pipeline. For example, different bounding rectangles are provided for measuring glabellar lines versus forehead lines (compare Figure 5C with Figure 5I).
- FIG. 6 illustrates the different bounding rectangles 601 and 602 that are provided for acquisition of forehead lines 603 and glabellar lines 604, respectively. Once satisfied, the user can take the picture, or it can be taken automatically depending upon the configuration.
- The pupils of the individual are the facial landmarks used to determine a correct location of the forehead region and glabellar region.
- Other landmarks may be similarly incorporated.
- Facial landmark detection is based on keypoint detection in images.
- A traditional architecture is stacked hourglass networks, but other approaches have emerged in recent years and are frequently published.
- The ADMS is operative with any and all of these approaches.
- Off-the-shelf detectors can be used (e.g., the facial feature detection included in the iOS SDK on iOS devices).
- Landmark detection is similarly customized and adapted to one or more keypoint detection algorithms used in pose estimation (e.g., stacked hourglass networks).
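The pupil-based placement of region rectangles described above can be sketched as follows. This is a minimal illustration only: the geometric proportions and the `Rect` helper are assumptions for demonstration, not the ADMS's actual region geometry.

```python
# Illustrative sketch only: deriving forehead and glabella bounding
# rectangles from detected pupil landmarks. All proportions are assumed
# for demonstration and are not the ADMS's actual geometry.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge (image origin at top-left)
    y: float  # top edge
    w: float  # width
    h: float  # height

def region_rects(left_pupil, right_pupil):
    """Given (x, y) pupil coordinates, return illustrative forehead
    and glabella rectangles positioned relative to the pupils."""
    ipd = right_pupil[0] - left_pupil[0]          # inter-pupillary distance
    cx = (left_pupil[0] + right_pupil[0]) / 2.0   # midpoint x
    cy = (left_pupil[1] + right_pupil[1]) / 2.0   # midpoint y
    # Forehead: a wide band well above the pupils (assumed proportions).
    forehead = Rect(cx - ipd, cy - 2.0 * ipd, 2.0 * ipd, 1.2 * ipd)
    # Glabella: a narrower patch between/above the brows (assumed proportions).
    glabella = Rect(cx - 0.5 * ipd, cy - 0.9 * ipd, ipd, 0.7 * ipd)
    return forehead, glabella

forehead, glabella = region_rects((420.0, 500.0), (580.0, 500.0))
```

Because both rectangles derive from the same landmark pair, relocating the rectangle for glabellar versus forehead capture (compare Figures 5C and 5I) reduces to selecting which derived patch to display.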
- The application then forwards (e.g., sends, communicates, etc.) the extracted images to a server-side application (such as one running on the ADMS server 101 in Figure 1) for further processing.
- Example ADMS environments use traditional machine learning pipelines. Once data is acquired as just described, the next step in the machine learning pipeline is to take the extracted regions (the determined patches) from step 1 and rate them on an appropriate scale (see Figures 3A-3F) or rank them as described further below using an ELO ranking algorithm (see also Figures 3G-3I).
- The rating/scaling is used to generate the training data set for the ADMS machine-learning-based rating algorithm.
- The machine learning models take into account both trusted human data (used as seed data) and data from untrusted sources: the crowd data interfaces and the user applications associated with aesthetic procedures, whose users can view their own changes over time.
- The ADMS may guide the human labeling process so that annotators are steered towards areas where its models are underperforming. For example, suppose that the data in one geolocation of the world includes a different population composition than a second geolocation (for example, the latter might have a younger population or different ethnicities); additional labeling can then be directed at the underrepresented population. This results in an active learning feedback loop which ultimately enhances the precision and recall of the predictive ratings.
- One example ADMS machine learning environment uses convolutional neural networks (CNN) typically used in the context of image classification to assign each image a position on a scale.
- The CNN is trained based on the manually (crowd or individual) ranked training data, using CNN architectures such as VGG16, ResNet-50, and other variations, either as a classification model (assign the image to discrete/scalar categories, e.g., 0, 1, 2, 3, 4, 5) or a regression model (predict the “soft” ELO score directly, e.g., a score between 800 and 1600).
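As a minimal, hedged illustration of how the two output formulations relate, the sketch below assumes a simple linear correspondence between the discrete 0-5 scale and an 800-1600 Elo range; the ADMS's actual calibration between the two is not specified here.

```python
# Illustrative sketch: relating a regression model's "soft" Elo output to
# the discrete categories a classification model would emit. The linear
# 0-5 <-> 800-1600 mapping is an assumption for demonstration.
ELO_MIN, ELO_MAX = 800.0, 1600.0
N_CLASSES = 6  # discrete categories 0..5

def elo_to_class(elo):
    """Map a soft Elo score onto the nearest discrete category."""
    frac = (elo - ELO_MIN) / (ELO_MAX - ELO_MIN)
    frac = min(max(frac, 0.0), 1.0)  # clamp scores outside the range
    return round(frac * (N_CLASSES - 1))

def class_to_elo(category):
    """Map a discrete category back to its equivalent soft Elo score."""
    return ELO_MIN + (ELO_MAX - ELO_MIN) * category / (N_CLASSES - 1)
```

Under this assumption, a trained regression head and a trained classification head can be compared on the same scale during evaluation.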
- Figure 7 illustrates an example ranking of forehead lines on a discrete scale as produced by the trained (classification) model.
- The choice of CNN architecture is made based on performance on the given dataset (aesthetic area of interest) and may vary from model to model. Ultimately, CNNs are particularly well suited to this task due to their outstanding performance in image classification.
- ADMS uses the Mechanical Turk (MT) web platform for crowdsourcing.
- MT is used to employ some number (e.g., 40) of independent workers to navigate through groups of photos with an attached scale. These workers will give their opinion of the state of the subject’s photo on a sliding scale with precision to the hundredth of a point. These data are then aggregated to create overall scores for a user’s photo.
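A minimal sketch of this aggregation step follows, assuming a simple unweighted mean; the actual ADMS aggregation may weight, filter, or otherwise combine worker opinions.

```python
# Illustrative sketch: aggregating independent crowd-worker ratings into a
# single overall score for a photo. Plain averaging is an assumption.
def aggregate_ratings(ratings):
    """Average worker ratings (each given on a sliding scale with
    two-decimal precision) and round back to hundredths."""
    if not ratings:
        raise ValueError("no ratings to aggregate")
    return round(sum(ratings) / len(ratings), 2)

# e.g., four of the ~40 workers rating one photo on a 0-5 scale:
overall = aggregate_ratings([2.25, 2.50, 1.75, 2.50])
```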
- One example ADMS uses a ranking service (application, server, or the like) to dynamically recompute rankings for the entire population of images based upon a newly acquired image.
- One ADMS ranking service incorporates an ELO rating system to stack the images.
- The K-coefficient could be dynamically changed based on a number of variables that more accurately reflect a person’s physical traits, such as:
- The logic for computing the ranking of a new image (image evaluation) performed by the ADMS is as follows:
- N_new = N_comparison -> the images are tied.
- Figure 8 is an example chart illustrating the quintiles and results used for ELO scoring of facial aging in women.
- The Elo score for a new image may be calculated by comparing it to an existing image (‘Old’) after they are evaluated against one another. Given initial Elo scores:
- ELO_new initial score is the ML-predicted Elo score for the new image.
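A standard Elo update for such a pairwise evaluation can be sketched as follows; the K-coefficient value of 32 is an illustrative default (the text notes it could be varied dynamically), not the ADMS's actual setting.

```python
# Illustrative sketch of a standard Elo update between a new image (whose
# initial score is the ML-predicted Elo) and an existing 'Old' image after
# a pairwise evaluation. K = 32 is an assumed default.
def expected_score(rating_a, rating_b):
    """Expected 'win' probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def elo_update(elo_new, elo_old, outcome, k=32.0):
    """outcome: 1.0 if the new image is rated higher, 0.0 if lower,
    0.5 for a tie. Returns the updated (elo_new, elo_old) pair."""
    e = expected_score(elo_new, elo_old)
    delta = k * (outcome - e)
    return elo_new + delta, elo_old - delta

# A new image (predicted 1200) rated above an old image also at 1200:
new_score, old_score = elo_update(1200.0, 1200.0, 1.0)
```

Note that a tie (outcome 0.5) between two equally rated images leaves both scores unchanged, matching the tie case above.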
- The final ‘ADMS score’ for a given image is calculated as the percentile of the image’s ELO score within that image’s Trait-Sex-Fitzpatrick and 5-year age strata.
- The ADMS score within important user demographic strata will always be bounded between 0 and 100, where 0 indicates the lowest rated image and 100 indicates the highest rated image.
- The 0 to 100 ADMS score is therefore comparable across body regions within the same individual user, as well as across images obtained for a given user over time. Ratings obtained prior to appearance-enhancing treatments may be compared to post-treatment scores, both as the difference in ADMS scores and as percentage improvements. Very high and very low percentiles can be presented as ‘>99’ (for example) for any ADMS score.
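The percentile computation described above can be sketched as follows; the midpoint convention for tied Elo scores and the display clipping thresholds are assumptions for illustration.

```python
# Illustrative sketch: the 0-100 'ADMS score' as the percentile of an
# image's Elo score within its Trait-Sex-Fitzpatrick and 5-year age
# stratum. The midpoint tie convention is assumed.
def adms_score(elo, stratum_elos):
    """Percentile (0-100) of `elo` among Elo scores in the same stratum;
    0 corresponds to the lowest rated image and 100 to the highest."""
    if not stratum_elos:
        raise ValueError("empty stratum")
    below = sum(1 for s in stratum_elos if s < elo)
    equal = sum(1 for s in stratum_elos if s == elo)
    return 100.0 * (below + 0.5 * equal) / len(stratum_elos)

def display_score(score):
    """Clip extreme percentiles for display, e.g., '>99'."""
    if score > 99.0:
        return ">99"
    if score < 1.0:
        return "<1"
    return f"{score:.0f}"
```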
- FIG. 10 is an example block diagram of a computing system for practicing embodiments of an example Aesthetic Delta Measurement System server, including example components.
- an ADMS server may be implemented in software, hardware, firmware, or in some combination to achieve the capabilities described herein.
- the computing system 1000 may comprise one or more server and/or client computing systems and may span distributed locations.
- each block shown may represent one or more such blocks as appropriate to a specific embodiment or may be combined with other blocks.
- the various blocks of the Aesthetic Delta Measurement System 1010 may physically reside on one or more machines, which use standard (e.g., TCP/IP) or proprietary interprocess communication mechanisms to communicate with each other.
- Computer system 1000 comprises a computer memory (“memory”) 1001, a display 1002, one or more Central Processing Units (“CPU”) 1003, Input/Output devices 1004 (e.g., keyboard, mouse, CRT or LCD display, etc.), other computer-readable media 1005, and one or more network connections 1006.
- The ADMS Server 1010 is shown residing in memory 1001. In other embodiments, some portion of the contents, or some or all of the components, of the ADMS Server 1010 may be stored on and/or transmitted over the other computer-readable media 1005.
- The components of the Aesthetic Delta Measurement System 1010 preferably execute on one or more CPUs 1003 and manage the acquisition, objective measurement, and evaluation use of aesthetic features and images, as described herein.
- The ADMS Server 1010 includes one or more machine learning (ML) engines or models 1011; one or more data acquisition tools, ranking services, and support 1012; ML model support 1013 (for supporting the ML implementation, storage of models, testing, and the like); and visualization and graphics support 1014.
- Data repositories may also be present, such as ML data 1015 and other ADMS data 1016.
- The components may be provided external to the ADMS and be available, potentially, over one or more networks 1050.
- Other and/or different modules may be implemented.
- The ADMS may interact via a network 1050 with application or client code 1055 that, for example, acquires images and causes them to be scored, or that uses the scores and rankings computed by the data acquisition and ranking support 1012; one or more other client computing systems, such as the web labeling/annotating platform 1060; and/or one or more third-party information provider systems 1065, such as providers of scales/guides to be used in the visualizations and ML predictions.
- The ML data repository 1016 may be provided external to the ADMS as well, for example in a data repository accessible over one or more networks 1050.
- The components/modules of the ADMS Server 1010 are implemented using standard programming techniques.
- For example, the ADMS Server 1010 may be implemented as a “native” executable running on the CPU 1003, along with one or more static or dynamic libraries.
- Alternatively, the ADMS Server 1010 may be implemented as instructions processed by a virtual machine.
- A range of programming languages known in the art may be employed for implementing such example embodiments, including representative implementations of various programming language paradigms, including but not limited to object-oriented, functional, procedural, scripting, and declarative.
- the embodiments described above may also use well-known or proprietary, synchronous or asynchronous client-server computing techniques.
- the various components may be implemented using more monolithic programming techniques, for example, as an executable running on a single CPU computer system, or alternatively decomposed using a variety of structuring techniques known in the art, including but not limited to, multiprogramming, multithreading, client-server, or peer-to-peer, running on one or more computer systems each having one or more CPUs.
- Some embodiments may execute concurrently and asynchronously and communicate using message passing techniques. Equivalent synchronous embodiments are also supported.
- Programming interfaces to the data stored as part of the ADMS Server 1010 can be made available by standard mechanisms such as C, C++, C#, and Java APIs; libraries for accessing files, databases, or other data repositories; markup languages such as XML; or Web servers, FTP servers, or other types of servers providing access to stored data.
- The repositories 1015 and 1016 may be implemented as one or more database systems, file systems, or any other technique for storing such information, or any combination of the above, including implementations using distributed computing techniques.
- The example ADMS Server 1010 may be implemented in a distributed environment comprising multiple, even heterogeneous, computer systems and networks. Different configurations and locations of programs and data are contemplated for use with the techniques described herein.
- The server and/or client may be physical or virtual computing systems and may reside on the same physical system.
- One or more of the modules may themselves be distributed, pooled, or otherwise grouped, such as for load balancing, reliability, or security reasons.
- a variety of distributed computing techniques are appropriate for implementing the components of the illustrated embodiments in a distributed manner including but not limited to TCP/IP sockets, RPC, RMI, HTTP, Web Services (XML-RPC, JAX-RPC, SOAP, etc.) and the like. Other variations are possible.
- other functionality could be provided by each component/module, or existing functionality could be distributed amongst the components/modules in different ways, yet still achieve the functions of an ADMS.
- The ADMS Server 1010 may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers executing appropriate instructions (including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and the like.
- System components and/or data structures may also be stored as contents (e.g., as executable or other machine-readable software instructions or structured data) on a computer-readable medium (e.g., a hard disk; memory; network; other computer-readable medium; or other portable media article to be read by an appropriate drive or via an appropriate connection, such as a DVD or flash memory device) to enable the computer-readable medium to execute or otherwise use or provide the contents to perform at least some of the described techniques.
- Some or all of the components and/or data structures may be stored on tangible, non-transitory storage mediums.
- System components and data structures may also be stored as data signals (e.g., by being encoded as part of a carrier wave or included as part of an analog or digital propagated signal) on a variety of computer-readable transmission mediums, which are then transmitted, including across wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Computer program products may also take other forms in other embodiments. Accordingly, embodiments of this disclosure may be practiced with other computer system configurations.
- As shown in FIG. 10, there are different client applications that can be used to interact with the ADMS Server 1010. These client applications can be implemented using a computing system (not shown) similar to that described with respect to Figure 10. Note that one or more general-purpose virtual or physical computing systems suitably instructed, or a special-purpose computing system, may be used to implement an ADMS client. However, just because it is possible to implement the Aesthetic Delta Measurement System on a general-purpose computing system does not mean that the techniques themselves or the operations required to implement them are conventional or well known.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163145463P | 2021-02-03 | 2021-02-03 | |
PCT/US2022/014959 WO2022169886A1 (en) | 2021-02-03 | 2022-02-02 | Quantifying and visualizing changes over time to health and wellness |
Publications (2)
Publication Number | Publication Date |
---|---|
EP4287937A1 true EP4287937A1 (en) | 2023-12-13 |
EP4287937A4 EP4287937A4 (en) | 2024-12-25 |
Family
ID=82742478
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22750335.6A Pending EP4287937A4 (en) | 2021-02-03 | 2022-02-02 | QUANTIFICATION AND VISUALIZATION OF CHANGES IN HEALTH AND WELL-BEING OVER TIME |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240120071A1 (en) |
EP (1) | EP4287937A4 (en) |
CA (1) | CA3207165A1 (en) |
MX (1) | MX2023009104A (en) |
WO (1) | WO2022169886A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012300B (en) * | 2022-12-05 | 2025-07-25 | 福州大学 | Multi-mode image aesthetic quality evaluation method integrating local and global image features |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090055245A1 (en) * | 2007-08-15 | 2009-02-26 | Markettools, Inc. | Survey fraud detection system and method |
WO2016025989A1 (en) * | 2014-08-18 | 2016-02-25 | Epat Pty Ltd | A pain assessment method and system |
US10172517B2 (en) * | 2016-02-25 | 2019-01-08 | Samsung Electronics Co., Ltd | Image-analysis for assessing heart failure |
CN107590478A (en) * | 2017-09-26 | 2018-01-16 | 四川长虹电器股份有限公司 | A kind of age estimation method based on deep learning |
US11151362B2 (en) * | 2018-08-30 | 2021-10-19 | FaceValue B.V. | System and method for first impression analysis and face morphing by adjusting facial landmarks using faces scored for plural perceptive traits |
CN110689523A (en) | 2019-09-02 | 2020-01-14 | 西安电子科技大学 | Personalized image information evaluation method based on meta-learning and information data processing terminal |
-
2022
- 2022-02-02 MX MX2023009104A patent/MX2023009104A/en unknown
- 2022-02-02 EP EP22750335.6A patent/EP4287937A4/en active Pending
- 2022-02-02 US US18/275,129 patent/US20240120071A1/en active Pending
- 2022-02-02 WO PCT/US2022/014959 patent/WO2022169886A1/en active Application Filing
- 2022-02-02 CA CA3207165A patent/CA3207165A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4287937A4 (en) | 2024-12-25 |
WO2022169886A1 (en) | 2022-08-11 |
US20240120071A1 (en) | 2024-04-11 |
MX2023009104A (en) | 2023-10-19 |
CA3207165A1 (en) | 2022-08-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20230821 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Free format text: PREVIOUS MAIN CLASS: A61B0005000000 Ipc: G06F0021550000 |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20241126 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06F 3/04847 20220101ALN20241121BHEP Ipc: G06V 10/774 20220101ALI20241121BHEP Ipc: G06V 10/82 20220101ALI20241121BHEP Ipc: G06V 40/16 20220101ALI20241121BHEP Ipc: A61B 5/00 20060101ALI20241121BHEP Ipc: G06F 21/55 20130101AFI20241121BHEP |