WO1998020458A9 - Aesthetic imaging system - Google Patents

Aesthetic imaging system

Info

Publication number
WO1998020458A9
Authority
WO
WIPO (PCT)
Prior art keywords
image
original
user
digital image
imaging system
Application number
PCT/US1997/020394
Other languages
English (en)
Other versions
WO1998020458B1 (fr)
WO1998020458A1 (fr)
Priority to AU52501/98A (AU5250198A)
Publication of WO1998020458A1
Publication of WO1998020458B1
Publication of WO1998020458A9


Description

  • This invention generally relates to computer imaging programs and, more specifically, to a method and apparatus for manipulating digital photographs. Background of the Invention
  • Existing aesthetic imaging systems typically use a number of tools to allow a physician to manipulate a pre-operative image of a patient to illustrate an intended post-operative result.
  • the tools allow the physician to manipulate the preoperative image during a consultation with a patient. By manipulating the image with the patient in attendance, the patient receives immediate feedback from the displayed results.
  • the use of the editing tools should be as unobtrusive as possible.
  • a physician would like the patient to focus on the end results of the surgery, rather than the technologic wizardry used to demonstrate those results on the video monitor.
  • the editing tools used in existing aesthetic imaging systems typically hinder rather than help the physician in demonstrating the results that may be achieved through cosmetic surgery.
  • a disadvantage of existing aesthetic imaging programs is that a physician or facilitator in a pre-operative consultation typically must go back and forth through many windows-based menus in order to edit an image. Cycling between the various menus to invoke the tools necessary for a consultation is disadvantageous in that it is time consuming. For example, some physicians schedule follow-up visits for patients to allow the physician time to edit the images. More important, however, is that the process is distracting to the patient and tends to make the pre-operative consultation all the more mystifying. As a result of the disadvantages associated with prior art systems, some patients lose interest or become frustrated with the interview, both of which may reflect back on the physician.
  • a further disadvantage of existing aesthetic imaging systems is that it is impossible for a physician or facilitator to display different combinations of the edits that they have performed.
  • In existing aesthetic imaging programs, as a physician edits a patient's image, the physician's edits are added to the preexisting edits of the image. Most programs are only capable of showing two versions of the patient's image: the unedited, original version, and the final edited version incorporating all of the physician's changes. It is therefore difficult for the physician to show various combinations of the edits that have been performed. For example, a physician may edit an image to remove wrinkles around a patient's eyes and to narrow the patient's nose.
  • Existing aesthetic imaging programs only allow the physician to show all of these changes simultaneously.
  • a still further disadvantage of existing aesthetic imaging systems is that the systems allow a physician to perform nearly flawless editing of a patient's image.
  • the edits performed by a physician on an aesthetic imaging system often depict results that cannot be achieved when actual surgery is performed.
  • Unless the physician is especially skilled at using the aesthetic imaging system, it is difficult to show the patient achievable results, which typically fall within a range somewhere between the original patient image and the optimum results displayed by the edited image on the screen. It therefore would be advantageous to develop an aesthetic imaging system that allowed a physician to display more realistic results that are achievable through surgery.
  • the method comprises: (a) evaluating the following variables: (i) the status of the tip of the pen; (ii) the status of the side button on the pen; and (iii) movement of the pen tip relative to the tablet; (b) actuating a freehand drawing mode if a first set of variables are present, wherein movement of the pen relative to the tablet edits pixels that are located at positions corresponding to the position of the cursor; and (c) actuating a curve drawing mode if a second set of variables are present, wherein a line segment is displayed between two endpoints and movement of the pen relative to the tablet stretches the line segment, forming a curve and editing pixels that are located at positions corresponding to the position of the curve.
  • The method further includes: (a) actuating a freehand undo mode if a third set of variables are present, wherein movement of the pen relative to the tablet restores pixels that are located at positions corresponding to the position of the cursor to their pre-edited color; and (b) actuating a curve undo mode if a fourth set of variables are present, wherein a line segment is displayed between two endpoints and movement of the pen relative to the tablet stretches the line segment, forming a curve and restoring pixels that are located at positions corresponding to the position of the curve to their pre-edited color.
  • the freehand draw mode is actuated if the tip of the pen is depressed and pressure is maintained while the tip is moved a predetermined distance.
  • the curve draw mode is actuated if the tip of the pen is depressed and released within a predetermined distance.
  • the curve draw mode is actuated by: (a) establishing a first endpoint at the position of the pen when the second set of variables are present; and (b) monitoring the status of the tip of the pen and establishing a second endpoint at the position of the pen if the tip is toggled from an off state to an on state.
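  • The pen-driven mode selection described above may be easier to follow with a short sketch. The following Python fragment is illustrative only and is not the patented implementation; the event fields, the mode names, and the movement threshold are assumptions introduced for this example.

        from dataclasses import dataclass

        TIP_MOVE_THRESHOLD = 10  # assumed "predetermined distance" in tablet units

        @dataclass
        class PenEvent:
            tip_down: bool         # status of the tip of the pen
            side_button: bool      # status of the side button on the pen
            distance_moved: float  # movement of the pen tip relative to the tablet

        def select_mode(event: PenEvent) -> str:
            """Return the editing mode implied by the current pen state."""
            if event.tip_down and event.distance_moved >= TIP_MOVE_THRESHOLD:
                # Tip pressed and dragged: freehand editing at the cursor position.
                return "freehand_undo" if event.side_button else "freehand_draw"
            if event.tip_down:
                # Tip pressed and released within the predetermined distance:
                # begin placing the endpoints of a curve.
                return "curve_undo" if event.side_button else "curve_draw"
            return "idle"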
  • an improved prioritize feature for viewing an image.
  • a user may identify several areas in a modified patient image containing edits that alter the image from the original image. As each area is identified by the user, an identifying tag is assigned to each of the areas. When desiring to show various combinations of the edits that have been performed on the image, the user may select the areas to display using the identifying tags. A user may therefore quickly cycle through various permutations of the procedures that have been edited for patient display.
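  • As an illustration of how such identifying tags might be handled, the sketch below keeps a pixel mask per tagged area and composites only the selected edits over the original image. The data structures and tag names are hypothetical assumptions, not part of the disclosure.

        import numpy as np

        def compose(original, modified, masks, selected_tags):
            """Show only the edits whose tags are selected; elsewhere keep the original.
            `masks` maps an identifying tag to a boolean mask of that edited area."""
            shown = original.copy()
            for tag in selected_tags:
                area = masks[tag]             # boolean mask of the tagged edit area
                shown[area] = modified[area]  # reveal the edited pixels for that area
            return shown

        # Hypothetical usage: display the nose and chin edits, but not the neck edit.
        # frame = compose(original_img, modified_img, masks, {"nose", "chin"})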
  • an improved user interface is provided to minimize the distraction of a patient as the patient is watching the image being edited.
  • a menu bar on the top of the display is removed during most editing, so that only the image of the patient is displayed.
  • the bar itself is transparent to allow the patient to see the image through the menu bar. Only the commands and the outline of the menu bar are presented in a contrasting color, minimizing the overall visual impression created by the menu bar.
  • a number of modules are provided to allow the user to improve the quality of an image, to analyze an image, or to prepare an image for meetings and presentations.
  • a color correction module allows the color of an original image to be closely matched with the color of a target image.
  • An orientation correction module allows the size and orientation of an original image to be closely matched with the size and orientation of a target image.
  • a measurement module allows angles, distances, areas, and proportions to be measured and recorded on an image for presentation to a patient or colleagues.
  • a label module allows structures in a patient's image to be linked to textual descriptions. The modules greatly improve the ability to compare two images or maintain accurate records of achievable surgical results.
  • a module is also provided that simulates the effect of laser resurfacing treatment on a patient.
  • the module accurately portrays the image of the patient immediately following surgery, the image of the patient during the healing process, and the image of the patient when the patient has fully healed.
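  • For the color correction module mentioned above, one plausible way to match an original image's color to a target image is to align the per-channel mean and spread of the pixel values. The sketch below illustrates that common technique; it is an assumption for illustration and is not asserted to be the module's actual method.

        import numpy as np

        def match_color(original, target):
            """Shift and scale each RGB channel of `original` toward `target`."""
            out = original.astype(np.float64)
            for c in range(3):  # R, G, B
                o_mean, o_std = out[..., c].mean(), out[..., c].std() + 1e-6
                t_mean, t_std = target[..., c].mean(), target[..., c].std()
                out[..., c] = (out[..., c] - o_mean) * (t_std / o_std) + t_mean
            return np.clip(out, 0, 255).astype(np.uint8)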
  • An advantage of the tools, features, and modules described herein is that they improve the overall experience of a patient during a preoperative visit with a physician.
  • the powerful tools in the aesthetic imaging system allow the physician to easily manipulate the patient's image in response to feedback provided by the patient.
  • the aesthetic imaging system interface also allows the patient to focus on the image being manipulated, rather than on the aspects of the aesthetic imaging system that allow the manipulation.
  • the end result is an improved preoperative visit that provides a more realistic impression of the results that a physician may achieve through surgery.
  • FIGURE 1 is a block diagram of an aesthetic imaging system in accordance with the invention
  • FIGURE 2 is a block diagram illustrating various buffers used by the aesthetic imaging system to store and manipulate data
  • FIGURE 3 is a flow chart illustrating an exemplary routine by which digital images may be viewed and edited using the aesthetic imaging system
  • FIGURE 4A is a flow chart of an exemplary routine for photographing patients in accordance with the invention.
  • FIGURE 4B is a flow chart of an exemplary routine for calculating a checksum value and comparing the calculated value to a previously stored value to determine if an image has been altered;
  • FIGURE 5 is a flow diagram of an exemplary routine for implementing a combination tool for use with various drawing (draw) tools in accordance with the invention
  • FIGURE 6 is a pictorial representation of an image to be edited
  • FIGURES 7A-7E are pictorial representations of editing an image using a prior art imaging program
  • FIGURES 8A-8E are pictorial representations of using the aesthetic imaging system to accomplish the identical edits shown in FIGURES 7A-7E;
  • FIGURE 9A is a flow chart of an exemplary routine of a contour tool in accordance with the invention.
  • FIGURES 9B-9C are pictorial representations illustrating the function of the contour tool of FIGURE 9A;
  • FIGURES 9D-9G are pictorial representations illustrating exemplary edits that may be accomplished using the contour tool
  • FIGURE 10 is a flow chart of an exemplary routine for implementing an autoblend tool in accordance with the invention
  • FIGURE 11 is a pictorial representation of a user interface for implementing the autoblend tool of FIGURE 10;
  • FIGURE 12A is a flow chart of an exemplary routine illustrating a cutout tool in accordance with the invention
  • FIGURE 12B is a flow diagram of an exemplary routine illustrating a rotate tool in accordance with the invention
  • FIGURE 13 is a flow chart of an exemplary routine for viewing images in accordance with the invention.
  • FIGURES 14A-14D are pictorial representations illustrating the effects of a compare feature in accordance with the invention.
  • FIGURES 15A-15C illustrate a split image option of viewing images in accordance with the invention
  • FIGURE 16 is a pictorial representation illustrating the use of a translucent image to allow a patient to accurately position themselves in order to capture a second image having the same location and orientation as an original stored image;
  • FIGURE 17 is a pictorial representation illustrating a compare image wherein a presurgical image of a patient is compared side-by-side with a postsurgical image having the same location and orientation;
  • FIGURES 18A-18C are pictorial representations illustrating the use of a warp shape tool to edit a patient's image
  • FIGURES 19A-19E are pictorial representations illustrating the function of the warp tool
  • FIGURE 20 is a pictorial representation illustrating the use of a transparent menu bar when viewing an image of a patient
  • FIGURE 21 is a flow chart of an exemplary routine for implementing a zoom viewing feature in accordance with the invention
  • FIGURES 22A-22D are flow charts of exemplary routines for implementing a number of modules that may be used to optimize an image for presentations or display;
  • FIGURE 23 is a pictorial representation illustrating the use of a color correction module in accordance with the invention.
  • FIGURE 24 is a pictorial representation illustrating the use of an orientation correction module in accordance with the invention.
  • FIGURES 25A-25D are pictorial representations illustrating the use of a measurement module in accordance with the invention.
  • FIGURE 26 is a pictorial representation illustrating the use of a labeling module in accordance with the invention.
  • FIGURES 27A-27E are block diagrams and pictorial representations of the implementation and use of a laser resurfacing simulation in accordance with the invention.
  • An aesthetic imaging system 20 in accordance with the invention is illustrated in FIGURE 1.
  • the aesthetic imaging system 20 includes an imaging program 21 that runs on a processing unit 22 controlled by an operating system 24.
  • a memory 26 is connected to the processing unit and generally comprises, for example, random access memory (RAM), read only memory (ROM), and magnetic storage media such as a hard drive, floppy disk, or magnetic tape.
  • the processing unit and memory are typically housed within a personal computer 28 which may be, for example, a Macintosh™, International Business Machines (IBM™), or IBM-compatible personal computer.
  • For IBM and IBM-compatible personal computers, the operating system 24 may be DOS-based or may incorporate a windowing environment such as Microsoft Windows™ or OS/2™.
  • the aesthetic imaging system also includes an image capture board 30 that is coupled to the processing unit 22, a monitor 32, video source 34, and printer 36.
  • the video source, monitor, and printer are coupled to the processing unit 22 through the image capture board 30.
  • the video source may include one or more video cameras, a VCR, a scanner, or similar source for providing digital images to be edited by the aesthetic imaging system.
  • the aesthetic imaging system further includes a pointing device, which is preferably a stylus (pen) and tablet 38, that is connected to the processing unit 22.
  • the aesthetic imaging system may include a modem 40 to provide on-line capabilities to users of the system, such as technical support and teleconferencing.
  • the image capture board 30 has a plurality of buffers in high-speed memory, e.g., RAM, that are used by the imaging program 21 to provide very fast response times to image edits.
  • four buffers are illustrated for use in explaining the operation of the aesthetic imaging system. These include an original image buffer 50, a modified image buffer 52, a current image buffer 54, and a working buffer 56.
  • Suitable image capture boards for use in the aesthetic imaging system include the Targa +64 and Targa 2000 boards, distributed by Truevision, Inc. of Indianapolis, Indiana. The buffers are discussed in regard to a single pose only, such as a profile or front view of a person.
  • the original image buffer 50 contains an unedited digital image, for example, a side profile picture of a potential patient.
  • the modified image buffer 52 contains any edits made to a copy of the original image.
  • the modified image buffer is updated during a save and after each session.
  • the current image buffer 54 contains information identical to the modified image buffer upon beginning a session. Thereafter, edits made to the current image are saved in the working buffer 56 as an overlay to the current image. During a save, the contents of the current image buffer 54 are copied to the modified image buffer 52, and the working buffer 56 is cleared.
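  • A minimal sketch of this buffer arrangement is shown below. Only the roles of the original, modified, current, and working buffers and the save behavior come from the description above; the class and method names are assumptions.

        import numpy as np

        class ImageBuffers:
            def __init__(self, original):
                self.original = original.copy()          # unedited digital image
                self.modified = original.copy()          # edits saved in earlier sessions
                self.current = original.copy()           # image as edited this session
                self.working = np.zeros_like(original)   # overlay of in-progress edits

            def save(self):
                """Copy the current image to the modified buffer and clear the working buffer."""
                self.modified = self.current.copy()
                self.working[:] = 0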
  • Stylus: The "pen" that may be used to select menus, modify images, and carry out other commands in the program.
  • the stylus controls the cursor, just as a mouse pointing device does on a personal computer.
  • Tablet or Pad: The electronic notepad used in conjunction with a stylus. The pen must be held relatively close to the pad in order for the pen to communicate with the tablet. Unlike a mouse, the tablet follows an X/Y grid that matches the monitor, i.e., if the pen is positioned at the top left corner of the tablet, the cursor is displayed at the top left corner on the monitor.
  • Selecting: Also referred to as "tipping" or "pressing," selecting is performed by briefly pressing the tip of the pen onto the tablet. This selects a command or affects the drawing tool, depending on the current procedure being implemented.
  • FIGURE 3 illustrates an exemplary routine for implementing the imaging program 21 in accordance with the invention.
  • a system startup is performed wherein the computer looks for peripheral devices that are connected to the aesthetic imaging system, the memory is tested, and any other startup procedures needed to get the system up and running are implemented.
  • the imaging program displays a main menu, which provides access to the various features of the imaging program. Specifically, the main menu includes the following options: Storage, Camera, View, Shape, Draw, Print, Modules, and Exit. Those options that are pertinent to the invention are described in further detail below.
  • a test is made to determine if the Camera option has been selected from the main menu, indicating that the user wants to take a picture of a patient. If the Camera option has been selected, a routine to implement this command is called at block 66. A suitable subroutine for this task is illustrated in FIGURE 4. Upon return from the Camera routine, the program loops to block 62.
  • the selected images are copied to the appropriate buffers in the frame grabbing board, as described in FIGURE 2 and the accompanying text. For example, if the selected image is an original image that has not yet been edited, the original image will be copied to the original, modified, and current image buffers. If the selected image is an image that has previously been modified, the original image is copied to the original image buffer 50 and the modified image is copied to both the modified and current image buffers 52 and 54. It will be appreciated that the number of images that may be loaded at one time will be limited, in part, by the capacity of the frame grabbing board. The program then loops to block 62.
  • a view subroutine is invoked at block 83.
  • An appropriate routine for the View option is shown in FIGURE 13. The program then loops to block 62.
  • Block 84 determines if the Modules option has been selected. If the Modules option has been selected, a list of the modules available to the user is provided at block 85. Suitable routines for implementing the various modules are shown in FIGURES 22A-22D. If the Modules option is not invoked, the routine continues to a decision block 86.
  • a test is made to determine whether the Exit option has been selected from the main menu. If not, the program loops to block 62. Otherwise, any edits to the image are saved at block 88. At this point in the program, the image in the current image buffer 54 is saved to the modified image buffer 52, and the working buffer 56 is cleared. The program then terminates. I. Taking Pictures of a Patient Using an Inverse Image
  • FIGURE 4A illustrates an exemplary user interface that utilizes a video camera for acquiring a digital image of a patient. It is noted that a scanner or other input device may also be used to input an image into the aesthetic imaging system.
  • the solid blocks indicate user interface options presented to the user by the aesthetic imaging system and the dashed blocks represent system responses to the decisions made.
  • a patient is positioned in front of the video camera.
  • an inverse or "mirror" image of the patient's image will be displayed on the monitor, as indicated at block 101.
  • the inverse image is computed using data from the original image, and is representative of how patients see themselves day to day when looking into a mirror.
  • digital images are comprised of pixels or picture elements. It is known to those skilled in the art how digital image pixels may be manipulated to create an image that is the inverse of the original. Displaying an inverse image of a patient is advantageous when taking pre- and post-surgical pictures of patients because it allows patients to more easily center or otherwise position themselves on the monitor. Without the pixel manipulation, the input from a camera or other digital device may create confusion when positioning a patient. Under normal viewing, if a patient appears left of center in the monitor, they are in reality too far to the right. In this instance, a typical patient's reaction is to move even further to the right. With a mirror image displayed, the tendency of most patients is to naturally adjust to the desired position.
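  • As a concrete illustration, the inverse ("mirror") image can be obtained by reversing the pixel order of each row of the digital image. The array layout assumed below (rows, columns, color channels) is an assumption for illustration.

        import numpy as np

        def mirror_image(image):
            """Return the horizontal mirror of `image` for display to the patient."""
            return image[:, ::-1, :].copy()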
  • the displaying of a mirror image is particularly important when taking postsurgical pictures.
  • post-surgical pictures it is advantageous to have the patient in exactly the position they were in when taking the pre-surgical picture.
  • the aesthetic imaging system will preferably display a translucent inverse image of the pre-surgical picture on the monitor, and then overlay an inverse image of the picture currently being taken.
  • a translucent patient image 370 in this case a patient's profile, is displayed on the monitor.
  • the translucent image is the preoperative image taken prior to undergoing a surgical procedure.
  • a "live" video image 372 of the patient is also displayed under the translucent inverse image.
  • the aesthetic imaging system of the present invention may generate a side-by-side display of the two images to allow a patient to easily and accurately compare a presurgical picture with a post-surgical picture.
  • the left half of the monitor may display the presurgical image 370, and the right half of the monitor may display the post-surgical image 372. Allowing a patient to view the two images side-by-side in precisely the same orientation provides the patient with an accurate impression of the results achieved by surgery.
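  • The two display modes described above can be sketched as follows; the 0.5 opacity value and the split at the middle of the screen are assumptions for illustration.

        import numpy as np

        def overlay_translucent(pre_op, live, opacity=0.5):
            """Blend the stored pre-surgical image over the live frame so the patient
            can line up with the earlier pose."""
            return (opacity * pre_op + (1.0 - opacity) * live).astype(np.uint8)

        def side_by_side(pre_op, post_op):
            """Left half of the display shows the pre-surgical image, right half the
            post-surgical image taken in the same position and orientation."""
            h, w = pre_op.shape[:2]
            return np.hstack([pre_op[:, : w // 2], post_op[:, w // 2 :]])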
  • the image is focused and sized at block 102 by using the aesthetic imaging system to adjust the electronic controls on the video camera.
  • the tip of the pen is pressed anywhere on the tablet to freeze the digital image onto the monitor.
  • the user makes a determination if the image currently displayed on the monitor is acceptable. If the image is not acceptable, the routine loops to block 100. If the image is acceptable, an appropriate command is entered at block 108 and the image is stored in nonvolatile memory for future viewing.
  • a test is made to determine if an exit or other similar command has been entered by the user, i.e., if any more pictures are to be taken. If additional pictures are to be taken, the program loops to block 100. Otherwise, at block 111 a checksum value (described below) is calculated by the imaging program for each (original) image that has been stored. At block 112, the imaging program stores each image and its associated checksum value. The routine then returns to block 68 of FIGURE 3. II. Determining Authenticity Using Checksum Values
  • the checksum value is an addendum to an original stored image that is used to determine its authenticity when the image is subsequently displayed or printed.
  • Those skilled in the art will recognize that there are a number of methods of implementing such a checksum procedure. For example, one checksum computation is to add up the grayscale values for one of the colors, i.e., red, green, or blue, for each pixel comprising the image. Assuming a screen of 640 by 480 pixels and 256 colors per pixel, the checksum values would range from 0 to (640)(480)(255). When an image is recalled for display or to be printed, the checksum value is recalculated.
  • If the image has not been altered, the newly calculated checksum value will be equivalent to the addendum value, and the image is certified as being unaltered. If the image has been modified, the checksum values will differ, indicating the image has been modified. In this instance, an indication of the fact that the image has been modified may be displayed or printed with the image, if desired.
  • the authentication of an original image using a checksum value is ideal for situations in which physicians display before and after pictures of a patient who has undergone cosmetic surgery. In some instances, viewers are skeptical as to whether an "after" image is really representative of a patient's appearance after surgery. This is in reaction to beliefs that post-surgical images have been altered to make the patients look better. For example, there may be temptation to slightly fade wrinkles or otherwise edit features that the physician was attempting to address in a surgery. Using the described checksum feature, the post-surgery image can be verified as an authentic, unaltered image based upon the addendum value, and the veracity of the image is not questioned. This is beneficial to physicians when illustrating postsurgical results during lectures or in other teaching situations.
  • FIGURE 4B illustrates an exemplary routine for determining whether or not an original image, i.e., pre- or post-surgical image, has been altered in accordance with the invention.
  • This routine may be invoked whenever a pre- or post-surgical image is displayed on a monitor or printed on a page.
  • a test is made to determine whether the image to be displayed is portrayed as an "original image" that was not modified, e.g., a before or after picture. If the image is supposed to be an original image, a current checksum value for the image is calculated at block 115.
  • the calculated checksum value for the image is compared to the checksum that was stored when the image was acquired by the aesthetic imaging system, e.g., when the picture was taken.
  • the calculated checksum value is compared to the stored checksum value to see if they are equivalent. If the two values are not equivalent, at block 118 an icon is added to the image, e.g., displayed or printed along with the image, indicating that the image has been altered. If the checksum values are equivalent, an icon is added to the image at block 119 verifying the authenticity of the image. Once either icon has been added to the image, or if the image being printed or displayed is not an original image, the program terminates. As will be appreciated, the same checksum computational method must be used on each image, i.e., when an original image is acquired and when an image is to be displayed, or the comparison will be meaningless.
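  • A minimal sketch of the checksum scheme described above follows. It sums one color channel over every pixel when the original image is stored, then recomputes and compares the value when the image is displayed or printed; the function names are assumptions, and the same computation must be used at both steps.

        import numpy as np

        def checksum(image, channel=0):
            """Sum of one channel (e.g., red) over all pixels; for a 640 x 480 image
            with 8-bit channels the value lies between 0 and 640*480*255."""
            return int(image[..., channel].astype(np.int64).sum())

        def verify(image, stored_checksum):
            """True if the recomputed checksum matches the value stored with the
            original image, i.e., the image is certified as unaltered."""
            return checksum(image) == stored_checksum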
  • Although a checksum is contemplated in the exemplary routine for determining whether an original image has been altered, it will be appreciated that other techniques may be used to detect when an original digital image has been modified. For example, a flag or other marker may be uniquely associated with the original digital image. If the flag or other marker is absent from the digital image being displayed, the digital image is a copy that is presumed to have been changed. Alternatively, two versions of the image may be stored, including an unaltered original and a copy. The two images may be compared in order to determine whether modifications have been made to the copy.
  • the imaging program includes a unique combination draw (CD) feature that generally works with all of the drawing tools.
  • the CD feature allows a user to freehand draw, use curves to edit an image, as well as undo using either freehand or curves, without having to invoke a separate menu for each item.
  • the airbrush tool is selected as the drawing tool, although it is to be understood that the CD tool generally applies to all of the drawing tools.
  • the aesthetic imaging system prompts the user to choose a color from a color palette that appears on the monitor. A color is selected using the pen. After selecting a color, a side bar menu is displayed. The user may select from a number of options on the side bar, including brush size and intensity, or select away from the side bar menu to remove the menu from the screen. In the latter case, the system defaults are used.
  • the user presses on the tablet with the pen tip and continues pressure while moving or "rubbing" the pen on the tablet. At this point the chosen color is written onto the image at the location on the monitor that corresponds to the pen location. Pressing the side bar while repeating the motion will allow the user to selectively remove any edits to the image using a freehand motion.
  • To draw a curve, the user must set a first anchor point by selecting with the pen. Thereafter, as the user moves or "floats" the pen across the tablet, a green line will extend from the first anchor point to the current position of the pen. In a desired location, a second anchor point is set by selecting with the pen. Once both anchor points have been established, the green line appears on the monitor as a segment between the two anchor points.
  • the user floats the pen across the tablet.
  • the system will display a curved line bending and moving with the movement of the pen.
  • the pen movement (top to bottom, or side to side) determines the arc of the curve.
  • the image is edited in accordance with the selected draw tool and draw tool options.
  • the system displays the curved line repeating itself with the chosen color. Pressing the side bar while repeating the motion will allow selective removal of any edits to the image using a curve established between the anchor points.
  • FIGURE 5 illustrates an exemplary routine for implementing the CD feature of the imaging program.
  • the draw tool group includes: Airbrush, Tint, Texture, Blend, Undo, and Contour.
  • the imaging program begins a curve mode by establishing a first endpoint, as indicated at block 132, and drawing a line on the monitor from the endpoint to the current pen position, indicated at block 134.
  • a test is made to determine if the tip has been pressed. The imaging program at this point is looking for a second endpoint to be entered. If not, the program loops back to block 136 to await the input.
  • the user can go back to the beginning of the routine using the cancel button if the user has changed his or her mind, although this is not shown in the flow diagram. Specifically, the first cancel would place the routine at block 124, the second at either block 122 or 124, depending on the draw tool selected, and subsequent cancels would forward the routine to the main menu.
  • the second endpoint is established at the tip position, and a line segment is drawn on the monitor from the two endpoints, as indicated at block 138.
  • the imaging program is in "curve draw mode" as indicated at block 140, and the user can make any edits desired using a curvilinear line segment having the established endpoints.
  • the routine loops to block 124 while in this mode.
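  • The bending of the line segment in curve draw mode can be modeled, for example, as a quadratic Bezier curve whose middle control point follows the pen; the sketch below is an assumption for illustration, since the description above only requires that the segment stretch into a curve as the pen moves.

        def bent_curve(p0, p1, pen, samples=100):
            """Return (x, y) points along a curve from anchor p0 to anchor p1 that
            bows toward the current pen position; pixels at these positions are
            edited (or restored, in the undo modes)."""
            points = []
            for i in range(samples + 1):
                t = i / samples
                x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * pen[0] + t ** 2 * p1[0]
                y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * pen[1] + t ** 2 * p1[1]
                points.append((x, y))
            return points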
  • An original (unedited) image 130 that is representative of an image displayed on a monitor is illustrated in FIGURE 6.
  • a main menu 132 is displayed across the top of the image to allow a user to select editing, viewing and printing options, as discussed in FIGURE 3 and accompanying text.
  • the main menu 132 is from an embodiment of the aesthetic imaging system 20.
  • FIGURES 7A-7E and 8A-8E contrast exemplary steps taken to make identical edits to the image 130.
  • the steps required to modify the image using a relatively advanced prior art imaging system are illustrated in FIGURES 7A-7E. These steps are modeled after a prior art imaging system that has been distributed by Mirror Image Technology, Inc., a division of Virtual Eyes, Incorporated, the assignee of the present invention.
  • the steps required using an embodiment of the aesthetic imaging system 20 in accordance with the invention are illustrated in FIGURES 8A-8E.
  • each set of drawings illustrates examples of edits to a patient's nose, chin, and neck regions. The edits are for use in explaining the invention only and are merely exemplary in nature.
  • FIGURE 7A the curve option of an airbrush tool is used to modify the bridge of a patient's nose.
  • a resultant curve 134 is displayed having anchor points (endpoints) 136 and 138.
  • FIGURE 7B a freehand motion is used to eliminate a portion of the tip of the patient's nose.
  • FIGURE 7C an undo tool is used to replace a portion of the bridge of the patient's nose that was removed by the edits performed in FIGURE 7A.
  • FIGURE 7D the neck area of the patient has been edited using the curve option of an airbrush tool.
  • the resultant curve 140 has anchor points 142 and 144.
  • FIGURE 7E an undo tool is used to add back a portion of the neck area that was removed in FIGURE 7D.
  • FIGURE 7A Step S1 move pen to draw on main menu
  • Step S2 select draw
  • Step S3 move pen to airbrush; Step S4 select airbrush; Step S5 move pen to curve; Step S6 select curve; Step S7 select an airbrush color; Step S8 move pen to the first anchor point position;
  • Step S9 select at the position to establish the anchor point 136;
  • Step S10 move pen to the second anchor point position
  • Step S11 select at the position to establish the anchor point 138;
  • Step S12 move pen to bend the curve 134 into the bridge of the nose
  • Step S13 press the side button on the pen to exit to the main menu
  • Step S14 move pen to draw; Step S15 select draw; Step S16 move pen to airbrush; Step S17 select airbrush; Step S18 move pen to freehand; Step S19 select freehand; Step S20 select a color for the airbrush tool; Step S21 use a rubbing motion with the pen to make the freehand edit;
  • Step S22 press the side button on the pen to exit to the main menu
  • Step S23 move pen to draw; Step S24 select draw; Step S25 move pen to undo; Step S26 select undo; Step S27 move pen to curve; Step S28 select curve; Step S29 use a rubbing motion with the pen to undo the previous edit;
  • FIGURE 7D Step S30 press the side button on the pen to exit to the main menu;
  • Step S31 move pen to draw; Step S32 select draw; Step S33 move pen to airbrush; Step S34 select airbrush; Step S35 move pen to curve; Step S36 select curve; Step S37 select color to be used by the airbrush tool;
  • Step S38 move pen to the first anchor point position
  • Step S39 select at the position to establish the anchor point 142;
  • Step S40 move pen to the second anchor point position
  • Step S41 select at the position to establish the anchor point 144;
  • Step S42 move pen to bend the curve 140 toward the neck, thereby making the edit shown.
  • Step S43 press the side button on the pen to exit to the main menu
  • Step S44 move pen to draw; Step S45 select draw; Step S46 move pen to undo; Step S47 select undo; Step S48 move pen to freehand; Step S49 select freehand; Step S50 use a rubbing motion with the pen to undo a portion of the previous edit; and
  • Step S51 press the side button to exit back to the main menu.
  • the steps required to perform the same edits using the aesthetic imaging system 20 are now described. With reference to FIGURE 8A-8E, the steps required to perform the edits include:
  • FIGURE 8A Step N1 move pen to draw
  • Step N2 select draw
  • Step N3 move pen to airbrush
  • Step N4 select airbrush
  • Step N5 select any color for the airbrush tool
  • Step N6 move pen to the first anchor point position
  • Step N7 select at the position to establish the anchor point 136;
  • Step N8 move pen to the second anchor point position
  • Step N9 select at the position to establish the anchor point 138;
  • Step N10 move pen to bend the curve 134 into the bridge of the nose
  • FIGURE 8B Step N11 press the tip of the pen against the tablet and use a rubbing motion to make the freehand edit.
  • FIGURE 8C Step N12 press the tip of the pen and the side button simultaneously, and maintain pressure while rubbing in the area to be undone;
  • FIGURE 8D Step N11 move pen to the first anchor point position
  • Step N12 select at the position to establish the anchor point 142
  • Step N13 move pen to the second anchor point position
  • Step N14 select at the position to establish the anchor point 144; Step N15 move pen to bend the curve 140 toward the neck, thereby making the edit shown.
  • FIGURE 8E Step N16 press tip of pen and the side button simultaneously, while rubbing to undo a portion of the previous edit;
  • Step N17 release pressure on the pen and side button, and press the side button to return to the main menu.
  • FIGURE 9A illustrates a user interface for a contour tool for use in editing images in accordance with the invention.
  • the contour tool is invoked from the main menu of the imaging program, as indicated at block 200.
  • the contour tool has similarities to a blend tool, but utilizes pixel manipulation to pull pixels from one area to another. For example, the tool works great for chin and lip augmentations.
  • a side bar menu is displayed by the aesthetic imaging system, as indicated at block 201.
  • the point size for the tool may be selected from the side bar menu.
  • an opacity percentage is entered by the user. If the opacity is at 100 percent, any areas affected by the edit are completely covered by the replacement pixels. As the percentage is reduced, more and more of the original pixels will remain, creating a blending of the replacement and prior pixels.
  • Anchor points are selected at block 202. The selection is accomplished as described in blocks 128 and 130 of FIGURE 5. As also described, once the anchor points are selected, a line is displayed between the points by the aesthetic imaging system, as indicated at block 205.
  • a curve having the anchor points as endpoints is positioned along a part of the body, e.g., lips or chin, to be edited.
  • the body part is then edited by dragging the curve in the direction in which a body part is to be augmented. Edits made in block 206 are saved at block 208 by pressing and then releasing the tip of the pen. The program then terminates.
  • FIGURES 9B-9C further describe the operation of the contour tool, by illustrating how pixels are replicated from one area of an image to another.
  • the image areas described are for exemplary purposes only, and are simplified for clarity in this discussion.
  • an area 209 of an image is comprised of red 218R, green 218G, blue 218B, and yellow 218Y areas separated by boundary lines 210, 212, 214, and 216. It is assumed that a pair of anchor points 218 and 219 have been established by a user along the boundary 212, wherein the aesthetic imaging system will display a line segment 220 between the two anchor points.
  • FIGURE 9C it is assumed that the user has moved the midsection of the line segment 220 to the right.
  • the blue area 218B has been stretched into the yellow area 218Y.
  • This area is bounded by the line segment 220 (now curved) and the boundary line 214.
  • the green area 218G has been stretched into the blue area 218B.
  • This area is bounded by a curved line segment 221 and the boundary line 212.
  • the red area 218R has expanded into the green area 218G; this area is bounded by a curved line segment 222 and the boundary line 210.
  • the newly defined areas will be comprised of the color being expanded.
  • the area bounded by segments 220 and 221 will be blue; the area bounded by segments 221 and 222 will be green; and the area bounded by segment 222 to the left edge of the diagram will be red.
  • If the opacity level is less than 100 percent, pixels from the underlying image areas that are being written over by the newly defined areas will be blended into the newly defined areas.
  • the area bounded by segments 220 and 221 will still be primarily blue, but the portion of this area bounded by the segment 220 and the boundary line 214 may have a yellow tinge; and the portion of this area bounded by the boundary line 212 and segment 221 may have a green tinge.
  • As the opacity percentage is dropped, the effects on these areas will be even greater. While somewhat simplistic, the illustration in FIGURES 9B and 9C describes the function of the contour tool, and the effect on more complicated pixel patterns will be appreciated by those skilled in the art.
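  • The opacity behavior can be sketched as a simple per-pixel blend of the replacement pixels with the prior pixels inside the edited region; the mask-based formulation below is an assumption for illustration.

        import numpy as np

        def apply_with_opacity(prior, replacement, mask, opacity_pct):
            """Blend `replacement` over `prior` inside the boolean `mask`. At 100
            percent the replacement fully covers; at lower values the prior pixels
            increasingly show through."""
            alpha = opacity_pct / 100.0
            out = prior.astype(np.float64)
            out[mask] = alpha * replacement[mask] + (1.0 - alpha) * prior[mask]
            return out.astype(np.uint8)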
  • FIGURES 9D-9G illustrate edits to a patient's lips 224 using the contour tool.
  • FIGURE 9D is a "before" picture without any modifications.
  • To edit the right half of the patient's upper lip, the user first selects the desired end points surrounding the feature to be modified.
  • a first anchor point 374 is designated near the middle of the upper lip, and a second anchor point 376 is designated at the right outer margin of the upper lip.
  • a line 378 is displayed between the points by the aesthetic imaging system.
  • By floating the pen over the tablet in a direction generally indicated by arrow 380, the line is bent to form a curve 382 that approximates the contour of the feature being edited.
  • the shape of the curve is set by tipping the pen.
  • the user selects a point along the portion of the curve by tipping the pen at the desired location.
  • By floating the pen above the tablet, the user can stretch the selected feature in the manner described above.
  • the user floats the pen in a direction generally indicated by arrow 384 to enlarge the patient's right upper lip. As the upper lip is expanded, the pixels forming the upper lip are replicated to expand the portion of the lip, while the pixels outside of the upper lip are deleted.
  • FIGURE 9E the left side of the person's upper lip has been edited using the contour tool.
  • FIGURE 9F both the left and right sides of the person's upper lip are shown modified using the contour tool.
  • FIGURE 9G the lower lip has been edited.
  • these edits are accomplished very quickly, in part because the augmented area will automatically match the area around it.
  • the features of the contour tool make it much easier to perform augmentations than is currently available using prior art imaging tools.
  • FIGURES 10-12B are directed to the shape tool features of the aesthetic imaging system. Similar to the draw tool, the imaging program has a combination feature that is generally available with any of the shape tools.
  • the black dots are brush size options that allow a user to choose the thickness of a shaping tool.
  • the zoom option allows a user to look at an image in greater detail.
  • the aesthetic imaging system displays a square overlaid on the image. The square can be positioned by the user with the pen. After positioning the square, that area of the image will be magnified when the user selects with the pen. Canceling with the pen returns the display to normal viewing mode.
  • the undo option allows a user to undo edits to an image.
  • the compare option allows a user to transition between before and after images.
  • the split image option allows a user to view before and after images side by side.
  • the inverse option creates the mirror image of all or only a portion of an image that has been designated by the user.
  • the blend tool will blend edits with the surrounding area. Many of the options shown are also implemented as separate tools under View in the main menu. These are described in greater detail below. It is noted that the side bar menus available for the drawing tools are similar to the shape tool side bar menus shown. They do, however, typically include an option wherein the user may choose the intensity or opacity of a color used in conjunction with a draw tool.
  • the user is prompted to designate an area of the image to be edited. In a preferred embodiment, this is accomplished by pressing down on the pen and freehand drawing an area, e.g., a circle, that is to be subject to the edit.
  • the imaging program contains a unique feature wherein if a partial area is designated and the pen subsequently released, the drawing area will automatically be formed into a contiguous area by the imaging program.
  • a test is made to determine if an area has been designated by the user. If not, the routine loops back to block 234 and awaits a designation.
  • any edits to the designated area of the image are performed in accordance with the selected shape tool, as indicated at block 236.
  • Two exemplary shape tools for editing an image are illustrated in FIGURES 12A and 12B.
  • a test is made to determine if editing of the designated area is complete. In one embodiment, this involves testing for when the user "selects" with the pen anywhere on the tablet. The routine remains at block 238 until editing is complete (or the user exits using the side button).
  • an autoblend box is displayed in the vicinity of the edited area, as indicated at block 240.
  • a test is made to determine if the tip of the pen has been pressed against the tablet. If not, the routine loops, testing for this occurrence (or an exit command from the user). After the pen tip has been pressed, the imaging program calculates the location of the pen relative to the autoblend box.
  • FIGURE 11 illustrates an example of an autoblend box 250 that may be drawn by the aesthetic imaging system in accordance with the invention.
  • the autoblend box 250 may be used to: (1) move an edited area, (2) paste the edited area onto the image while blending the edge created between the edited area and the rest of the image, and (3) paste the edited image without blending.
  • the autoblend box 250 uses the conventions set forth below, those skilled in the art will appreciate that other conventions may be used without departing from the scope of the invention.
  • To move an edited area, the user must press down on the pen anywhere inside the autoblend box, except at the three-, six-, nine-, and twelve-o'clock positions of the box.
  • These o'clock positions are designated with reference numeral 254; the "move" area is designated with reference numeral 252.
  • a pressing of the tip within any of these areas results in the edited area being pasted without blending, as indicated at block 264.
  • a selection in a location in the autoblend box apart from the three-, six-, nine-, and twelve-o'clock areas will allow the image to be moved. In this case, the edited area will track movement of the pen as long as the tip remains pressed. After a move is completed, the routine loops to block 242.
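  • The convention described above amounts to a simple hit test on the pen position; the sketch below is illustrative, and the box geometry and the size of the o'clock "hot spots" are assumptions.

        def autoblend_action(pen_x, pen_y, box, hot_spot=8):
            """`box` is (left, top, right, bottom); return the action implied by a
            pen press at (pen_x, pen_y)."""
            left, top, right, bottom = box
            if not (left <= pen_x <= right and top <= pen_y <= bottom):
                return "paste_with_blend"         # selecting outside the box
            cx, cy = (left + right) / 2, (top + bottom) / 2
            clock_points = [(cx, top), (right, cy), (cx, bottom), (left, cy)]  # 12, 3, 6, 9 o'clock
            for px, py in clock_points:
                if abs(pen_x - px) <= hot_spot and abs(pen_y - py) <= hot_spot:
                    return "paste_without_blend"  # the o'clock positions (areas 254)
            return "move"                         # elsewhere inside the box (area 252)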
  • a test is made at block 268 to determine if the user wishes to exit the shape routine, e.g., by pressing the side button. If not, the routine loops to block 232 where a new area of the image may be considered. Otherwise, the routine returns to block 82 of FIGURE 3.
  • FIGURES 12 A and 12B illustrate two exemplary shape tools that are available when using the aesthetic imaging system.
  • a cutout tool is unique in that a user can select an area of the image to be cut out, thereby creating a "hole" in the image, and an identical image underneath the cutout image can then be moved in all directions as it is viewed through the hole.
  • the cutout feature is especially useful for profile views including chin augmentation, brow lifts, and maxillary and mandibular movement; and frontal views, including otoplasty, brow lift, lip augmentation, nasal base narrowing, and maxillary and mandibular movement.
  • the current image is copied to a working buffer.
  • As is discussed in FIGURE 10, when the cutout subroutine is called the user has defined an area of the image to be edited.
  • a boundary is created around the area designated in block 234 of FIGURE 10.
  • the working area is displayed inside the boundary, and the current image displayed outside the boundary. In this manner, the image in the working buffer can be moved relative to the image in the current buffer until the desired alignment has been achieved.
  • the program then returns to the routine of FIGURE 10.
  • the edit may be frozen by selecting with the pen. Thereafter, the autoblend box is displayed. Selecting within the area 252 of the autoblend box allows the designated area to be moved. Selecting anywhere outside the autoblend box will make the edit permanent, with automatic blending. Selecting within the box at the three-, six-, nine- or twelve-o'clock positions (areas 254) will make the edit permanent, without blending.
  • a rotate tool in accordance with the invention is particularly useful when editing profile views including the nasal tip, mandible, maxilla and brow areas; and frontal views including nares, brows, and the corners of the mouth.
  • the rotate routine when the rotate routine is called, the user has defined an area of the image to be edited.
  • the area designated in block 234 of FIGURE 10 is shown in phantom.
  • the imaging program waits for the user to enter an axis of rotation. An axis is then entered by the user, as indicated at block 279.
  • a display line emanating from the axis point is displayed on the monitor, as indicated at block 280. Also, the number of degrees of rotation is displayed. The position of the pen dictates the degree of rotation. As the pen is moved away from the axis point, the display line will lengthen, providing the user greater control of the rotation of the designated area.
  • the system waits for the user to enter a desired degree of rotation. The degree of rotation is entered by the user by selecting with the pen, as indicated at block 283. Once the degree of rotation is entered, the designated area is redrawn onto the current image, as indicated at block 284. After the redraw, the routine returns to FIGURE 10. Upon returning, the autoblend box is displayed.
  • Selecting within the autoblend box allows the designated area to be moved. Selecting anywhere outside the autoblend box will make the edit permanent, with automatic blending. Selecting within the box at the three-, six-, nine-, and twelve-o'clock positions will make the edit permanent without blending. While prior art imaging programs have a rotate feature, they do not allow a user to select the axis of rotation. The ability to select the axis is valuable in the procedures listed above.
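  • Rotation of the designated area about the user-selected axis can be sketched as a standard rotation of each pixel position about that point; the coordinate convention below is an assumption, and only the user-chosen axis and degree of rotation come from the description above.

        import math

        def rotate_points(points, axis, degrees):
            """Rotate (x, y) pixel positions about the (x, y) axis point by the
            entered number of degrees; the rotated positions are then redrawn
            onto the current image."""
            theta = math.radians(degrees)
            cos_t, sin_t = math.cos(theta), math.sin(theta)
            ax, ay = axis
            rotated = []
            for x, y in points:
                dx, dy = x - ax, y - ay
                rotated.append((ax + dx * cos_t - dy * sin_t,
                                ay + dx * sin_t + dy * cos_t))
            return rotated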
  • FIGURES 18A-18C disclose the capabilities of the warp shape tool in the aesthetic imaging system.
  • the warp shape tool is a powerful tool that allows users to easily edit a patient's features. Similar to the contour tool, the warp tool allows a user to select and manipulate a feature of the patient's image, with the aesthetic imaging system automatically redrawing the area surrounding the manipulated feature as the manipulation is being performed.
  • the user first defines an image to be manipulated by the warp tool by encircling the portion of the image to be edited as shown by dotted line 386.
  • a user may tip the pen to designate a stretch point within the selected area.
  • a stretch point 388 has been designated near the top portion of the patient's right upper lip.
  • the user may float the pen in the direction in which they would like to stretch the image.
  • the portion of the image that is located at the stretch point is pulled in the direction that the user floats the pen, with the area surrounding the stretch point either expanded or compressed.
  • the area in the direction that the user floats the pen is compressed, and the area away from the direction that the user floats the pen is expanded.
  • the amount of expansion or compression is dictated by the distance of the area away from the stretch point, in a manner discussed in greater detail below.
  • the image is manipulated in real time, so that the user is presented with a seamless and continuous stretching or movement of the selected feature.
  • The technique used to implement the warp tool is portrayed in FIGURES 19A-19E.
  • a user first defines a manipulation area 420 to be edited by circling the desired feature of the patient.
  • the aesthetic imaging system creates a pixel map of a warping area 422 that completely encompasses the manipulation area.
  • the warping area is preferably rectangular and approximately 33% larger in area than the manipulation area. It will be appreciated, however, that the warping area may be differently shaped or sized depending on the particular application and available system hardware.
  • the user selects a stretch point 424 within the manipulation area by tipping the pen at a desired location within the area.
  • the aesthetic imaging system maps four rectangles 426a, 426b, 426c, and 426d in the warping area. One corner of each rectangle is defined by the stretch point, and the diagonally opposing corner is defined by a corner of the warping area.
  • Perpendicular lines 427 drawn through the stretch point intersect the warping area boundary at points 428a, 428b, 428c, and 428d.
  • the user may float their pen to manipulate the selected feature, for example in the general direction indicated by arrow 430.
  • the rectangles in the warping area are distorted.
  • the stretch point has been moved upwards and to the left in the warping area, to an intermediate location 432.
  • the aesthetic imaging system determines the shape of four quadrilaterals 434a, 434b, 434c, and 434d, with sides that extend from the intermediate location 432 of the stretch point to original points 428a, 428b, 428c, and 428d.
  • the pixels in the original rectangles 426a, 426b, 426c, and 426d are then mapped into the quadrilaterals.
  • the user tips the pen to fix the stretch point in a desired location.
  • automatic blending around the outer margins of the manipulation area is performed by the aesthetic imaging system.
  • the aesthetic imaging system also remaps the warping area, creating four new rectangles 436a, 436b, 436c, and 436d based on the ending location 438 of the stretch point.
  • a second stretch point may then be selected within the manipulation area and the process repeated, with the second warping transforming rectangles 436a, 436b, 436c, and 436d that resulted from the first warp.
  • the user may then select a third or additional warp point to further manipulate the image. Each manipulation is performed without the user having to redefine the manipulation area.
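  • A simplified sketch of this remapping is shown below. It treats the four regions defined by the stretch point as axis-aligned rectangles that are rescaled when the stretch point moves; the description above uses general quadrilaterals, so this is only an approximation for illustration.

        import numpy as np

        def warp(area, old_pt, new_pt):
            """Remap pixels of the warping `area` so the pixel at `old_pt` appears at
            `new_pt`; regions toward the movement are compressed and regions away
            from it are expanded. Assumes the stretch point stays strictly inside."""
            h, w = area.shape[:2]
            ox, oy = old_pt
            nx, ny = new_pt
            assert 0 < nx < w - 1 and 0 < ny < h - 1
            out = np.empty_like(area)
            for y in range(h):
                sy = y * oy / ny if y < ny else oy + (y - ny) * (h - 1 - oy) / (h - 1 - ny)
                for x in range(w):
                    sx = x * ox / nx if x < nx else ox + (x - nx) * (w - 1 - ox) / (w - 1 - nx)
                    out[y, x] = area[int(round(sy)), int(round(sx))]
            return out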
  • The flexibility and power of allowing multiple warps on a selected region is demonstrated in FIGURES 18A-18C.
  • a user designates a stretch point 388 and floats the pen in a direction generally indicated by arrow 390. Floating the pen upwards and outwards generally causes the upper right portion of the lip to be expanded in that direction.
  • the resulting manipulation is shown in FIGURE 18B, wherein the right lip has been made fuller. The area surrounding the lip is expanded or compressed to ensure that there are no discontinuities between the edited lip and the surrounding face.
  • the user selects a second stretch point within the manipulation area to further stretch the selected feature.
  • a stretch point 392 may be positioned on the upper portion of the left lip, and the pen floated in a direction generally designated by the arrow 394. As shown in FIGURE 18C, this generally causes the upper left portion of the lip to be made fuller to match the upper right portion of the lip. Again, during the warping the aesthetic imaging system automatically expands or contracts the surrounding unmanipulated area to ensure that there are no discontinuities between the upper left lip and the unedited portion of the face.
  • the warp tool with multiple stretch points is a very powerful tool as it allows the user to quickly manipulate an image with a minimum use of drawing tools or piecemeal editing. Because the warp tool performs the manipulation in real time, the edits are accomplished very quickly and fluidly. A user may therefore generate a desired image in a minimal amount of time.
  • the warping area can alternatively be automatically defined by the aesthetic imaging system.
  • the user simply selects a stretch point 424 and floats the pen in the desired warping direction.
  • the imaging system automatically defines a rectangular warping area 422 of a predetermined size that surrounds the stretch point. As the user moves the stretch point, the area within the warping area is warped.
  • the size of the warping area 422 may also be dynamically adjusted by the imaging system.
  • FIGURE 13 illustrates an exemplary routine for implementing the view features of the imaging program.
  • the solid blocks indicate user interface options presented to the user by the aesthetic imaging system and the dashed blocks represent system responses to the decisions made.
  • the view group includes: Compare, Prioritize, Split Image, Mirror Image, and Restore to Original, as well as other options including Zoom and Emboss.
  • a test is made to determine if the Compare option has been selected.
  • the Compare option allows a modified image to be compared to the original image so that a viewer can more readily see the changes. Specifically, as the pen is floated from the top to the bottom of the tablet, the user will see one image transition or "morph" into the other.
  • the morphing is accomplished by overlaying the original image with the modified image, and varying the opacity of the modified image.
  • when the modified image is opaque, only the modified image may be viewed by a user.
  • when the modified image is completely transparent, only the original image may be viewed by a user. In between these two extremes, varying amounts of the edits made to the image will be apparent to the user.
  • the feedback to the patient as the original image morphs into the modified image is much more powerful than a side-by-side comparison of the two images.
  • a user can press the tip of the pen to freeze an image displayed on the monitor at a point anywhere from zero to 100% of the transition from the original to the modified image. Freezing an image at a partial transition is extremely helpful where edits have been performed on an image that are not realistically achievable in surgery, but an achievable result lies somewhere between the original and the modified image. For example, it is easy to edit a blemish on a face so that the area resembles the surrounding skin and thus becomes invisible. However, the total removal of the blemish may not be realistic. In this case, a transition of that area toward the original image will slowly "fade in" the blemish. A physician may then freeze the fading process at a desired point to provide the patient with a realistic image of what surgery can achieve.
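In software terms the morph is a simple cross-fade whose weight follows the pen. A minimal sketch, assuming NumPy images and a pen position normalized against the tablet height (the function name and mapping are illustrative, not the patented implementation):

```python
import numpy as np

def compare_blend(original, modified, pen_y, tablet_height):
    """Cross-fade between the original and modified images.

    original, modified -- H x W x 3 uint8 arrays of the same size
    pen_y              -- pen height above the tablet bottom
    tablet_height      -- tablet extent; pen at the top shows the modified image
    """
    opacity = np.clip(pen_y / tablet_height, 0.0, 1.0)   # 0%..100% transition
    blend = (1.0 - opacity) * original + opacity * modified
    return blend.astype(np.uint8)
```

Freezing the image at a partial transition corresponds to saving the blend produced at the current opacity value.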
  • FIGURE 14A illustrates a modified profile image 302 of a patient that includes a rhinoplasty procedure (nose) 304, a chin augmentation procedure 306 and a submental lipectomy procedure (neck) 308.
  • the boundaries that have been edited are illustrated by dashed lines 304a, 306a, and 308a, corresponding to the patient's original nose profile, chin, and neck, respectively.
  • the user can designate one area on the modified image, and illustrate transitions between the original and modified images at that area only by floating the pen. Any areas not selected will continue to be displayed as the original image.
  • a test is made at block 309 to determine if the entire image is to be compared or only certain portions of the image, i.e., using the Prioritize option. If less than the entire image is to be compared, the user is prompted to enter the area or areas that are to be compared at block 310. A user may then define one or more "priority areas" by freehand circling the desired area.
  • a first priority area 312 has been defined that corresponds generally to the nose. Given this selection, the nose area only will transition from original to modified as the pen is moved, with the rest of the image being displayed unedited. Thus, the modifications to the chin and neck no longer are shown.
  • a second priority area 314 has been defined that corresponds generally to the chin. The first priority area 312 has been kept. Given these selections, the nose and chin areas only will transition from original to modified as the pen is moved, with the rest of the image being displayed unedited. Thus, the modifications to the neck are not illustrated.
  • a third priority area 316 has been defined that corresponds generally to the neck, along with the former designations.
  • the priority areas may also be automatically defined by the aesthetic imaging system.
  • a comparison may be made between an original image stored in a buffer and the edited image that has been modified by the user. Any areas containing differences between the original and the edited image may be highlighted by the aesthetic imaging system, and a priority area automatically defined for each area containing differences. Whether a priority area is defined may also depend on the number of differences between the original image and the edited image.
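One way to realize the automatic definition described above is to difference the buffered original against the edited image and keep each contiguous changed region that contains enough differing pixels. The sketch below assumes NumPy/SciPy image arrays and illustrative thresholds; the function name and parameters are assumptions, not the patent's implementation.

```python
import numpy as np
from scipy import ndimage

def auto_priority_areas(original, edited, diff_threshold=12, min_pixels=200):
    """Return bounding boxes (top, bottom, left, right) of regions where the
    edited image differs sufficiently from the buffered original."""
    diff = np.abs(edited.astype(int) - original.astype(int)).sum(axis=2)
    changed = diff > diff_threshold                  # per-pixel difference mask
    labels, count = ndimage.label(changed)           # group contiguous changes
    areas = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if (labels[sl] == i).sum() >= min_pixels:    # enough differences to matter
            areas.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
    return areas
```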
  • the user is prompted to enter a textual identifier for the defined area.
  • after circling the area in FIGURE 14B and defining a first priority area 312, the user would be prompted to enter a textual identifier corresponding to the first priority area.
  • the user may assign a descriptive identifier related to that priority area, such as "nose", or the user may assign a non-related textual identifier such as "area1".
  • Textual identifiers assigned by the user in this manner are displayed to the user in a submenu of the Prioritize option. The user may then select from the submenu of the Prioritize option those areas that they wish to display from the list presented to them.
  • the prioritize submenu corresponding to FIGURE 14D may present the user with a choice of "nose," "chin," and "neck." By selecting the nose and neck from the prioritize submenu, the user could simultaneously display only the effects of the nose and neck procedures. After making such a comparison, the user could deselect neck from the prioritize submenu, and instead select the chin and nose. This would allow the user to display only the effects of the chin and nose procedures using the Compare option. It will be appreciated that assigning textual identifiers to each of the defined priority areas provides greater flexibility to a user, since the user does not have to redefine each of the priority areas that are to be displayed each time. A user may therefore quickly cycle through various permutations of the procedures that have been edited for patient display.
  • a test is made to determine if the user wishes to save a transitional or morphed view of an image. If a transitional view is to be saved, the user may establish the percentage transition, i.e., anywhere from zero to 100 percent transition (zero percent being the original image and 100 percent being the edited image), by floating the pen up or down above the tablet to establish the view, and then pressing the tip of the pen against the tablet to freeze the transitional image, as indicated in block 322. If the tip is pressed again, the frozen image is saved.
  • the save options are available with or without the priority areas in effect. After the save has been accomplished, or if the user did not wish to save a transitional view of an image, the Compare option is complete and the routine branches to block 326.
  • the Compare option allows a user to compare a modified image with any edits made to the modified image during the current editing session, i.e., before the changes are permanently saved to the modified image.
  • this embodiment of the Compare option contrasts the image in the current image buffer 54 with the image in the modified image buffer 52.
  • this embodiment of the Compare option may also be used in conjunction with the Prioritize option to allow the user to select priority areas for comparison. In this case, the priority areas transition from the modified to the current image, while only the modified image is displayed in the other (nonselected) areas.
  • a test is made to determine whether the user wishes to view a split image.
  • the Split Image option is used on a frontal picture only, and allows a patient to see his or her asymmetries. If a split image view is desired, the user is prompted to select an image, e.g., original or modified, at block 330.
  • a vertical centerline is displayed on top of the selected image. The user is then prompted to position the centerline at the location desired, as indicated at block 334. Typically, the centerline will be positioned to bisect the image into equal halves, using the nose and the eyes as reference points.
  • the aesthetic imaging system displays two images, one showing the left halves pieced together and the other the right halves pieced together. Specifically, the aesthetic imaging system will produce an inverse image of the left (right) half and then add it to the left (right) half.
  • FIGURES 15A-15C illustrate the resultant images that are displayed when the Split Image option is used.
  • in FIGURE 15A, a frontal image 350 of a patient is shown, including a centerline 352 that has been positioned at the center of the patient by a user.
  • FIGURE 15B is an illustration of the left halves of the image after being pieced together by the aesthetic imaging system, as indicated by reference numeral 354.
  • FIGURE 15C is an illustration of the right halves of the image, as indicated by reference numeral 356.
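The mirroring step can be sketched compactly (assuming a NumPy frontal image and an integer centerline column; the function name is illustrative):

```python
import numpy as np

def split_images(frontal, centerline_x):
    """Build the two symmetric composites of the Split Image view:
    left half plus its mirror, and right half plus its mirror."""
    left = frontal[:, :centerline_x]
    right = frontal[:, centerline_x:]
    # Left composite: left half with its inverse (mirror) image appended
    left_composite = np.concatenate([left, left[:, ::-1]], axis=1)
    # Right composite: mirrored right half followed by the right half
    right_composite = np.concatenate([right[:, ::-1], right], axis=1)
    return left_composite, right_composite
```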
  • FIGURE 20 is a representative image 410 of a patient with a menu bar 412 located across the top of the screen 414.
  • the menu bar is preferably translucent to allow the user to view the patient's image through the menu bar.
  • the text 416 and the line 418 indicative of the menu bar are preferably presented in a contrasting, yet muted, color to allow the user to read the commands.
  • the text and the line outlining the menu bar may be presented in an off-white. While editing the image with a patient present, the patient is therefore not overly distracted when the menu bar periodically appears at the top of the screen.
  • the menu bar 412 may be moved by a user to different locations on the screen 414.
  • the menu bar may be moved by a user to the bottom of the screen. Particularly when editing the face of a patient on the screen, the majority of the editing will take place on the upper two-thirds of the screen. Locating the menu bar on the bottom of the screen therefore positions the menu bar away from the area on which the patient should remain focused.
  • FIGURE 21 is a flow chart of an exemplary routine for implementing a zoom feature in the aesthetic imaging program.
  • the zoom feature allows a user to increase the scale of the image to better view a selected area and to improve the ability of the user to edit fine details in the image.
  • a zoom point is selected by a user.
  • the zoom point identifies the center of the image to be expanded under the control of the user.
  • the picture is adjusted to position the zoom point at the center of the monitor. Centering the picture ensures that as the image is enlarged, the portion of the image surrounding the zoom point will be displayed.
  • the user is allowed to input a desired magnification factor.
  • the magnification factor is selected by floating the pen from the bottom (minimum magnification) to the top (maximum magnification) of the tablet.
  • the image on the monitor is correspondingly magnified and redisplayed at a block 406.
  • a test is made to see if the user has frozen the image by pressing or tipping the pen.
  • a user can manipulate the image using the array of drawing tools described above. It will be appreciated that for very fine work, such as removing small wrinkles surrounding a patient's eyes, the ability to magnify the image increases the quality of the editing that may be achieved.
  • a smoothing function may be incorporated in the zoom feature to ensure that as the image is magnified it does not become "pixelly" or grainy.
  • the smoothing function may be implemented in software.
  • the smoothing function is implemented in hardware, such as a smoothing feature provided in the Targa 2000 board described above and incorporated in the aesthetic imaging system.
  • feedback may be provided to the user in the form of a numerical display on the image to indicate the approximate magnification as the user floats the pen from the bottom to the top of the tablet.
  • Other means can also be implemented to allow the user to select the desired magnification, including a pull-down menu or numerical entry.
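The zoom interaction can be approximated as follows: the floating pen sets a magnification factor, a crop of the corresponding size is taken around the centered zoom point, and the crop is scaled back up to the screen. The sketch below uses nearest-neighbour scaling with NumPy; the names, the 8x maximum magnification, and the pen mapping are assumptions, and a smoothing pass (as discussed above) would normally be applied to the result.

```python
import numpy as np

def zoom_view(image, zoom_point, pen_y, tablet_height, max_mag=8.0):
    """Magnified view centred on the selected zoom point.

    pen_y / tablet_height maps the floating pen to a magnification between
    1x (bottom of the tablet) and max_mag (top of the tablet)."""
    h, w = image.shape[:2]
    mag = 1.0 + (max_mag - 1.0) * np.clip(pen_y / tablet_height, 0.0, 1.0)

    cy, cx = zoom_point
    crop_h, crop_w = int(h / mag), int(w / mag)
    top = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    crop = image[top:top + crop_h, left:left + crop_w]

    # Nearest-neighbour enlargement back to screen size; a smoothing
    # (interpolating) pass would follow to avoid a grainy result.
    rows = np.arange(h) * crop_h // h
    cols = np.arange(w) * crop_w // w
    return crop[rows][:, cols], mag
```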
  • the emboss viewing option is very helpful in allowing a user to discern wrinkles or other skin imperfections in a displayed image.
  • by selecting the emboss option, the user causes an image of the patient to be displayed in a gray scale.
  • the emboss option displays an image that is similar to an etching made of a three-dimensional raised surface. A two dimensional image is portrayed, with the depth of the raised surface indicated by a darker shade of gray.
  • the emboss option removes any deceptive information conveyed by the color or shading of the skin of the patient and allows any raised or depressed areas to be clearly highlighted. When viewing wrinkles or other imperfections on a patient, the emboss option therefore clearly identifies the raised or depressed features over the smooth skin of the patient.
  • the emboss option is implemented using a function provided in the Targa 2000 board described above and incorporated in the aesthetic imaging system.
  • the user may manipulate the offset of the emboss by floating the pen over the tablet.
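The patent relies on a Targa 2000 board function for embossing; purely as an illustration, a comparable gray-scale relief can be produced in software by subtracting an offset copy of the image from itself, with the offset driven by the pen position. The sketch below is an assumption, not the board's implementation.

```python
import numpy as np

def emboss(image, offset=(1, 1)):
    """Software approximation of an emboss view: flat skin maps to mid gray,
    while raised or depressed features show up as lighter or darker gray."""
    gray = image.astype(float).mean(axis=2)          # drop colour information
    dy, dx = offset                                  # offset set by pen position
    shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
    relief = np.clip(128.0 + gray - shifted, 0, 255)
    return relief.astype(np.uint8)
```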
  • the aesthetic imaging system 20 contains several modules to allow an image to be manipulated to improve the quality of the image, to analyze the image, or to prepare the image for meetings or presentations.
  • FIGURES 22A-22D illustrate an exemplary routine for implementing some of the modules in the aesthetic imaging system.
  • the modules are accessible from a pull-down menu or other function key.
  • the first module that the user may select is a color correction module 500 that allows the user to adjust the color of an image so that the image matches the color of a target image. If the user has selected the color correction module, at a block 502 the user is allowed to select an original image and a target image.
  • FIGURE 23 depicts a preferred screen of the aesthetic imaging system when the color of an image is to be corrected. On the left hand side of the screen, a number of stored images 620a, 620b, 620c, 620d, and 620e are displayed in an image index 620. From the image index, the user may select an original image to be manipulated and a target image from which the color is to be copied.
  • the user selects the original and target images by moving the cursor over the desired image, selecting the image by tipping the pen on the tablet, and dragging the image to the appropriate location on the screen.
  • the original image is selected by dragging an image from the image index 620 to an original image box 622, while the target image is selected by dragging an image to a target image box 624.
  • each of the images is displayed in the appropriate box by the aesthetic imaging system.
  • at a block 504, the user is allowed to select comparison points on the original image and the target image.
  • An original comparison point 626 is selected on the original image by moving the cursor to the appropriate point on the original image and tipping the pen on the tablet.
  • a target comparison point 628 is selected on the target image by appropriately moving the cursor and tipping the pen on the tablet.
  • the comparison points define the color correction that is made to the original image.
  • the image captured by the image capture board in the aesthetic imaging system is comprised of a number of pixels, each pixel having an 8-bit red component, an 8-bit green component, and an 8-bit blue component.
  • a comparison is made of the red, green, and blue (RGB) components of the pixels in the regions immediately surrounding the selected comparison points. That is, the RGB components of the pixels in the region immediately surrounding the original comparison point are compared with the RGB components of the pixels in the region immediately surrounding the target comparison point.
  • the purpose of the comparison is to allow the color of the original image to be modified to closely approximate the color of the target image.
  • an offset of the difference in color between the original image and the target image is calculated.
  • the offset is calculated by subtracting the average of the red, green, and blue components of the pixels at the original comparison point from the average of the red, green, and blue components of the pixels, respectively, at the target comparison point.
  • the RGB components in a 9 x 9 pixel region surrounding each comparison point are averaged to arrive at an approximate value for the red component, the green component, and the blue component in each selected region.
  • the image in the original image box 622 is modified by adding the calculated offset to the red component, the green component, and the blue component of each pixel in the image. For example, if the average RGB components of the target image are less than the average RGB components of the original image at the comparison point, the calculated offset will be negative, thereby subtracting from the RGB components of each pixel in the original image. Conversely, if the RGB components in the target image are greater than the RGB components in the original image, the offset will be positive, thereby adding to the RGB components of each pixel in the original image.
  • the modified original image is displayed to the user as an adjusted image in an adjusted image box 630. The result of the modification made to the original image is to change the overall color of the image.
  • if the target comparison point contains more blue than the original comparison point, the original image will be bluer when displayed in the adjusted image box 630. Conversely, if the target comparison point contains more red than the original comparison point, the original image will be displayed as a redder image in the adjusted image box 630.
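The offset calculation described above reduces to a few lines. In this sketch (NumPy images, points given as (row, column), hypothetical names), the mean R, G, and B values in a 9 x 9 region around each comparison point are differenced, and the resulting offset is added to every pixel of the original; the returned offset can be reused on further images, as discussed below.

```python
import numpy as np

def color_correct(original, target, orig_pt, targ_pt, window=9):
    """Offset-based colour correction between two images."""
    def region_mean(img, pt):
        r, c = pt
        half = window // 2
        patch = img[max(r - half, 0):r + half + 1,
                    max(c - half, 0):c + half + 1]
        return patch.reshape(-1, 3).mean(axis=0)     # average R, G, B

    offset = region_mean(target, targ_pt) - region_mean(original, orig_pt)
    adjusted = np.clip(original.astype(float) + offset, 0, 255)
    return adjusted.astype(np.uint8), offset         # offset reusable on other images
```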
  • the disclosed technique for correcting the color of an image allows a user to quickly match the color of two images that were taken under different conditions so that the images may be accurately compared.
  • images may be captured by the aesthetic imaging system using a digital camera under different lighting conditions, or with different aperture settings.
  • images that are scanned into the aesthetic imaging system from photographs will likely have a different color composition than images that are captured from a digital camera.
  • the variations in color make it difficult to compare two images accurately.
  • the disclosed color correction module allows a user to quickly select a target image that has a desired color composition, and modify an original image to match the target image so that an accurate comparison may be made between the images.
  • the user may continue to adjust the color of the image if the desired color is not achieved.
  • the user may move the original comparison point 626 or the target comparison point 628 to change the image depicted in the adjusted image box 630. Such modifications occur in real time, allowing the user to quickly try a variety of different color corrections.
  • the adjusted image contained in the adjusted image box 630 is stored. It will be appreciated that the adjusted image may be stored over the old original image, or stored at a new memory location to preserve both the original image and the adjusted image. Once an offset is calculated, the user may also continue to modify a number of images using the same offset.
  • the user selects another image from the image index 620 and drags the image to the original image box 622.
  • the selected original image is automatically modified using the current offset, and displayed in the adjusted image box 630.
  • the adjusted image is then stored. In this manner, a user can quickly adjust the color of a number of images so that the images may be accurately compared.
  • the second module that the user may select is an orientation correction module 520 to allow the size and orientation of an image to be corrected. If the user has selected the orientation correction module, at a block 522 the user is allowed to select an original image and a target image from an image index. As depicted in FIGURE 24, an original image is selected from the image index 620 and displayed in an original image box 640. A target image is selected and displayed in a target image box 642. Oftentimes, the orientation or size of the patient's image in the original image will be different from the target image. For example, in the representative images shown in FIGURE 24, the patient's head in the original image is tilted slightly from the orientation in the target image.
  • the user selects at least three reference points 644 on the target image.
  • the user selects at least three corresponding reference points 648 on the original image.
  • the reference points selected on the original image should be located at approximately the same location on the patient's image as they are located on the target image.
  • a calculation is made to determine the transform necessary to rotate and size the original image so that it will be the same size and orientation as the target image.
  • the relative size of the original and the target images may be determined from the distance between the reference points on each of the images.
  • the orientation of the two images may be determined by the relative location of the two sets of reference points on the screen. Based on a comparison of the reference points 648 on the original image and the reference points 644 on the target image, a transform is constructed that will rotate and scale the original image to arrive at the size and orientation of the target image. While preferably three reference points are used to arrive at the transformation, it will be appreciated that additional reference points could be added to the original and target images to improve the accuracy of the transform.
  • the original image is rotated and sized in accordance with the transform to match the size and orientation of the target image.
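Constructing the rotate-and-scale transform from the reference point pairs is a small least-squares problem. The sketch below (hypothetical names; NumPy) solves for a similarity transform, i.e. a uniform scale, rotation, and translation, that best maps the original-image reference points onto the target-image reference points; additional reference points simply add rows and improve the fit. The resulting 2 x 3 matrix could then be applied with any image-warping routine to produce the adjusted image.

```python
import numpy as np

def similarity_transform(src_pts, dst_pts):
    """Least-squares rotation + uniform scale + translation mapping
    src_pts onto dst_pts; both are arrays of shape (N, 2), N >= 3, as (x, y)."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0])
        rhs.append(u)
        rows.append([y, x, 0.0, 1.0])
        rhs.append(v)
    a, b, tx, ty = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    # 2 x 3 affine matrix: [x', y'] = A @ [x, y, 1]
    return np.array([[a, -b, tx],
                     [b,  a, ty]])
```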
  • the modified original image is displayed as an adjusted image in the adjusted image box 644.
  • because the data from the original image is bounded by the screen size, that is, the number of pixels in the patient image that is stored in the aesthetic imaging system corresponds to the number of pixels displayed on the screen, portions of the image may be lacking when the image is rotated or sized.
  • the image of the patient in FIGURE 24 is missing a portion 646 of the head and a portion 649 of the neck.
  • the missing portions may be displayed in a background color or interpolated to complete the image.
  • the size of the target image and the adjusted image may be varied so that the adjusted image completely fills the display without missing portions.
  • the user may continue adjusting the orientation and size of the image if the desired orientation and size is not achieved.
  • the user moves any of the reference points on the target image or the original image at a block 524. Moving the reference points causes the adjusted image depicted in the adjusted image box 644 to be redisplayed in real time.
  • the adjusted image is stored. The adjusted image may be stored to replace the original image, or may be stored as an additional image for later manipulation.
  • the third module that the user may select is a measurements module 540 to perform measurements of an image.
  • Four types of measurements may be made: an angle measurement, an area measurement, a distance measurement, or a proportion measurement.
  • Each measurement option allows a different measurement to be displayed on the image without changing the underlying image. While each measurement option will be discussed as being implemented on a separate image below, it will be appreciated that the various different measurements may be combined on a single image.
  • FIGURE 25A depicts the screen of the aesthetic imaging system when the user selects the angle measurement option.
  • the user initially selects an origin 650 for the angle measurement by moving a cursor 656 to the appropriate location on the screen and tipping the pen on the tablet.
  • to make an angle measurement, at blocks 546 and 548 the user must define a first line 658 and a second line 660 that intersect at the origin.
  • a dashed line 652 is displayed that passes through the origin after the origin has been selected.
  • a solid line segment 654 extends from the origin along the dashed line to a point where the cursor 656 is located.
  • Movement of the cursor 656 along the dashed line 652 causes the length of the solid line segment 654 to change. Movement of the cursor around the origin 650 causes the angle of the dashed line with respect to the origin to change.
  • when a desired location of the dashed line is achieved, such as parallel with a patient's feature, the user tips the pen to set the location and length of the first line 658.
  • the dashed line 652 extending through the origin is again displayed to allow the user to define the second line. Movement of the cursor changes the direction and length of the line segment along the dashed line.
  • the second line 660 is defined so that it forms an angle 662 with respect to the first line 658.
  • the user selects the orientation and length of the second line by tipping the pen when the cursor is at the desired location.
  • the angle between the two lines is calculated and displayed on the image.
  • Two lines that intersect at the origin 650 will define two angles, one greater than or equal to 180 degrees, and one less than or equal to 180 degrees.
  • the aesthetic imaging system automatically displays the calculated angle that is less than or equal to 180 degrees.
  • the angle is displayed at a location 664 that is opposite the defined angle. While preferably the angle is displayed in degrees, it will be appreciated that the angle may also be displayed in other units of measure, such as radians.
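The displayed value is simply the included angle between the two user-defined segments, reported in the 0 to 180 degree range. A minimal sketch (hypothetical name; points given as (x, y) pairs):

```python
import math

def angle_between(origin, p1, p2):
    """Angle in degrees (always <= 180) formed at the origin by the two
    user-defined line segments."""
    v1 = (p1[0] - origin[0], p1[1] - origin[1])
    v2 = (p2[0] - origin[0], p2[1] - origin[1])
    a1 = math.atan2(v1[1], v1[0])
    a2 = math.atan2(v2[1], v2[0])
    deg = abs(math.degrees(a1 - a2)) % 360
    return 360 - deg if deg > 180 else deg   # report the angle <= 180 degrees
```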
  • the aesthetic imaging system also allows the user to link multiple angles together. For example, as shown in FIGURE 25 A, three angles are defined by a common line 666.
  • a test is made to determine if the user has enabled a chaining option. If the user has enabled the chaining option, at a block 554 the user is allowed to define additional lines following the definition of the first two lines. That is, after each line is defined, a dashed line is redisplayed from the end of the previously defined line to allow the user to define an additional line having a desired length and direction that starts from the end of the previously defined line.
  • the routine then returns to block 550 where the angle between the last two defined lines is calculated and displayed.
  • Each additional line added by a user therefore defines an additional angle.
  • the chaining option allows the user to outline portions of the image to quickly and accurately obtain angle measurements around a patient's feature.
  • FIGURE 25B depicts the image during the calibration process. The user selects a first calibration point 680 and a second calibration point 682 by moving the cursor to an appropriate location and tipping the pen. To visually indicate the calibration distance to be entered, a line 684 is shown connecting the two calibration points.
  • a dialogue box 686 is displayed to allow the user to enter a known distance between the two calibration points.
  • in order to accurately calibrate the image, the user must measure the (two-dimensional) distance on the actual patient between the two calibration points. When the actual measurement is known, the user enters the distance into the dialogue box 686, and the dialogue box and line 684 are removed from the image. Each image need only be calibrated a single time. Once the user has calibrated the image, accurate distance or area measurements may be made of the image.
  • the area measurement option allows the user to calculate and display the area of features on the patient's image. After calibrating the image, at a block 562 the user defines an area 688 to be measured on the image by circling the area.
  • the user has circled the lips on the image of the patient shown in FIGURE 25B.
  • the area is distinguished from the remainder of the image by filling the area with cross-hatching 690.
  • the size of the area defined by the user is calculated by known methods and displayed at a location 690 adjacent the defined area.
  • the area may be expressed in English units, metric units, or any other units of measure used to calibrate the image. Further, the user may define any area within the image to be measured.
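Calibration and the area calculation referred to above can be sketched as follows (hypothetical names): the calibration step yields a units-per-pixel scale, and the circled region's area is computed from its boundary points with the shoelace formula and converted to the calibrated units.

```python
import math

def calibrate(p1, p2, known_distance):
    """Units-per-pixel scale from the two calibration points and the
    distance measured on the actual patient."""
    pixel_dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return known_distance / pixel_dist

def polygon_area(points, scale):
    """Area of a freehand-circled region (shoelace formula), converted to
    the calibrated units, e.g. square centimetres."""
    pixel_area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        pixel_area += x1 * y2 - x2 * y1
    return abs(pixel_area) / 2.0 * scale ** 2
```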
  • FIGURE 25 C depicts an image in which a user has made several distance measurements. If the user has not calibrated the image, at a block 570 the image must first be calibrated in the manner described above. If the image is already calibrated, at a block 572 the user defines a line segment by selecting a first end point 700 and a second end point 702. After selecting the two end points, a line 704 is displayed on the image connecting the two end points. At a block 574, the two-dimensional distance between the two points is calculated and displayed at a location 706 adjacent the line.
  • the aesthetic imaging system also allows a user to chain multiple distance measurements together. If the user has selected the chaining option at a decision block 576, the user is allowed to enter another end point after the first two end points are displayed at a block 578. The third and subsequent end points are connected to the preceding end point by a line, and the distance of each line displayed. As depicted in FIGURE 25C, the user may use the chaining option to quickly enter a number of lines 708 to measure around a desired area or between multiple points.
  • the fourth measurement that a user may make on an image is a proportion measurement.
  • a test is made to determine if the user has selected the proportion measurement option. If the user has selected a proportion measurement, the user is presented the option of making a horizontal proportion measurement (comparing the horizontal relationship of features in the image) or a vertical proportion measurement (comparing the vertical relationship of features in the image).
  • a test is made to determine if the horizontal or the vertical proportion measurement has been selected. If a vertical proportion measurement is selected, at a block 584 the user defines a number of horizontal lines on an image of the patient. As depicted in FIGURE 25D, a number of lines 720a, 720b, . . . 720e defined by the user split the image into a number of regions 722a, 722b, 722c, and 722d.
  • the user preferably defines the lines by floating the pen over the tablet and tipping the pen at the appropriate locations as the cursor moves from top to bottom on the image. While the lines are shown in FIGURE 25D as extending across the entire image, it will be appreciated that the end points of the lines may be adjustable to allow the length of the lines to correspond to a desired contour, such as the exterior of the patient's face.
  • the distance between the first and last horizontal lines defined by the user is determined.
  • the total distance is used to determine the percentage that each region encompasses of the total distance between the starting and the ending lines.
  • the distance between lines 720d and 720e is divided by the distance between the first line 720a and the last line 720e to arrive at the percentage associated with region 722d.
  • the percentage corresponding to each region is then displayed at a location 724 within each region.
  • the region 722d encompasses 27% of the total distance between lines 720a and 720e. Dividing an image to determine how the image is proportioned is extremely advantageous in the aesthetic imaging industry since certain proportions are considered to be more desirable than others.
  • FIGURE 25D depicts an image divided by horizontal lines to determine the vertical proportions of the patient.
  • an image may be divided by vertical lines to determine the horizontal proportions of the patient. If the horizontal proportions of the patient are to be measured, the user defines a plurality of vertical lines including a first line and a last line at a block 590.
  • the total distance between the first line and the last line is calculated.
  • the distance between each of the lines is compared with the total distance to arrive at a percentage of the total distance defined by the first and the last lines. The percentages are then displayed to the user within the regions defined by the vertical lines.
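Both proportion measurements reduce to the same arithmetic on the line positions (row coordinates for vertical proportions, column coordinates for horizontal ones). A minimal sketch with an illustrative example:

```python
def proportions(line_positions):
    """Percentage of the total span taken up by each region between
    consecutive user-defined lines."""
    positions = sorted(line_positions)
    total = positions[-1] - positions[0]
    return [100.0 * (b - a) / total for a, b in zip(positions, positions[1:])]

# e.g. proportions([100, 180, 260, 340, 400]) -> approximately [26.7, 26.7, 26.7, 20.0]
```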
  • FIGURE 26 portrays a representative screen when the user has selected the labeling module.
  • the user defines a line segment on an image by selecting a first end point and a second end point.
  • a user may define a first end point 730 near the eyebrow on a patient's image, and a second end point 732 located at a distance from the eyebrow.
  • the user enters a label corresponding to the line segment.
  • a transparent box 734 is displayed to the user at the location where the label will be displayed.
  • the user enters the label using a keyboard or other technique, such as selecting predefined labels from a pull-down menu.
  • the box 734 used to enter the label is preferably transparent to allow the user to see the image of the patient through the box when a label is being entered.
  • a label 736 may be linked to two or more line segments that point to different structures on the image.
  • the label "Lips” is connected by two line segments to each of the patient's lips.
  • the labels are also linked to the line segments so that if the labels are moved, the line segments are correspondingly moved to continue to connect the label with the associated structure.
  • the label "Eye,” which points to the eye of the image in FIGURE 26 may be selected by the user and dragged from a first location 738 to a second location 740 as indicated by the directional arrow. Moving the label "Eye” automatically moves the line segment associated with the label so that the label and the structure remain linked.
  • the aesthetic imaging system modules disclosed above are particularly useful for allowing an imaging professional to manipulate an image to improve the presentation of the image to the patient or to colleagues. Such modules allow a more accurate comparison to be made between two or more images. The result is a more satisfying and beneficial imaging session for a patient.
  • FIGURE 27A depicts a representative screen of the aesthetic imaging system when the laser resurfacing module has been selected.
  • the laser resurfacing module allows the aesthetic imaging system to display an image of the patient that approximates how the patient will appear immediately following a laser resurfacing treatment and throughout the healing process.
  • the steps required to simulate the results of the laser resurfacing treatment are depicted in the flow chart of FIGURE 27B.
  • the user initially selects an image to be edited, and defines a region in which the laser resurfacing treatment is to be simulated.
  • the region is defined by the user by circling the desired area.
  • the user has selected a patient image 750 and circled a region 752 that encompasses the majority of the cheek, chin, and jaw of the patient.
  • the unedited image is stored in a buffer by the aesthetic imaging system as an original image 762, as depicted diagrammatically in FIGURE 27C.
  • the aesthetic imaging system also uses the region 752 that is circled by the user to create an alpha mask 764.
  • the alpha mask is constructed using an alpha key channel associated with the patient image captured by the image capture board.
  • each image includes an alpha key channel that provides information about the opacity of each pixel.
  • the alpha key channel contains an 8-bit value that represents the transparency of the pixel when displayed or superimposed over another pixel.
  • the alpha mask 764 created by the aesthetic imaging system uses the alpha key channel to make a mask for manipulating the original image.
  • the mask corresponds to the shape of the defined region 752. That is, the alpha channel values corresponding to pixels outside the region 752 are set to zero (transparent) in the alpha mask, and the majority of the alpha channel values corresponding to pixels within the region 752 are set to 255 (opaque) in the alpha mask.
  • the edges of the mask are gradiated to smooth the boundary of the mask.
  • FIGURE 27E depicts a representative alpha mask 764 having a gradiated boundary. Areas outside the mask contain an alpha value of zero, and areas at the center of the mask contain an alpha value of 255. Around the boundary of the mask, the alpha values incrementally increase from 0 to 255.
  • while three gradiations (of 50, 125, and 200) are depicted in FIGURE 27E, it will be appreciated that a greater or lesser number of gradiations may be included in the mask.
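A software sketch of such a mask is shown below. It assumes a boolean region mask, uses SciPy's distance transform for the feathering, and takes the feather width as an illustrative parameter; the discrete levels follow those shown in FIGURE 27E.

```python
import numpy as np
from scipy import ndimage

LEVELS = np.array([0, 50, 125, 200, 255])   # gradiation steps shown in FIGURE 27E

def build_alpha_mask(region_mask, feather=8):
    """Alpha mask for the circled region: 255 at the centre, 0 outside,
    stepping through intermediate values around the boundary."""
    inside = ndimage.distance_transform_edt(region_mask)      # pixels from the edge
    ramp = np.clip(inside / feather, 0.0, 1.0) * 255.0        # smooth 0..255 ramp
    # Snap the ramp to the discrete gradiation levels
    idx = np.abs(ramp[..., None] - LEVELS[None, None, :]).argmin(axis=-1)
    return LEVELS[idx].astype(np.uint8)
```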
  • the alpha mask is stored after creation by the aesthetic imaging system. It has been found that slightly blurring an image produces an accurate approximation of the results that laser resurfacing treatment can achieve.
  • the selected region 752 of the image is therefore blurred to simulate the treatment results.
  • FIGURE 27D diagrammatically depicts the technique for blurring the portion of the original image 762 that is associated with the region 752.
  • the original image 762 is initially filtered by a low pass filter 766 to produce a blurred image 770.
  • the alpha mask 764 is then added to the blurred image 770 and superimposed over the original image 762 so that only the portion of the blurred image corresponding to the defined region 752 is displayed to the user.
  • the amount of blurring depicted in the image is varied by changing the opacity of the alpha mask 764. As the mask becomes more transparent, less blurring of the image occurs. As the mask becomes more opaque, more blurring of the image occurs.
  • the opacity of the alpha mask 764 is tied to the movement of the cursor to allow the user to change the blurring of the image by floating the pen from the top to the bottom of the tablet.
  • the blurred image 770 is preferably only calculated once and the amount of blurring varied by changing the opacity of the alpha mask 764 as described above.
  • the amount of blurring may be varied by directly changing the response of the low pass filter 766.
  • the low pass filter response may be tied to the movement of the cursor so that the user may directly vary the blurriness of the selected region by floating the pen from the top to the bottom of the tablet.
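The compositing described above can be sketched as follows (NumPy/SciPy, with an illustrative blur strength and hypothetical names). The blurred image would be computed a single time and cached; only the opacity term changes as the pen floats over the tablet.

```python
import numpy as np
from scipy import ndimage

def blur_once(original, sigma=3.0):
    """Low-pass-filtered copy of the original; computed a single time."""
    return ndimage.gaussian_filter(original.astype(float), sigma=(sigma, sigma, 0))

def laser_preview(original, blurred, alpha_mask, opacity):
    """Show the blurred (treated) region through the alpha mask; 'opacity'
    (0..1, driven by the floating pen) controls how much blurring is shown."""
    a = (alpha_mask.astype(float) / 255.0) * opacity   # per-pixel weight
    a = a[..., None]                                   # broadcast over R, G, B
    out = a * blurred + (1.0 - a) * original
    return out.astype(np.uint8)
```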
  • the user freezes the image of the patient by tipping the pen on the tablet.
  • An image 772 representative of the anticipated treatment results (after the patient has healed) is then stored in a buffer in the aesthetic imaging system.
  • FIGURE 27D diagrammatically represents the process for displaying the selected region 752 in a reddened state.
  • the alpha mask 764 is added to a solid image 774 containing pixels of the desired redness, and the alpha channel values in the resulting mask are changed to make the mask semi-transparent.
  • the mask and colored image are then superimposed over the image 772 of the representative treatment results after healing.
  • the result is an image 776 that is representative of the image of the patient after the treatment has been performed, but before the patient has healed.
  • the user is allowed to fade the redness of the image at a block 760 to show how the patient will likely look during the healing process.
  • the alpha values in the alpha mask 764 are reduced until the entire mask is transparent. As the mask becomes more transparent, more of the underlying image is visible to the user.
  • the transparency of the alpha mask 764 is tied to the movement of the cursor to allow the user to change the redness of the image by floating the pen from the top to the bottom of the tablet.
  • the healing process is typically very fast in the first few days following treatment, after which the healing process becomes less noticeable. To accurately simulate the pace of the healing process, the transparency of the mask 764 may change very quickly in response to an initial cursor movement before tapering off.
  • only the image 772 of the patient that is representative of the treatment results is displayed to the user.
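The post-treatment redness and its fade during the simulated healing can be sketched in the same style. The colour, starting opacity, and fade rate below are illustrative assumptions; the exponential makes the redness drop quickly for the first pen movement and then taper off, as described above.

```python
import numpy as np

def healing_preview(healed_image, alpha_mask, pen_travel,
                    redness=(200, 60, 50), start_opacity=0.45, rate=4.0):
    """Redness overlay that fades as the pen moves (pen_travel 0..1)."""
    # Exponential fade: steep near pen_travel = 0, nearly flat near 1
    opacity = start_opacity * np.exp(-rate * np.clip(pen_travel, 0.0, 1.0))
    a = (alpha_mask.astype(float) / 255.0) * opacity
    a = a[..., None]
    red = np.empty_like(healed_image, dtype=float)
    red[...] = redness                                  # solid reddened image
    out = a * red + (1.0 - a) * healed_image
    return out.astype(np.uint8)
```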
  • the laser resurfacing simulation module allows the aesthetic imaging system to display an image of a patient before laser resurfacing treatment, an image of the patient with redness and irritation immediately after treatment, and a simulation of the healing process leading to the anticipated results that are achievable by the treatment. While the discussion above reflects a series of steps performed by the user, it will be appreciated that the entire process could be automated by the aesthetic imaging system. In an automated system, the user would merely select the region 752 in which the treatment is to be simulated. The aesthetic imaging system would then automatically apply a preselected amount of blurriness and redness, and slowly cycle through the display of the healing process before ending with an image of the achievable results.

Abstract

This invention relates to an aesthetic imaging system (20) designed for use in editing digital images. The aesthetic imaging system includes an imaging program (21) that runs on a personal computer (28) having an image capture board (30), a monitor (32), a video source (34) for supplying the digital images to be edited by the aesthetic imaging system, and a stylus and tablet assembly (38) used for editing the images. The imaging program contains a unique combination drawing tool that includes a freehand drawing mode, a curve drawing mode, and an undo mode, which can be accessed without having to navigate through menus. The combination drawing tool can be used with any of the drawing tools. Another feature of the imaging program is the autoblend mode, a rectangular user interface that can be invoked by any of the shape drawing tools. The autoblend interface simplifies editing when using the shape drawing tools by consolidating the "move, paste and blend" and "paste without blending" commands into a single convenient interface.
PCT/US1997/020394 1996-11-08 1997-11-07 Systeme d'imagerie esthetique WO1998020458A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU52501/98A AU5250198A (en) 1996-11-08 1997-11-07 Aesthetic imaging system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US74557496A 1996-11-08 1996-11-08
US08/745,574 1996-11-08

Publications (3)

Publication Number Publication Date
WO1998020458A1 WO1998020458A1 (fr) 1998-05-14
WO1998020458B1 WO1998020458B1 (fr) 1998-06-11
WO1998020458A9 true WO1998020458A9 (fr) 1998-08-20

Family

ID=24997290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/020394 WO1998020458A1 (fr) 1996-11-08 1997-11-07 Systeme d'imagerie esthetique

Country Status (2)

Country Link
AU (1) AU5250198A (fr)
WO (1) WO1998020458A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001090875A1 (fr) * 2000-05-24 2001-11-29 Koninklijke Philips Electronics N.V. Commande de souris immediate pour mesurer les fonctionnalites d'images medicales
JP2003534080A (ja) * 2000-05-24 2003-11-18 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 医療画像の簡便な処理のための方法および装置
FR2818529A1 (fr) 2000-12-21 2002-06-28 Oreal Procede pour determiner un degre d'une caracteristique de la typologie corporelle
US7324668B2 (en) 2001-10-01 2008-01-29 L'oreal S.A. Feature extraction in beauty analysis
US7634103B2 (en) 2001-10-01 2009-12-15 L'oreal S.A. Analysis using a three-dimensional facial image
US7437344B2 (en) 2001-10-01 2008-10-14 L'oreal S.A. Use of artificial intelligence in providing beauty advice
US6761697B2 (en) 2001-10-01 2004-07-13 L'oreal Sa Methods and systems for predicting and/or tracking changes in external body conditions
FR2831014B1 (fr) * 2001-10-16 2004-02-13 Oreal Procede et dispositif pour determiner le degre souhaite et/ou effectif d'au moins une caracteristique d'un produit
EP3364644A1 (fr) 2017-02-20 2018-08-22 Koninklijke Philips N.V. Capture d'images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU7689891A (en) * 1990-04-11 1991-10-30 Washington University Method and apparatus for creating a single composite image of a plurality of separate radiographic images
US5687259A (en) * 1995-03-17 1997-11-11 Virtual Eyes, Incorporated Aesthetic imaging system
