WO2011043645A1 - Display system and method for displaying a three dimensional model of an object - Google Patents

Display system and method for displaying a three dimensional model of an object

Info

Publication number
WO2011043645A1
WO2011043645A1 (PCT/NL2009/050607)
Authority
WO
WIPO (PCT)
Prior art keywords
view data
input device
display system
dimensional model
display
Prior art date
Application number
PCT/NL2009/050607
Other languages
English (en)
Inventor
Jurriaan Derk Mulder
Original Assignee
Personal Space Technologies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Personal Space Technologies filed Critical Personal Space Technologies
Priority to PCT/NL2009/050607
Publication of WO2011043645A1


Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T19/00 — Manipulating 3D models or images for computer graphics
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
          • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
                • G06F3/033 — Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
                  • G06F3/0346 — Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
              • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
                • G06F3/0481 — Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                  • G06F3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
          • G06F2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
            • G06F2203/048 — Indexing scheme relating to G06F3/048
              • G06F2203/04805 — Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection
              • G06F2203/04806 — Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • the invention relates to a display system for displaying a three dimensional model of an object.
  • the invention further relates to a method for displaying a three dimensional model of such an object.
  • the input parameters of the rendering unit are input by manipulating a known input device such as for example a mouse, a joystick, or a gamepad.
  • moving a joystick to the left may rotate the viewed object about a vertical axis in one direction
  • moving the joystick to the right may rotate the viewed object about the same axis in the opposite direction
  • moving the joystick forward may bring the viewed object closer
  • moving it back may move the viewed object further away.
  • by "viewed object" is here meant the (virtual) object as represented in the rendered view of the three dimensional data on the display.
  • a disadvantage of the known display systems is that manipulating the user interface is often a cumbersome process.
  • a more detailed look at the object is obtained by moving the rendered object closer, which means that parts of the object view may fall outside of the display area; this can easily lead to the user losing the feeling that she is interacting with a real object. This is even worse if the user intended to examine a part of the object that has moved out of view while zooming in on the object.
  • An object of the invention is achieved by providing a display system for displaying a three dimensional model of an object, the system comprising
  • an input device interface arranged to generate a first input signal representing a state of the first input device and a second input signal representing a state of the second input device
  • a processing unit arranged to receive the first and second input signals from the input device interface and arranged to determine a first model rendering parameter set based on the first input signal and independent of the second input signal and a second model rendering parameter set based on the first and second input signals,
  • a rendering unit arranged to receive the first model rendering parameter set and the second model rendering parameter set from the processing unit, to render as first view data a first part of the three dimensional model of the object according to the first model rendering parameter set, to render as second view data a second part of the three dimensional model of the same object according to the second model rendering parameter set, and to combine the first and second view data,
  • a display unit arranged to receive the combined view data from the rendering unit and to display said combined view data
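
As a rough illustration of the claimed arrangement (not part of the patent text; every name below is hypothetical), the signal flow from the two input devices to the combined display output might be sketched in Python as follows:

```python
"""Illustrative sketch only: two device states in, one combined image out."""

def make_first_params(first_pose):
    # The first parameter set tracks the first input device only.
    return {"camera_pose": first_pose}

def make_second_params(first_pose, second_pose):
    # The second parameter set depends on both devices, e.g. on their offset.
    offset = [s - f for f, s in zip(first_pose, second_pose)]
    return {"camera_pose": second_pose, "offset": offset}

def render(model, params):
    # Stand-in for the real rendering unit: returns a labelled "view".
    return {"model": model, "params": params}

def combine(first_view, second_view):
    # Stand-in for compositing: superimpose the second view on the first.
    return {"base": first_view, "inset": second_view}

def display_frame(model, first_pose, second_pose):
    v1 = render(model, make_first_params(first_pose))                # overview
    v2 = render(model, make_second_params(first_pose, second_pose))  # detail
    return combine(v1, v2)  # the combined view data goes to the display unit

print(display_frame("artifact", [0.0, 0.0, 0.0], [0.1, 0.0, 0.2]))
```
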
  • a three dimensional model of an object is a collection of data that can for example comprise three dimensional vertex coordinates, volumetric textures, and/or surface textures that can be used to provide a two dimensional or three dimensional representation of an object.
  • the object may be of any type, a real object, virtual object, a measurement data set, etc.
  • the concepts "three dimensional data”, “three dimensional model data”, “three dimensional model”, “model data”, and “model” may be used interchangeably in this document.
  • a real object such as for example a historical artifact, may be captured as three dimensional data using for example a known 3D camera or a 3D scanner.
  • a three dimensional model of an object can also be a collection of measurement data, for example medical data such as from a CT scan, an MRI scan, or a fusion of various types of scans.
  • Three dimensional models can also be created manually, for example by a graphical artist.
  • a three dimensional model can comprise more information than can be seen in a single representation on a two dimensional or even a three dimensional screen.
  • a three dimensional model of an object may provide texture data representing the outer surface of a closed object, and also provide data on parts contained within the closed object, which are normally not rendered visible.
  • data originating from more than one type of probe or measuring equipment may be provided, so different sets of data can be rendered.
  • Medical image fusion data sets form an example of three dimensional models having multiple data sets.
  • the three dimensional model comprises temporal data, and can thus be said to have a time dependence.
  • a three dimensional model may comprise model data sampled at different points in time, so that the rendering unit can render view data of the three dimensional model at different points in time. This makes it possible to show the model at different points in time in succession, as a real-time or non-real-time animation.
  • the time dependence of a three dimensional model can also be implemented by applying a mathematical formula or a body of computer programming. In this document, it is implied that the mentioned three dimensional models, and the rendered view data thereof, may have a time dependence.
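
As an illustration of nearest-sample playback of such time-sampled model data (a sketch only; names are invented, and a real system might interpolate between samples):

```python
def frame_at(samples, t):
    """Return the model sample closest in time to t.

    samples: list of (timestamp, model_data) pairs.
    """
    return min(samples, key=lambda s: abs(s[0] - t))[1]

# Example: three samples of an animated model, queried at t = 0.7 s.
print(frame_at([(0.0, "pose_a"), (0.5, "pose_b"), (1.0, "pose_c")], 0.7))
```
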
  • a three dimensional model of an object can be rendered, which means that view data is created based on the three dimensional data as would be captured or "seen” by a virtual camera placed at a certain position, having a certain vector called “up", a certain viewing direction, lens properties, and a field of view.
  • the position, viewing direction, up vector or up direction, and general properties of the virtual camera are input parameters, or model rendering parameters, of the rendering unit.
  • Model rendering parameter sets comprise the input needed for a renderer to render a view of a three dimensional model, not including the three dimensional model data itself.
  • the model rendering parameter set comprises the virtual camera location, camera view direction, camera up direction, and camera field of view.
  • the three dimensional model data has more than one data set, and the selection of which data set to render is also a model rendering parameter.
  • the model rendering parameter set comprises parameters related to the lighting of the virtual object represented by the three dimensional model.
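
The parameter set described above could be gathered in a small data structure. The following Python sketch is illustrative only; the field names are assumptions, not the patent's terminology:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRenderingParameters:
    camera_position: tuple      # virtual camera location (x, y, z)
    view_direction: tuple       # direction the camera looks along
    up_direction: tuple         # the camera's "up" vector
    field_of_view_deg: float    # opening angle of the virtual lens
    dataset: str = "exterior"   # which data set of the model to render
    lights: list = field(default_factory=list)  # optional lighting setup

overview = ModelRenderingParameters(
    camera_position=(0.0, 0.0, 2.0),
    view_direction=(0.0, 0.0, -1.0),
    up_direction=(0.0, 1.0, 0.0),
    field_of_view_deg=45.0,
)
```
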
  • View data, as output by the rendering unit, may be a two-dimensional image, or it may be a stereo image suitable for an (auto)stereoscopic display. It may also be in another viewing format suitable for a two dimensional, stereoscopic, or three dimensional display system.
  • a display unit may comprise a screen for displaying view data. It may further comprise additional devices, such as for example a display projector for projecting an image on the screen, or glasses for certain types of stereoscopic displays.
  • the phrase "displaying a three dimensional model” generally indicates rendering view data of a three dimensional model of an object, using input parameters as described above, and displaying the view data on a display unit.
  • the display unit comprises a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
  • the processing unit can be the central processing unit (CPU), and the rendering unit can be the graphics processing unit (GPU).
  • the invention is not limited to such a strict separation of processing unit and rendering unit. Both units may be implemented on the same piece of hardware, such as a semiconductor device, and the rendering unit may be implemented in hardware, in software, or in a combination of both.
  • the state of the input device comprises measurable quantities such as position and orientation of the device (in particular relative to the input device interface), but in other embodiments the state of the input device also comprises values derived from these quantities, such as velocity (rate of change of position) or angular velocity (rate of change of orientation about an axis of the device), and even second derivative quantities, such as acceleration.
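
Such derived quantities could be estimated from tracked samples by finite differences, for example as in this sketch (assuming NumPy; a real tracker would also filter sensor noise):

```python
import numpy as np

def derived_state(positions, timestamps):
    """Estimate velocity and acceleration from sampled device positions.

    positions: (N, 3) array of tracked positions; timestamps: (N,) seconds.
    """
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    v = np.gradient(p, t, axis=0)   # first derivative: velocity
    a = np.gradient(v, t, axis=0)   # second derivative: acceleration
    return v, a
```
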
  • the input device is equipped with buttons, joystick controls, or other input sensors such as a grip sensor, temperature sensor, etc.
  • the state of the input device also comprises the state of these buttons, joystick controls, and any other sensors.
  • An advantage of the display system as described above is that it provides two input devices, for example one for each hand of the user, thus providing more degrees of freedom than a single such input device.
  • a further advantage of the system is that it renders two sets of view data based on three dimensional model data of the same object, thus offering two views of the object instead of one.
  • the first view data provides an overview of the object, while the second view data provides a view of a part of the object. By combining both views in an overlapping manner, the user can see they are views of the same object.
  • said two views are combined in such an overlapping manner, that from the position of the second view data with respect to the object shown in the first view data, it is clear which part of the object is shown in the second view data, which is advantageous since it allows a user to see at a single glance which part of the object is shown in the second view.
  • Combining two views in an overlapping manner does not necessarily mean that the rendering unit has to render the (invisible) part of the one view that is overlapped by the other view.
  • first view data is representative of substantially the entire object
  • second view data is representative of said second part of the object
  • the second view data is combined with the first view data in such a manner, that the second view data is displayed at the display unit at a second view data display position relative to the first view data display position that essentially corresponds to the position of the second part of the object relative to the position of the first part of the object.
  • the two views are thus combined in such an overlapping manner that the part of the object shown in the second view data substantially corresponds to the part of the same object that would be shown in the first view data if the second view data did not overlap it.
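
One way to achieve this correspondence is to project the examined part's 3D position through the first view's camera and anchor the second view's inset at the resulting pixel. The sketch below assumes a standard pinhole model with OpenGL-style matrices; all names are illustrative:

```python
import numpy as np

def inset_position(part_point, view_mat, proj_mat, width, height):
    """Pixel position in the first view at which to draw the second view."""
    p = np.append(np.asarray(part_point, dtype=float), 1.0)  # homogeneous
    clip = proj_mat @ (view_mat @ p)
    ndc = clip[:3] / clip[3]                   # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width           # normalized -> pixels
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # flip y for screen coords
    return x, y
```
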
  • the processing unit is arranged to determine the first model rendering parameter set independent of the second input signal. This advantageously allows a user to manipulate the first view data, generally representing an overview of the three dimensional model of an object, using a single input device.
  • the processing unit is arranged to determine the second model rendering parameter set based on the second input signal relative to the first input signal. In an embodiment, the processing unit is arranged to determine the second model rendering parameter set based on a signal representing the relative position of the second input device with respect to the first input device.
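
Such a relative signal could, for instance, be derived from tracked poses. A minimal NumPy sketch, under the assumption that the tracker reports each device's pose as a 4x4 homogeneous transform from the device frame to the tracker frame:

```python
import numpy as np

def relative_pose(T_first, T_second):
    """Pose of the second device expressed in the frame of the first."""
    return np.linalg.inv(T_first) @ T_second
```
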
  • the first input device and/or the second input device is formed as a passive input device that is designed to be used while holding the input device in one hand. This is advantageous, since passive input devices require no power supply and no cable towards a central part, making them easier to handle. Designing an input device to be held in one hand means that both input devices can be handled easily, one in each hand.
  • the second view data shows a part of the object at a higher level of detail or a higher magnification factor than the first view data.
  • the second view data is rendered using three dimensional model data having a higher level of detail than the three dimensional model data used for rendering the first view data.
  • since rendering units generally have a limited amount of resources for rendering three dimensional data, it is advantageous to only render a highly detailed three dimensional model when necessary, and to render a somewhat less detailed three dimensional model otherwise. In the display system, it can thus be advantageous to render the second view data using a more detailed three dimensional model than the first view data. Overall, rendering unit resources are spared, resulting in faster performance of the display system.
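
A level-of-detail choice of this kind can be very simple; in the following sketch the thresholds and model names are purely illustrative:

```python
def pick_model(models_by_lod, magnification):
    """Select a model variant for the second view based on magnification.

    models_by_lod: e.g. {"coarse": m0, "medium": m1, "fine": m2}.
    """
    if magnification < 2.0:
        return models_by_lod["coarse"]   # overview quality is enough
    if magnification < 5.0:
        return models_by_lod["medium"]
    return models_by_lod["fine"]         # pay for detail only when zoomed in
```
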
  • the first and second view data are rendered using different data sets of the three dimensional model.
  • when the three dimensional model comprises multiple data sets, for example one data set modeling the exterior of an object and one data set modeling parts within the object, it is advantageous to allow the user to view both data sets intuitively. This is most advantageously achieved by selecting the more recognizable data set, for example representing the object's exterior, for rendering the first view data, and another data set, for example representing a measurement of the interior of the object, for rendering the second view data.
  • the second view data is provided with a displayed symbol indicative of the difference between the second model rendering parameter set and the first model rendering parameter set.
  • the symbol might represent a magnifying glass, if the second view data shows a more detailed or enlarged part of the object, as compared to the first view data. This advantageously gives the user a visual cue of the function of the second input device and the character of the second view data.
  • the symbol is a representation of the second input device.
  • a representation of the second input device is displayed on the display, and the second view data is displayed substantially inside the area occupied and encompassed by the representation of the second input device, and the first view data is displayed substantially outside said area. This advantageously reinforces the appearance of a functional link between the second input device and the second view data.
  • the input device interface comprises an object tracking unit for tracking the position and/or orientation of the first and/or the second input device. This advantageously allows an implementation using passive input devices.
  • the object tracking unit comprises an optical sensor and the first and/or second input device comprise optical reflecting markers. Measuring the reflection of light on a marker using an optical sensor allows the object tracking unit to determine the position of the markers, which, combined with a knowledge of the position of the markers on the input devices, makes it possible to work out a relative position and/or orientation of the input devices.
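
One common way to recover a pose from known marker positions and their detected image locations is a perspective-n-point solve. The sketch below uses OpenCV's solvePnP as one possible implementation choice, which the patent does not prescribe:

```python
import numpy as np
import cv2  # assumes OpenCV is available

def device_pose(marker_layout, detected_px, camera_matrix, dist_coeffs):
    """Estimate an input device's pose from its detected markers.

    marker_layout: (N, 3) marker positions in the device's own frame.
    detected_px:   (N, 2) corresponding pixel positions in the sensor image.
    Returns rotation and translation vectors (device frame -> camera frame).
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_layout, dtype=np.float32),
        np.asarray(detected_px, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose could not be recovered from the markers")
    return rvec, tvec
```
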
  • the three dimensional model of an object represents a physical object.
  • this object can be a historical artifact that is too precious or vulnerable to be handled by members of the public.
  • the display system advantageously allows members of the public to manipulate and analyze a three dimensional representation of the object as if it were the physical object.
  • the three dimensional model of an object is based on medical imaging data. In an embodiment the three dimensional model of an object is based on seismic data.
  • a display system according to the invention can advantageously be used to examine and analyze data sets as obtained in for example medical imaging or seismic measurements.
  • the state represented by first input signal is the orientation and position of the first input device and the state represented by the second input signal is the orientation and position of the second input device.
  • measuring a signal representing the relative position and orientation of the input devices allows these signals to be used as a basis for determining the rendering parameters of the three dimensional model. For example, by transferring relative translation and rotation applied by the user to the input devices to the rendered three dimensional model data, intuitive manipulation of the three dimensional model representation is achieved.
  • an input device is formed by a hand of the user.
  • an object tracking unit comprising a camera and a tracking processing unit arranged to determine a hand position and/or orientations based on optical data from the camera.
  • this removes the need for separate input devices.
  • the display system is arranged to set a current position and/or orientation of the first input device as the initial position and/or orientation of the first input device, corresponding to an initial first model rendering parameter set for rendering an initial, default, first view data of the three dimensional model of an object.
  • mapping the relative orientation and position of an input device to the orientation and position of a displayed virtual object provides an intuitive way to manipulate the position and orientation of said virtual object.
  • the first input device has a substantially dodecahedral shape. The dodecahedron has many sides to place markers on, making optical detection of the device's position and orientation easier.
  • the first input device is a replica of the object represented by the three dimensional model of an object. Linking the form of the first input device to the object shown on the display makes it easier for the user to understand that manipulating the first input device will have an effect on the displayed object.
  • the second input device is shaped like a magnifying glass. Linking the form of the second input device to its function makes its function easier to understand.
  • the display system comprises a plurality of first input devices, and the system is arranged to detect which first input device is selected by a user.
  • the rendering unit has access to a plurality of three dimensional models, and is arranged to select a three dimensional model of an object for rendering based on the user selection of a first input device. This lets a user pick up one of a plurality of first input devices, automatically leading to a corresponding three dimensional model of an object to be shown. This removes the need for a manual selection of a three dimensional model.
  • the display system comprises a plurality of second input devices, and the system is arranged to detect which second input device is picked up by a user.
  • the rendering unit has access to a plurality of second model rendering parameter sets, and is arranged to select a second model rendering parameter set based on the user selection of a second input device. This allows a user to select a second view rendering mode by selecting one of a plurality of second input devices. For example, the user may pick up a second input device resembling a magnifying glass to enable the second model rendering parameter set that gives more detailed and enlarged second view data.
  • the display unit comprises a display device capable of conveying a depth impression or a three dimensional display device. This can advantageously reinforce the illusion that the user is manipulating a real object.
  • the display system comprises at least an additional display unit, wherein said additional display unit is arranged to display the same three dimensional model of an object as the display unit. Having a second display can allow multiple people to examine the object as it is being manipulated and analyzed by a user.
  • the display system comprises the object which is represented by the three dimensional model of an object.
  • the invention provides a method for displaying a three dimensional model of an object, comprising steps corresponding to the functions of the display system described above: generating the first and second input signals, determining the first and second model rendering parameter sets, rendering and combining the first and second view data, and displaying the combined view data.
  • Figure 1 schematically shows a display system according to the invention.
  • Figure 2A schematically shows a user interacting with a display system according to the invention.
  • Figure 2B schematically shows a displayed image on a display system according to the invention.
  • Figure 2C schematically shows a further displayed image on a display system according to the invention.
  • Figure 3 schematically shows a first input device.
  • Figure 4 schematically shows a second input device.
  • Figure 5 schematically shows a first input device that is shaped as a replica of modeled data.
  • FIG. 1 schematically shows an embodiment of a display system 50 according to the invention.
  • the shown display system 50 comprises a first input device 51, a second input device 52, and an input device interface 53.
  • the input devices 51 and 52 are wireless, graspable devices: wireless meaning that they are not physically connected to the input device interface 53, and graspable meaning that they are designed to be held in the hand.
  • the input device interface 53 generates first and second input signals corresponding to the state of the first and second input devices 51 and 52 respectively.
  • the input device interface 53 comprises an object tracking unit (not shown).
  • the object tracking unit may comprise a scanning infrared source for emitting infrared light, an infrared sensor for detecting reflections of the emitted infrared light, and a tracking processing unit arranged to control the infrared source and to read and interpret the signals from the infrared sensor.
  • the input devices may comprise infrared reflecting markers or retro-reflective markers, with particular surfaces of the input device being provided with particular marker patterns. A skilled person will know how to arrange the aforementioned components in order to arrange a tracking unit that can be used to generate signals corresponding to the relative position and orientation of one, two, three, or more input devices. It is preferred to use infrared light, since it is not visible to the human eye.
  • instead of infrared light, another type of light or another type of electromagnetic radiation may be used.
  • An advantage of the infrared based tracking unit as described here is that the input devices can be passive, i.e. not requiring a battery or other source of power, which simplifies the use and maintenance of the input devices.
  • Alternative implementations may comprise active markers or inertial sensors, which are within reach of a skilled person. Inertial sensors measure a quantity representing change of movement, which can be converted into a signal representing position and/or orientation.
  • a skilled person will know that other implementations comprising one or more input devices and an input device interface which generates signals corresponding to for example the positions and orientations of the input devices can be provided.
  • Such alternative systems are also within reach of a skilled person, and may for example involve Bluetooth, IrDA, acoustic signals, or other wireless or wired interfacing standards.
  • Such a system may also involve a camera coupled to a computer vision device for measuring for example a position and orientation of an input device observed by the camera, which advantageously can remove the need to place markers on the input device.
  • the input device interface 53 makes the first and second input signals representing the state of the first and second input devices 51 and 52 respectively available to the processing unit 54.
  • the processing unit 54 is arranged to determine rendering parameters for the rendering unit 55 based on the received input signals.
  • the processing unit 54 accepts two input signals, corresponding to the state of two input devices, and makes two model rendering parameter sets available to the rendering unit 55.
  • a different number of input signals may be received, and/or a different number of model rendering parameter sets may be generated.
  • the processing unit can be a general purpose processing unit, such as a PC compatible central processing unit (CPU), running a software program that is stored on an internal or external memory (not shown).
  • the rendering unit 55 is connected to a memory unit (not shown). In the memory unit, three dimensional data can be stored, so that the rendering unit 55 can read it.
  • the rendering unit 55 is arranged to receive two model rendering parameter sets from the processing unit 54.
  • the rendering unit 55 is arranged to render view data of a three dimensional model of an object based on the render parameters as received from the processing unit 54.
  • the rendering unit generates view data, comprising the one or more rendered views, which it makes available to a display unit.
  • the three dimensional data is received from an external storage device, for example a storage device connected to a computer network.
  • a storage device may be part of a Picture Archiving and Communication System (PACS).
  • not all three dimensional model data is read before rendering begins. This technique is also known as procedural rendering.
  • the display unit 56 is arranged to receive view data from the rendering unit, and to make said view data visible on a user viewable part, such as a display screen.
  • the display unit 56 is an LCD monitor, and the view data conforms to an industry standard video signal, such as for example an HDTV 1920x1080p HDMI signal.
  • the display unit 56 comprises a display projector.
  • the display unit is an LCD monitor comprising a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
  • the display unit 56 is a display capable of conveying a depth impression, for example a stereoscopic display (with for example active shutter or polarized glasses), an autostereoscopic display (for example based on barrier or lenticular technology), or another three dimensional display, and the view data is in a format suitable for the chosen display, for example stereo format, or image-plus-depth format.
  • the display unit may comprise a general three dimensional display device, such as a holographic display.
  • the rendering unit is used to render first view data from the three dimensional model data, wherein in the first view data substantially the entire object represented by the three dimensional model is visible.
  • the virtual camera is placed so that the viewing angle and the viewing distance are representative of the relative orientation and the relative position of the first input device. That is, if the user holding the first input device rotates the first input device counter clockwise, the rendered object in the view will also rotate counter clockwise.
  • the view is refreshed at a frequency that is suitable for video material, such as for example 60 times per second (60 Hz).
  • a rotation of the first input device around any axis of the device will result in a rotation of the rendered object around a corresponding axis.
  • a translation of the first input device in any direction will result in a translation of the rendered object in a similar direction.
  • the system is arranged to automatically detect and set a default orientation and position of the first input device which results in view data representing the entire object in what can be called a standard upright position, in other words, placing the virtual camera in a sensible starting position.
  • the system might detect the event of a user picking up the input device, and set the subsequent position and orientation of the input device as corresponding to this starting position.
  • the advantage of this automatic detection is that the user is not required to move the input device to a pre-determined position and into a pre-determined orientation in order to see the object in the manner described, giving the user freedom to move around.
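
Such pick-up handling might look like the following sketch (class and method names are invented; how "pick-up" is detected, e.g. the device becoming visible to the tracker, is left open):

```python
import numpy as np

class ReferencePose:
    """Interpret all device poses relative to the pose at pick-up."""

    def __init__(self):
        self.T_ref_inv = None

    def on_pickup(self, T_device):
        # The pose at pick-up becomes the neutral, "standard upright" pose.
        self.T_ref_inv = np.linalg.inv(T_device)

    def relative(self, T_device):
        # Identity at the moment of pick-up; deviations drive the view.
        return self.T_ref_inv @ T_device
```
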
  • the rendering unit is used to render second view data from the three dimensional model data.
  • This second view can be a modified view of a part of the object, where the selected part and the details of the modification depend on the position and orientation of the first input device and the position and orientation of the second input device.
  • the second model rendering parameter set, which also describes the modification of the second view, is at least partially based on the position and orientation of the second input device relative to the position and orientation of the first input device.
  • the modification is a magnification or enlargement of a part of the object, optionally showing extra details not visible in the first view data.
  • the first input device can be said to represent the three dimensional object and the second input device can be said to represent a magnifying glass.
  • a corresponding virtual magnifying glass is used in the rendering calculations of the rendering unit, resulting in a second view of the three dimensional model data which corresponds to what a user looking through a real magnifying glass at the real object would see, if that real magnifying glass were positioned and oriented with respect to the real object as the second input device is positioned and oriented with respect to the first input device.
  • the first view data and the second view data are combined together to form view data for displaying.
  • This view data may be formed by taking the first view as a basis and superimposing the second view, with the modified or magnified view of the data, on top of a part of it. It is advantageous to let the second view which is superimposed on the first view have a circular circumference, resembling the shape of the magnifying glass.
  • the frame of a magnifying glass may be rendered as well.
  • some kind of transition area is created between the first view and the second view, so that the first will blend seamlessly into the second, creating a pleasing visual effect.
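
A soft-edged circular composite of this kind can be produced with a simple alpha mask, for example as in this NumPy sketch (all parameter names are assumed):

```python
import numpy as np

def magnifier_composite(first_view, second_view, cx, cy, radius, feather=8.0):
    """Blend a circular cut-out of the second view onto the first view.

    Both views are HxWx3 arrays of the same size; (cx, cy) is the lens
    centre in pixels and feather is the width of the transition band.
    """
    h, w = first_view.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    # 1.0 inside the lens, 0.0 outside, a linear ramp across the feather band.
    alpha = np.clip((radius - dist) / feather, 0.0, 1.0)[..., None]
    blended = alpha * second_view + (1.0 - alpha) * first_view
    return blended.astype(first_view.dtype)
```
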
  • FIG 2A schematically shows a user 10 interacting with a display system 11 according to the invention.
  • the user is holding a first input device 12 in his left hand, and a second input device 13 in his right hand.
  • the display system 11 comprises several visible parts: a housing 15, a display 14, and an object tracking unit 16 for tracking the first and second input devices 12, 13.
  • the non-visible parts of the display system, which have been discussed in relation to figure 1, are in this embodiment located inside the display system housing 15.
  • the display unit is external to the display system housing 15.
  • Lines of sight a and b in figure 2A illustrate a possible way of interacting with the display system.
  • along line of sight a, the user looks through the second input device 13 (here represented by an object shaped like a magnifying glass) at the first input device 12 (here represented by a multi-sided object).
  • along line of sight b, the user looks at the display, without input devices 12, 13 being in the line of sight.
  • the user can manipulate the input devices and magnify, or otherwise show modified views of, interesting parts of the object.
  • Use of the display system according to the invention is not limited to looking along the two lines of sight a and b as described above.
  • the second view data as shown on the display is determined by the relative position and orientation of the second input device 13 with respect to the first input device 12.
  • the housing 15 of the display system 11 also offers a display case for one or more physical objects for which the display system has three dimensional models. Said display case may also be placed somewhere else in the vicinity of the display system 11. This has several advantageous effects. In a museum context, for example, a visitor can scrutinize the original object, looking for details of interest, and then use the display system to examine close-ups of these details.
  • FIG. 2B shows an example of view data as displayed on the display unit 14.
  • the first part of the view data shows the rendered first view data 17, representing the object in the position and orientation representative of the position and orientation of the first input device.
  • second view data 18 is displayed surrounded by the first view data 17.
  • the second view data 18 represents a view of the object rendered with a larger magnification factor than the first view data is rendered with.
  • the view data of figure 2B can be created by superimposing second view data 18 on top of first view data 17 which fills substantially the entire display area, whereby the second view data 18 surrounded by the displayed magnifying glass 19 is placed at a relative position on the display so that the illusion is created that second view data 18 is seen through a magnifying glass that is approximately represented by the displayed magnifying glass 19.
  • There are various ways in which a rendering unit can realize a magnifying glass effect as shown in figure 2B. One example is simply rendering two views and combining them, possibly superimposing one on the other. An alternative is to actually add a three dimensional model of a magnifying glass, including lens, to the three dimensional model data and render the view in one go using a suitable technique such as ray tracing.
  • the notion of a first and a second view, where the second view shows a modified version of the first view, also applies to said ray tracing technique, even if it appears that both views are rendered simultaneously as a single view.
  • Besides superimposing views and ray tracing, other techniques to obtain the effect will be known to a skilled person.
  • the invention is not limited to any of the techniques described in this document.
  • Figure 2C shows an alternative display of view data on a display unit 14.
  • the first view 17 shows the object as described in relation to figure 2B, but the second view 20, surrounded by a displayed circle to clearly separate the first and second view data, shows a cross section of the object.
  • three dimensional model data from (3D) Ultrasound, X-Ray, CT Scan, or Magnetic Resonance Imaging (MRI) may be used, as is commonly done for certain types of historical artifacts.
  • This is also an example of having multiple modalities in the three dimensional model data:
  • the outside appearance of the object, including its outer shape and surface texture, is shown in the first view 17 (modality one)
  • while the, for example, MRI data is used in the second view 20 (modality two).
  • Figure 3 shows a first input device 12 according to an embodiment of the invention.
  • the first input device 12 has multiple sides, which are each provided with markers 21 in a certain pattern.
  • the first input device 12 shown here is shaped like a dodecahedron, but the invention is not limited to similarly shaped first input devices. Essentially any object to which markers can be attached can be used as a first input device.
  • the object tracking unit of the display system is arranged to detect the markers on the first input device which are visible to the tracking unit's sensor.
  • the implementation may comprise an infrared light source, infrared reflecting markers, and an infrared sensor.
  • the tracking processing unit has access to stored data regarding the locations of the markers on the first input device.
  • the tracking processing unit is arranged to match this knowledge against the pattern of markers currently being detected, and from that work out the current position and orientation of the first input device. This is a known technique to determine an orientation and position of an object, and a skilled person will be able to implement this. A skilled person will also know how to make this system robust, for example by handling the event of a marker being partially or fully covered by the hand of the user holding the first input device.
  • the invention is not limited to the use of markers as discussed in relation with figure 3 and before.
  • an alternative is a computer vision approach where a visible light camera follows the input device, and a processing unit determines the location and position of the input device from the tracked shape.
  • Figure 4 shows a second input device 13 according to an embodiment of the invention.
  • the second input device 13 is provided with a number of markers in a specific pattern. The orientation and position of the second input device 13 are determined in the same way as for the first input device 12.
  • the second input device has a shape that resembles a magnifying glass. This makes it distinct from the first input device 12, and also acts as a visual cue for the user regarding the function of the second input device.
  • the display system provides a number of different second input devices.
  • One might be shaped like a magnifying glass 13 as in figure 4, whereas another can be shaped like for example a triangular MRI warning symbol.
  • depending on which second input device the user picks up, the system will use a different second view rendering mode. For example, if the user picks up the magnifying glass, the magnification mode as discussed in relation to figure 2B is used, whereas if the user picks up the MRI symbol, the cross-section mode as discussed in relation to figure 2C is used. Detection of which second input device is picked up by the user might be achieved by applying a different pattern of markers to each of the second input devices.
  • Figure 5 shows an alternative form for a first input device according to the invention, where the first input device is not shaped like the multi-sided object of figure 3, but actually shaped to resemble the original object of which a three dimensional model is displayed.
  • This has the advantage that it is more intuitive, and thus easier, for the user to manipulate an object whose rotations and translations are applied to the viewed object representation, when that object resembles the viewed object.
  • a single display system may have access to three dimensional model data for a number of original objects.
  • the display system may also be provided with a number of first input devices, each first input device being shaped like one of the original objects.
  • when a user picks up one of the first input devices, for example removing it from a tray on the display system, the display system will register this event and load the three dimensional model data corresponding to the object which said first input device resembles into the rendering unit. Detection of which first input device was picked up, and therefore which model to load, might be accomplished, for example, by applying a different marker pattern to each of the first input devices.
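
A minimal sketch of such pattern-based selection (pattern ids and file names are invented for illustration; the same idea applies to selecting a rendering mode via the second input devices):

```python
# Each first input device carries a distinct marker pattern; the detected
# pattern id selects which three dimensional model the rendering unit loads.
MODEL_BY_PATTERN = {
    "pattern_dodecahedron": "artifact_vase.model",
    "pattern_skull_replica": "artifact_skull.model",
}

def on_device_picked_up(pattern_id, load_model):
    model_file = MODEL_BY_PATTERN.get(pattern_id)
    if model_file is not None:
        load_model(model_file)   # switch the displayed model accordingly
```
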
  • the display system may be provided with at least one additional display unit.
  • Such an additional display unit can be used in various ways. For example, in "slave mode", the additional display unit may receive the same view data as the (main) display unit. An advantage of such a setup is that it can be used to give presentations, where the viewers of the presentation watch the images on an additional display unit, and the (main) display unit is used by the presenter.
  • the additional display unit may receive additional view data, based on the state of additional first and second input devices, but representing the same three dimensional model of an object as the model visible on the main display unit.

Abstract

The invention relates to a display system for displaying a three dimensional model of an object, and to a method for displaying a three dimensional model of an object. In an embodiment, the display system has two input devices. In an embodiment, the system provides two sets of view data based on three dimensional model data of the same object, thus offering two views of the object. In an embodiment, said two views are displayed in such a manner that, from the position of the second view data with respect to the object shown in the first view data, it is clear which part of the object is shown in the second view data, which is advantageous because it allows a user to see at a single glance which part of the object is shown in the second view.
PCT/NL2009/050607 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object WO2011043645A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/NL2009/050607 WO2011043645A1 (fr) 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/NL2009/050607 WO2011043645A1 (fr) 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object

Publications (1)

Publication Number Publication Date
WO2011043645A1 2011-04-14

Family

ID=41572416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2009/050607 WO2011043645A1 (fr) 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object

Country Status (1)

Country Link
WO (1) WO2011043645A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2624117A3 (fr) * 2012-02-06 2014-07-23 Honeywell International Inc. System and method providing a viewable three dimensional display cursor
WO2014180797A1 (fr) * 2013-05-07 2014-11-13 Commissariat à l'énergie atomique et aux énergies alternatives Method for controlling a graphical interface for displaying images of a three-dimensional object
US9443446B2 2012-10-30 2016-09-13 Truinject Medical Corp. System for cosmetic and therapeutic training
US9792836B2 (en) 2012-10-30 2017-10-17 Truinject Corp. Injection training apparatus using 3D position sensor
US9922578B2 (en) 2014-01-17 2018-03-20 Truinject Corp. Injection site training system
US10235904B2 (en) 2014-12-01 2019-03-19 Truinject Corp. Injection training tool emitting omnidirectional light
US10269266B2 (en) 2017-01-23 2019-04-23 Truinject Corp. Syringe dose and position measuring apparatus
US10290231B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10500340B2 (en) 2015-10-20 2019-12-10 Truinject Corp. Injection system
US10648790B2 (en) 2016-03-02 2020-05-12 Truinject Corp. System for determining a three-dimensional position of a testing tool
US10650703B2 (en) 2017-01-10 2020-05-12 Truinject Corp. Suture technique training system
US10743942B2 (en) 2016-02-29 2020-08-18 Truinject Corp. Cosmetic and therapeutic injection safety systems, methods, and devices
US10849688B2 (en) 2016-03-02 2020-12-01 Truinject Corp. Sensory enhanced environments for injection aid and social training

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040046736A1 (en) * 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US20060008119A1 (en) * 2004-06-01 2006-01-12 Energid Technologies Visual object recognition and tracking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040046736A1 (en) * 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US20060008119A1 (en) * 2004-06-01 2006-01-12 Energid Technologies Visual object recognition and tracking

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CUTLER, L. D. et al., "Two-Handed Direct Manipulation on the Responsive Workbench", Proceedings of the 1997 Symposium on Interactive 3D Graphics, Providence, RI, USA, 27-30 April 1997, pages 107-114, XP000725362, ISBN 978-0-89791-884-8 *
HINCKLEY, K. et al., "Two-Handed Virtual Manipulation", ACM Transactions on Computer-Human Interaction, vol. 5, no. 3, 1 September 1998, pages 260-302, XP002226128, ISSN 1073-0516 *
KIM, JI-SUN et al., "A Tangible User Interface System for CAVE Applications", IEEE Virtual Reality 2006, Alexandria, VA, USA, 25-29 March 2006, pages 261-264, XP010933836, ISBN 978-1-4244-0224-3 *
LOOSER, J. et al., "Through the looking glass: the use of lenses as an interface tool for augmented reality interfaces", Proceedings GRAPHITE 2004, 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 15 June 2004, pages 204-211, XP007913305 *
PETRIDIS, P. et al., "Usability Evaluation of the EPOCH Multimodal User Interface: Designing 3D Tangible Interactions", VRST'06, ACM Symposium on Virtual Reality Software & Technology, Limassol, Cyprus, 1-3 November 2006, pages 116-122, XP001505743, ISBN 978-1-59593-321-8 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2624117A3 (fr) * 2012-02-06 2014-07-23 Honeywell International Inc. System and method providing a viewable three dimensional display cursor
US10643497B2 (en) 2012-10-30 2020-05-05 Truinject Corp. System for cosmetic and therapeutic training
US9443446B2 2012-10-30 2016-09-13 Truinject Medical Corp. System for cosmetic and therapeutic training
US9792836B2 (en) 2012-10-30 2017-10-17 Truinject Corp. Injection training apparatus using 3D position sensor
US11854426B2 (en) 2012-10-30 2023-12-26 Truinject Corp. System for cosmetic and therapeutic training
US11403964B2 (en) 2012-10-30 2022-08-02 Truinject Corp. System for cosmetic and therapeutic training
US10902746B2 (en) 2012-10-30 2021-01-26 Truinject Corp. System for cosmetic and therapeutic training
WO2014180797A1 (fr) * 2013-05-07 2014-11-13 Commissariat à l'énergie atomique et aux énergies alternatives Method for controlling a graphical interface for displaying images of a three-dimensional object
FR3005517A1 (fr) * 2013-05-07 2014-11-14 Commissariat Energie Atomique Method for controlling a graphical interface for displaying images of a three-dimensional object
US9912872B2 (en) 2013-05-07 2018-03-06 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for controlling a graphical interface for displaying images of a three-dimensional object
US9922578B2 (en) 2014-01-17 2018-03-20 Truinject Corp. Injection site training system
US10896627B2 2014-01-17 2021-01-19 Truinject Corp. Injection site training system
US10290232B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10290231B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10235904B2 (en) 2014-12-01 2019-03-19 Truinject Corp. Injection training tool emitting omnidirectional light
US10500340B2 (en) 2015-10-20 2019-12-10 Truinject Corp. Injection system
US10743942B2 (en) 2016-02-29 2020-08-18 Truinject Corp. Cosmetic and therapeutic injection safety systems, methods, and devices
US10648790B2 (en) 2016-03-02 2020-05-12 Truinject Corp. System for determining a three-dimensional position of a testing tool
US10849688B2 (en) 2016-03-02 2020-12-01 Truinject Corp. Sensory enhanced environments for injection aid and social training
US11730543B2 (en) 2016-03-02 2023-08-22 Truinject Corp. Sensory enhanced environments for injection aid and social training
US10650703B2 (en) 2017-01-10 2020-05-12 Truinject Corp. Suture technique training system
US10269266B2 (en) 2017-01-23 2019-04-23 Truinject Corp. Syringe dose and position measuring apparatus
US11710424B2 (en) 2017-01-23 2023-07-25 Truinject Corp. Syringe dose and position measuring apparatus

Similar Documents

Publication Publication Date Title
WO2011043645A1 (fr) Display system and method for displaying a three dimensional model of an object
US9201568B2 (en) Three-dimensional tracking of a user control device in a volume
CN105074617B (zh) Three-dimensional user interface device and three-dimensional operation processing method
KR101823182B1 (ko) Three-dimensional user interface effects on a display by using properties of motion
CN104471511B (zh) Device, user interface and method for recognizing pointing gestures
US20100259610A1 (en) Two-Dimensional Display Synced with Real World Object Movement
CN101779460B (zh) Electronic mirror device
US7965304B2 (en) Image processing method and image processing apparatus
Tomioka et al. Approximated user-perspective rendering in tablet-based augmented reality
US20160343166A1 (en) Image-capturing system for combining subject and three-dimensional virtual space in real time
CN105659295A (zh) Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
Jia et al. 3D image reconstruction and human body tracking using stereo vision and Kinect technology
TW201246088A (en) Theme-based augmentation of photorepresentative view
EP2847616B1 (fr) Monitoring apparatus comprising a range camera
CN109564703B (zh) Information processing apparatus, information processing method, and computer-readable storage medium
KR101892735B1 (ko) Apparatus and method for intuitive interaction
KR20140081840A (ko) Motion-controlled list scrolling method
US11562545B2 (en) Method and device for providing augmented reality, and computer program
CN115335894A (zh) Systems and methods for virtual and augmented reality
CN109844600A (zh) Information processing device, information processing method, and program
JP2004272515A (ja) Interface method, apparatus, and program
JPH0628452A (ja) Three-dimensional image processing device
JP4493082B2 (ja) CG presentation device, program therefor, and CG display system
Hashimoto et al. Three-dimensional information projection system using a hand-held screen
CN115953557A (zh) Product display method based on virtual reality technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09741028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09741028

Country of ref document: EP

Kind code of ref document: A1