WO2011072456A1 - Desktop display apparatus - Google Patents

Desktop display apparatus

Info

Publication number
WO2011072456A1
Authority
WO
WIPO (PCT)
Prior art keywords
model representation
image
function
user interface
model
Prior art date
Application number
PCT/CN2009/075706
Other languages
French (fr)
Inventor
Huanglingzi Liu
Lei Xu
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/CN2009/075706 priority Critical patent/WO2011072456A1/en
Publication of WO2011072456A1 publication Critical patent/WO2011072456A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1637Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer

Definitions

  • the present application relates to a method and apparatus.
  • the method and apparatus relate to desktop display technology and in particular, but not exclusively, some further embodiments relate to desktop display technology in user equipment.
  • Digital equipment such as personal computers, audio players and video players is supplied with a user interface which comprises an input, typically a touch sensor and/or keyboard, and an output such as an audio or video display configured to output information to the user.
  • the display typically has a "desktop" element, a widely used metaphor for an image which enables the user to interact with and control the functions of the apparatus, for example by controlling functions of software programs and applications.
  • This invention proceeds from the configuration that by visualising images known to the user, and assigning context and applications to elements within the image, an improved user experience may be generated for users and further may produce a more natural interaction with the device.
  • a method comprising: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the method may further comprise capturing the image using a camera.
  • identifying at least one object within an image may comprise: segmenting the image into at least two image blocks; and identifying at least one object within at least one of the at least two image blocks.
  • the method may further comprise: identifying at least one further object within at least one further image block; selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
  • the method may further comprise editing the model representation of the at least one object.
  • the method may further comprise adding to the user interface image at least one further model representation, wherein a further function is associated with the model representation.
  • the method may further comprise deleting from the user interface image at least one model representation.
  • the method may further comprise editing the function associated with the model representation displayed in the user interface image.
  • the method may further comprise: displaying a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation.
  • the method may further comprise: identifying at least one another object within another image; associating the at least one another object with the at least one another function; and selecting the another model representation of the at least one another object.
  • the at least one function associated with the model representation may comprise selecting at least one further user interface image for display.
  • the model representation may comprise at least two sub-model components, wherein each of the model components is preferably associated with at least one function.
  • the method may further comprise: selecting the model representation within the user interface image; and performing the function associated with the model representation selected.
  • the method may further comprise: changing the displayed model representation on selecting the model representation within the user interface image.
  • the model representation may comprise at least one of: an animate object; and an inanimate object.
  • the function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
  • an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform capturing the image using a camera.
  • identifying at least one object within an image may cause the apparatus at least to preferably perform: segmenting the image into at least two image blocks; and identifying at least one object within at least one of the at least two image blocks.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: identifying at least one further object within at least one further image block; and selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform editing the model representation of the at least one object.
  • the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform adding to the user interface image at least one further model representation, wherein a further function is associated with the model representation.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform deleting from the user interface image at least one model representation.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform editing the function associated with the model representation displayed in the user interface image.
  • the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform displaying a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: identifying at least one another object within another image; associating the at least one another object with the at least one another function; and selecting the another model representation of the at least one another object.
  • the apparatus wherein the at least one function associated with the model representation may cause the apparatus to preferably further perform selecting at least one further user interface image for display.
  • the model representation may comprise at least two sub-model components, wherein each of the model components is preferably associated with at least one function.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: selecting the model representation within the user interface image; and performing the function associated with the model representation selected.
  • the apparatus wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: changing the displayed model representation on selecting the model representation within the user interface image.
  • the model representation may comprise at least one of: an animate object; and an inanimate object.
  • the function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
  • an apparatus comprising: an identifier configured to identify at least one object within an image; a linker configured to associate the at least one object with a function; a selector configured to select a model representation of the at least one object; and a display configured to display a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the apparatus may further comprise: a camera configured to capture the image.
  • the identifier may comprise: an image segmenter configured to segment the image into at least two image blocks; and a segment identifier configured to identify at least one object within at least one of the at least two image blocks.
  • the segment identifier is preferably further configured to identify at least one further object within at least one further image block; the selector is preferably further configured to select a further model representation of the at least one further object, and wherein the display is preferably further configured to display the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
  • the apparatus may further comprise a model editor configured to edit the model representation of the at least one object.
  • the apparatus may further comprise a model inserter configured to add to the user interface image at least one further model representation.
  • the apparatus may further comprise a model remover configured to delete from the user interface image at least one model representation.
  • the apparatus may further comprise a function editor configured to edit the function associated with the model representation displayed in the user interface image.
  • the display is preferably further configured to display a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation.
  • the identifier is preferably further configured to identify at least one another object within another image; the linker is preferably further configured to associate the at least one another object with the at least one another function; and the selector is preferably further configured to select the another model representation of the at least one another object.
  • the at least one function associated with the model representation may comprise selecting at least one further user interface image for display.
  • the model representation may comprise at least two sub-model components, wherein the linker is further preferably configured to associate each of the model components with at least one function.
  • the apparatus may further comprise: an input determiner configured to determine a selection of at least one model representation within the user interface image; and a function processor configured to perform a function associated with the model representation selected.
  • the display is preferably further configured to change the displayed model representation selected on selecting the model representation within the user interface image.
  • the model representation may comprise at least one of: an animate object; and an inanimate object.
  • the function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
  • an apparatus comprising: identifying means for identifying at least one object within an image; linking means for associating the at least one object with a function; selection means for selecting a model representation of the at least one object; and display means for displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • a computer-readable medium encoded with instructions that, when executed by a computer, perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • An electronic device may comprise apparatus as described above.
  • a chipset may comprise apparatus as described above.
  • Figure 1 shows schematically an electronic device employing embodiments of the application;
  • Figure 2 shows schematically a 3D user interface desktop interaction operation;
  • Figure 3 shows schematically an electronic device as shown in Figure 1 in further detail with respect to the generation of the user interface desktop as implemented in some embodiments;
  • Figure 4 shows a flow diagram featuring the operations of generation of the virtual 3D desktop as shown in Figures 2 and 3 according to some embodiments;
  • Figure 5 shows schematically the electronic device as shown in Figure 1 with respect to the interaction and editing of the 3D desktop user interface according to some embodiments;
  • Figure 6a shows an overview of the operation and interaction with the 3D virtual desktop user interface according to some embodiments;
  • Figure 6b shows a flow diagram for the operation of interaction with the 3D desktop user interface according to some embodiments;
  • Figure 7a shows a schematic overview of editing the 3D user interface desktop according to some embodiments;
  • Figure 7b shows a flow diagram of the addition of new context/applications to the 3D desktop user interface according to some embodiments;
  • Figure 8a shows a schematic overview of the "change of view" interactions within the 3D desktop user interface according to some embodiments;
  • Figure 8b shows a flow diagram of the operations of performing the "change of view" within the 3D virtual desktop according to some embodiments; and Figure 8c shows a flow diagram of the operations for linking different 3D desktop user interface images according to some embodiments.
  • Figure 1 discloses a schematic block diagram of an exemplary electronic device 10 or apparatus. The electronic device is configured to perform improved user interface techniques.
  • the apparatus 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system. In other embodiments, the electronic device is a digital camera.
  • the apparatus 10 comprises an integrated camera module 11, which is linked to a processor 15.
  • the processor 15 is further linked to a display 12.
  • the processor 15 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 14 and to a memory 16.
  • TX/RX transceiver
  • UI user interface
  • the camera module 11 and / or the display 12 is separate from the electronic device and the processor receives signals from the camera module 11 via the transceiver 13 or another suitable interface.
  • the processor 15 may be configured to execute various program codes 17.
  • the implemented program codes 17, in some embodiments, comprise image capture digital processing or configuration code.
  • the implemented program codes 17 in some embodiments further comprise additional code for further processing of images/desktop.
  • the implemented program codes 17 may in some embodiments be stored for example in the memory 16 for retrieval by the processor 15 whenever needed.
  • the memory 16 in some embodiments may further provide a section 18 for storing data, for example data that has been processed in accordance with the application.
  • the camera module 11 comprises a camera 19 having a lens for focussing an image on to a digital image capture means such as a charge-coupled device (CCD).
  • the camera module 11 further comprises a flash lamp 20 for illuminating an object before capturing an image of the object.
  • the flash lamp is linked to the camera processor.
  • the camera 19 is also linked to a camera processor 21 for processing signals received from the camera.
  • the camera processor 21 is linked to camera memory 22 which may store program codes for the camera processor 21 to execute when capturing an image.
  • the implemented program codes (not shown) may in some embodiments be stored for example in the camera memory 22 for retrieval by the camera processor 21 whenever needed.
  • the camera processor 21 and the camera memory 22 are the processor 15 and the memory 16 respectively.
  • the apparatus capable of implementing improved user interface techniques may in some embodiments be implemented at least partially in hardware without the need of software or firmware.
  • the user interface 14 in some embodiments enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the apparatus 10, for example via the display 12.
  • the transceiver 13 enables a communication with other electronic devices, for example via a wireless communication network.
  • a user of the apparatus 10 may use the camera module 11 for capturing an image that is to be transmitted to some other electronic device, for example by using the transceiver 13 or that is to be stored in the data section 18 of the memory 16.
  • a corresponding application in some embodiments may be activated to this end by the user via the user interface 14.
  • This application which may in some embodiments be run by the processor 15, causes the processor 15 to execute the code stored in the memory 16.
  • the digital image could be stored in the data section 18 of the memory 16, for instance for a later transmission or for a later processing by the same apparatus 10.
  • the apparatus 10 may in some embodiments also receive a digital image for processing from another apparatus via its transceiver 13.
  • the processor 15 executes the processing program code stored in the memory 16.
  • the processor 15 may then in these embodiments process the received image in the same way as described with reference to Figures 2 to 5, 6a, 6b, 7a, 7b, and 8c. Execution of the processing program code could in some embodiments be triggered as well by an application that has been called by the user via the user interface 14.
  • this process thus enables the three dimensional desktop to be automatically created from the main "interesting objects" in the image.
  • the user of the apparatus may therefore simply create the personal desktop by capturing one picture which is personal to the user, for example a picture of a room in the house or office.
  • the generated three dimensional desktop may be regarded as the avatar for the user's real life working with the apparatus.
  • This living environment may be used and in some embodiments thus enable the user to feel at ease with the use of the desktop image as the layout and style are known to the user.
  • a better user experience may be provided especially for inexperienced or older users of the apparatus.
  • a set of drawers or filing cabinet may be associated with file management
  • a photo frame may be associated with photo images
  • a TV may be associated with playing videos, etc.
  • multiple three dimensional images or scenes may be linked and each associated with a different use context.
  • Portals such as doors or windows may be used to connect the different scenes so that selecting or "opening" a door or window would cause the desktop to change from one scene to another.
  • the scenes may be context aware or controlled.
  • the apparatus may be configured with sensors to determine absolute or relative position and select or display the three dimensional desktop image according to the sensor information, so that the user may have a "home" themed three dimensional desktop scene and an "office" themed three dimensional desktop scene, and the apparatus may comprise position sensors for controlling the selection of the "home" scene when the sensor determines that the apparatus is at a predetermined "home" location and the "office" theme when the sensor determines that the apparatus is at a predetermined "office" location.
  • the three dimensional theme may be processed or interacted with according to the orientation of the apparatus, for example where the apparatus comprises a compass then the desktop may be orientated dependent upon the compass direction.
  • the three dimensional desktop images may be shared with others using the transceiver so that scenes may be imported/exported from friends and family. Opening a door or window (or other portal) may therefore switch the user 3D desktop to another person's 3D desktop in order to visit the other desktop user interface and determine if the other user is available to talk or message.
  • Figure 2 shows an overview of the actions which in some embodiments may generate a 3D desktop scene.
  • an example room 151 which comprises a set of drawers (or filing cabinet with drawers) 153, a TV 155, a computer monitor/picture frame 157, a printer 159, a message board 161 and a coffee table 163.
  • the room may be photographed using the camera module 11 to capture a 2D room image 171.
  • the two dimensional room image 171 may then be modelled to generate the three dimensional user interface desktop scene 181.
  • the apparatus 10 may comprise a camera module 11 as described previously.
  • the camera module may pass image data such as the captured room 2D image 171 to the image segmenter 101.
  • the image segmenter 101 may be further connected to output segmented image data to a classifier/trainer 103 (and further in some embodiments also to a segment categoriser 105).
  • the classifier/trainer 103 may further communicate with the segment categoriser 105.
  • the segment categoriser 105 may further communicate with the context assigner 107.
  • the context assigner 107 may further communicate data with the context selector 106.
  • the context assigner 107 may be further connected to the desktop image generator 109.
  • the desktop image generator 109 may further communicate image model data with the image model storage 108.
  • all except the camera module may be implemented within the processor as shown in Figure 1. However, in some embodiments, all of the features described hereafter may be implemented at least partially within the processor and partially as elements in their own right.
  • At least one of the classifier/trainer 103, the segment categoriser 105 and the image model storage 108 may be implemented at a further apparatus, such as a server, which is in communication with the apparatus via the transceiver 13.
  • the camera module 11 may capture the image of the room in a manner as described previously.
  • the image data may be passed in some embodiments to the image segmenter 101.
  • The operation of capturing the image is shown in Figure 4 by step 201.
  • the image segmenter 101 in some embodiments after receiving the 2D image from the camera module 11 then performs a segmentation of the image.
  • a mean shift algorithm may be applied to the image to generate a set of image patches. Each image patch may then be sent to the segment categoriser 105.
  • The operation of image segmentation of the 2D room image 171 is shown in Figure 4 by step 203.
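
As a concrete illustration of this segmentation step, the following Python sketch applies OpenCV's mean-shift filtering and then extracts connected regions as candidate image patches. The patent does not name a library or patch-extraction heuristics; cv2, the Otsu thresholding step, the radii and the minimum-area cut-off are all illustrative assumptions.

```python
import cv2
import numpy as np

def segment_image(image_bgr, spatial_radius=21, color_radius=30, min_area=500):
    """Return a list of (bounding_box, patch) tuples, one per candidate segment."""
    # Mean-shift filtering flattens colour regions while preserving edges.
    filtered = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)

    # Crude region extraction: threshold the flattened image and label the
    # connected components; each sufficiently large component becomes a patch.
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num_labels, labels = cv2.connectedComponents(binary)

    patches = []
    for label in range(1, num_labels):              # label 0 is the background
        mask = (labels == label).astype(np.uint8)
        if int(mask.sum()) < min_area:              # discard tiny fragments
            continue
        x, y, w, h = cv2.boundingRect(mask)
        patches.append(((x, y, w, h), image_bgr[y:y + h, x:x + w]))
    return patches

room_image = cv2.imread("room.jpg")                 # the captured 2D room image
image_patches = segment_image(room_image)           # passed to the categoriser
```
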
  • a classifier/trainer 103 may be connected to the segment categoriser 105.
  • the classifier/trainer 103 defines common object categories (for example televisions, clocks, drawers and filing furniture) in order to generate a set of sample pool elements.
  • the classifier/trainer 103 trains a pattern recognition application which in some embodiments may be implemented for example as an artificial neural network (ANN) or Gaussian mixture model (GMM). By training the pattern recognition application to identify various objects taken at various angles from a set of images, the classifier/trainer may then, when exposed to an image patch comprising a partial or complete object, "recognise" and identify the object.
  • ANN artificial neural network
  • GMM Gaussian mixture model
  • the pattern recognition may be trained automatically whereas in some embodiments the pattern recogniser may be assisted.
  • the common objects may be linked or associated with a label. For example, where an image of a TV is used to train the pattern recogniser, the label of "television" is associated with the image.
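
A minimal sketch of this training step, assuming a Gaussian mixture model per object category fitted to simple colour-histogram features with scikit-learn; the feature choice, library and hyperparameters are illustrative assumptions, since the text only requires some trainable pattern recogniser (ANN or GMM) with a label per category.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def patch_features(patch_bgr, bins=8):
    """A fixed-length descriptor: a normalised 3D colour histogram."""
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    return (hist / hist.sum()).flatten()

def train_classifier(labelled_examples, components=3):
    """labelled_examples: dict mapping a label such as "television" to a
    list of example BGR images of that object taken at various angles."""
    models = {}
    for label, examples in labelled_examples.items():
        features = np.stack([patch_features(img) for img in examples])
        models[label] = GaussianMixture(
            n_components=components, covariance_type="diag").fit(features)
    return models
```
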
  • more than one type of common object may be trained.
  • the classifier/trainer 103 may be trained to recognise different types of TV and so identify images with both cathode ray tube (CRT) and flat screen technology (FST) television images.
  • CRT cathode ray tube
  • FST flat screen technology
  • the user may train the classifier/trainer 103 by passing to it captured images with identification labels.
  • the camera module may capture an image of a TV and send it to the classifier/trainer with the label "TV" to be learnt.
  • the operation of training is shown in Figure 4 by step 205.
  • the definition of the common object categories such as "TV" or "clock" and training of the classifier may be carried out "offline". In other words, the training of the classifier is carried out prior to the capturing of the image and the segmentation of the image.
  • the training of the classifier operation may be carried out as described earlier in apparatus separate from the current apparatus.
  • the definition and training operations may be carried out in a "remote" server and the database of trained categories also stored remotely from a handset user equipment, with the apparatus transceiver communicating with the "remote" server.
  • the segment categoriser 105 having received the segmented image patches attempts to identify if an image patch comprises an object of interest.
  • the segment categoriser 105 may therefore, in some embodiments using the pattern recognition application, search a database or artificial neural network to determine whether or not the image patch contains an object of interest, for example a TV or other furniture type, and identify any of the objects.
  • the segment categoriser 105 in some embodiments may request this information from the classifier/trainer 103, which may then output the label associated with the near match to the segment categoriser so that the segment categoriser 105 may associate the image patch with the label.
  • the segment categoriser 105 may output this data to the context assigner 107.
  • The operation of segment categorisation is shown in Figure 4 by step 207.
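
Continuing the sketch above, the categoriser scores each segmented patch against every trained category model and keeps only near matches, attaching the matching label. The rejection threshold is an illustrative assumption, and the code reuses patch_features from the training sketch.

```python
def categorise_patches(patches, models, threshold=-50.0):
    """Yield (bounding_box, label) for patches recognised as objects of interest."""
    for bbox, patch in patches:
        feats = patch_features(patch).reshape(1, -1)
        # score_samples gives the per-sample log-likelihood under each GMM;
        # the best-scoring category is the candidate "near match".
        scores = {label: float(gmm.score_samples(feats)[0])
                  for label, gmm in models.items()}
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:       # near match: keep its label
            yield bbox, best                # otherwise: no object of interest
```
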
  • the context selector 106 is further configured in some embodiments to select a series of contexts or applications which the user wishes to implement within the user interface desktop. For example the context selector 106 may determine that the user wishes to be able to carry out certain basic file selection and manipulation tasks, view video files, and view image data, as well as having a clock/calendar function.
  • the context selector 106 dependent on the embodiments may select context/applications automatically, manually by the user inputting the context/applications they wish to implement within the user interface, or semi-automatically, for example with an initial selection being generated by the operating system which is further configured by the user.
  • the context selector 106 may in some embodiments provide a series of application links associated with the context or application, for example the context selector may provide a link to a specific image viewing program or code.
  • the context assigner 107 having received the selected context/applications the apparatus wishes to implement within the 3D desktop image may furthermore receive from the segment categoriser 105 identification of determined objects of interest within the image patches.
  • the context assigner 107 may then assign in some embodiments a context/application to an object of interest.
  • this assignment is carried out using preconfigured rules. For example any TV identified may be associated with video viewing software, a file viewing programme may be assigned to the drawers in the filing cabinet, etc. In some embodiments the user of the apparatus may manually or semi-automatically configure the assignment of the object of interest to the context/application.
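
A sketch of such preconfigured assignment rules, with user overrides for the manual and semi-automatic cases; the label names and application identifiers are illustrative assumptions.

```python
# Preconfigured rules mapping recognised object labels to applications,
# mirroring the examples in the text (TV -> video viewing, drawers -> files).
DEFAULT_RULES = {
    "television": "video_player",
    "drawers": "file_viewer",
    "photo_frame": "image_viewer",
    "clock": "clock_calendar",
}

def assign_contexts(categorised, overrides=None):
    """Return {bounding_box: (label, application)} for the desktop generator.

    `overrides` allows the user to re-map labels manually or
    semi-automatically, as the text above allows."""
    rules = {**DEFAULT_RULES, **(overrides or {})}
    return {bbox: (label, rules[label])
            for bbox, label in categorised if label in rules}
```
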
  • the assignment of context/applications to objects of interest information is then passed to the desktop generator 109.
  • the desktop generator 109 furthermore is configured to receive data from an image model storage 108.
  • the image model storage 108 in some embodiments is associated with the classifier/trainer 103 so that each of the classified images has a three dimensional model image associated with it such that when the segment categoriser 105 determines an object of interest which is passed to the desktop generator, the desktop generator may enquire from the image model storage 108 the associated three dimensional model image.
  • the selection of images from the image model storage is shown in Figure 4 by step 211.
  • each image patch object of interest may have an associated colour, shape, layout and texture which may also be used to generate or be looked-up from the image model storage 108.
  • In some embodiments where an identical model is not available, a near match of the object of interest from the image patch may be selected. In some embodiments there may be some degree of manual selection from the model pool or image model storage 108 so that the user may select, for example, a different television or other furniture type to replace the television or other furniture type captured within the segment or image patch.
  • the desktop generator 109 may then generate a three dimensional model comprising each of the identified objects of interest and furthermore any of the objects of interest assigned context/applications.
  • the desktop generated by the desktop generator 109 may then be output to the display.
  • the selection of the objects of interest from the three dimensional models enables the personalised desktop to be generated through the technology of three dimensional rendering of the models. Furthermore the links between the objects and corresponding applications may also be built.
  • the operation of generating the 3D scene from the image models selected is shown in Figure 4 by step 213.
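
A sketch of the desktop generator's output: each recognised object becomes a scene entry holding the 3D model chosen from the image model storage, a position preserving the photo's layout, and the assigned application link. The SceneObject fields and the nearest_model lookup are illustrative assumptions; actual rendering would be done by whatever 3D engine the apparatus uses.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    label: str          # e.g. "television"
    model_id: str       # 3D model selected from the image model storage 108
    position: tuple     # (x, y, w, h) from the patch, preserving the layout
    application: str    # function performed when the representation is chosen
    sub_objects: list = field(default_factory=list)  # e.g. individual drawers

def generate_desktop(assignments, model_store):
    """assignments: {bbox: (label, application)} from the context assigner."""
    scene = []
    for bbox, (label, application) in assignments.items():
        # Choose the stored model closest to the captured object; a generic
        # model for the category could be the fallback (assumed helper).
        model_id = model_store.nearest_model(label, bbox)
        scene.append(SceneObject(label, model_id, bbox, application))
    return scene
```
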
  • the apparatus 10 is shown in further detail with respect to interaction with the generated 3D desktop user image.
  • the user interface 14 as described previously is configured to provide an input such as the detection of a point of contact of a finger against a touch screen or the detection of a key press or mouse movement of a "pointer" on the display.
  • This information may be passed to at least one of a desktop monitor selection processor 401, a desktop monitor interaction processor 403, a desktop monitor editing processor 405 and a desktop motion processor 407.
  • the operation of the desktop monitor selection processor 401 and the desktop monitor interaction processor 403 will be described in further detail with respect to Figures 6a and 6b hereafter.
  • Figure 6a shows the 3D desktop user interface image 181 generated by the desktop generator 109 as described previously.
  • the first object of interest labelled is the television 503 and the second object of interest labelled is the set of drawers (or filing cabinet) 501.
  • the desktop monitor selection processor 401 may monitor the input from the user interface 14 and determine and identify when there is any user interface 14 input.
  • the desktop monitor selection processor 401 may in some embodiments monitor whether the user input is attempting to select an object of interest with an associated application (a context related image). Where the desktop monitor selection processor 401 does not determine that the input is selecting a context related image, then the operation passes back to the step of determining whether or not there is a user input. Where the desktop monitor selection processor 401 does determine that the input is selecting a context image then the desktop monitor selection processor 401 may be configured to carry out further monitoring operations.
  • an object of interest with an associated application may be associated with more than one application or context.
  • the set of drawers 501 may further comprise three separate drawers or files.
  • the desk or set of drawers has three drawers, a first drawer 511, a second drawer 513 and a third drawer 515.
  • the desktop monitor selection processor 401 may adapt the three dimensional image to zoom into the object of interest in order to show or display the multiple objects of interest.
  • each object of interest may be modelled and constructed from multiple sub-object models.
  • the desk/filing cabinet/set of drawers may be an object comprising the three sub-objects of first drawer 511, second drawer 513 and third drawer 515.
  • the zoom-in to display the multiple context options is also shown in Figure 6b by step 559.
  • the desktop monitor selection processor 401 may restart the process to determine if one of the multiple context options associations have been selected.
  • the desktop monitor selection processor 401 may further, when detecting that there is only one context application association, perform that context application association. Thus the desktop monitor selection processor 401 may activate the desktop monitor interaction processor 403 which determines or recovers the application link and performs the task selected by the user.
  • the desktop monitor interaction processor 403 may determine that a video display program is to be operated. Furthermore the action of selection may further animate the object of interest. Thus selecting the TV may zoom in on the display of the TV model and show only the edge of the TV model (or no model display at all). Similarly a further selection may switch off the application and restore the "scene" to how it was before selection.
  • the set of drawers 501 as described previously has three separate drawers, and furthermore as shown in Figure 6a, the top first drawer 511 may further have multiple context application associations, such as the viewing or opening of documents, such as document A 523, document B 525 and document C 527, which when selected opens the document viewing application for the specified document, as well as the "calculator" function or application associated with the image of the calculator 521.
  • an associated function such as closing the file or drawer may in some embodiments unzoom the three dimensional image. For example "switching off" the television would disable the video program and return the view highlight to the centre of the scene.
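
The selection logic just described can be summarised in a short sketch: a touch point is hit-tested against the scene objects; an object with several sub-objects (such as the drawer unit) zooms in to expose the multiple context options, while a single association is performed directly. It reuses the SceneObject sketch above; launch_application and zoom_to are assumed callbacks.

```python
def handle_selection(scene, touch_x, touch_y, launch_application, zoom_to):
    """Dispatch a user-interface selection within the 3D desktop scene."""
    for obj in scene:
        x, y, w, h = obj.position
        if not (x <= touch_x < x + w and y <= touch_y < y + h):
            continue                        # touch outside this object
        if obj.sub_objects:                 # e.g. a set of drawers: zoom in
            zoom_to(obj)                    # display the multiple options
        else:                               # single association: perform it
            launch_application(obj.application)
        return obj
    return None                             # no context related image selected
```
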
  • the desktop monitor context editing processor 405 is configured in some embodiments to enable the user or apparatus to add, delete or edit any objects of interest within the three dimensional desktop user interface scenes in order to further personalise the 3D desktop and further add functionality to the desktop which may not be provided by the image captured by the apparatus alone.
  • the 3D desktop user interface image 181 comprising the objects of interest with context applications associated with them of the set of drawers 501 and the TV 503 may not contain all of the functionality required by the operating system or the user.
  • the desktop monitor context editing processor 405 may then depending on the user interface 14 customise the three dimensional desktop user interface scene to add functionality. For example the desktop monitor context editing processor 405 may detect from the input from the user interface 14 that the apparatus or user wishes to add a new context or application to the desktop.
  • the desktop monitor context editing processor 405 may in some embodiments then further select a new image model to be added to the three dimensional virtual desktop which is to be associated with the selected new context or application.
  • the selection of a new image model is shown in Figure 7b by step 653.
  • the selection of an image model may in some embodiments be automatic, in other words there may be a preconfigured or predetermined association between specific tasks or applications and an image model; may be semi-automatic in that a series of suggestions are provided and the user selects one of the suggestions; or may be fully manual in that the user may generate their own model to be inserted.
  • the desktop monitor context editing processor 405 in some embodiments may insert the image model with the context application associated into the three dimensional desktop user interface image 181 which is then displayed via the display 12.
  • the first image model is a clock image model 601 which may either provide a link to a time/date/alarm clock/calendar application or may actually display the current time/date to the user using an updated three dimensional clock model.
  • Figure 7a shows a second example, a business card holder 603 which may be placed on the empty coffee table and provide a link to a contacts application.
  • the user or application may add images not only of inanimate objects but also "animated" objects. For example 3D models of friends or family of the user of the apparatus may be added, which may display the status of the other party and may provide a "first call" application when the user avatar is selected.
  • a 3D image model of a friend may be "awake” if they are available, “asleep” if unavailable or shown using a model of a phone if busy on the phone.
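
A sketch of this status-dependent avatar display: the contact's presence state selects which variant of their 3D model is shown. The state names mirror the text; the model identifiers are illustrative assumptions.

```python
# Presence state -> 3D model variant, following the example in the text.
AVATAR_MODELS = {
    "available": "friend_awake.model",      # "awake" if they are available
    "unavailable": "friend_asleep.model",   # "asleep" if unavailable
    "on_call": "friend_on_phone.model",     # shown with a phone if busy on it
}

def avatar_model_for(presence_state):
    # Unknown states fall back to the "asleep"/unavailable model.
    return AVATAR_MODELS.get(presence_state, AVATAR_MODELS["unavailable"])
```
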
  • the user interface may comprise multiple linked or separate 3D scenes.
  • a room may comprise a portal such as a door or window which links together two or more three dimensional desktop images or scenes.
  • the selection of the portal which in some embodiments may be animated by the opening of the door or window, would enable the user to "move" between the three dimensional desktop images and thus produce the experience of different areas.
  • the use of multiple 3D desktop images may in some embodiments thus prevent any one desktop image from becoming too cluttered or "busy" and thus requiring the user to make overly accurate and delicate selections in order to select a particular operation or application to be carried out.
  • the desktop monitor editing processor 405 may detect that a particular object of interest or associated application is to be deleted from the 3D desktop user interface scene. Similarly in some embodiments each object of interest may be edited to change some feature or aspect of the object of interest three dimensional model or the associated application. In such embodiments the desktop monitor context editing processor 405 may determine the apparatus selecting a particular object and a change of the aspect (for example colour, shape, or associated application) dependent on the selection of the apparatus or user.
  • the desktop monitor context editing processor 405 may receive images to be added to become further three dimensional desktop scenes. For example a friend or family member may send a photo of their house or a room in their house within which their virtual "avatar" may be generated, and the desktop monitor context editing processor 405 may add a portal to the apparatus "home" virtual 3D desktop user interface scene to connect the new 3D scene to the "home" scene.
  • the new 3D scene or image may provide indications of the status of the originator of the image or scene.
  • the room scene may be in darkness if the user is unavailable or busy.
  • a further indication of such status may be provided by the portal linking the scenes in some embodiments.
  • the window may have its curtains/blinds drawn or the door may be closed or locked when the person is busy, and the window or door open when the user is available.
  • the 3D desktop user interface image 181 shown in Figure 8a comprises the objects of interest such as the set of drawers 501 and the television 503 as described previously.
  • the desktop motion processor 407 may determine from a motion sensor that the apparatus has been moved, for example the orientation of the apparatus may be detected using a solid state compass. Otherwise the desktop motion processor 407 may determine a "motion" of the 3D desktop user interface via the user interface input 14, for example the user may drag their finger across the screen indicating a relative motion panning of the desktop.
  • the desktop motion processor 407 may detect or determine the motion of the apparatus as shown in Figure 8b by step 751. Furthermore the desktop motion processor 407 may determine a relative motion of the 3D desktop user interface scene either from the motion of the apparatus or the interaction with the user interface.
  • the desktop motion processor 407 may then "move" the three dimensional desktop user interface image in the relative motion direction.
  • the image 181 may be moved to the left either by moving the physical apparatus to the right or by dragging the finger of the user from right to left across the screen. As shown in Figure 8a, the displacement may now expose further objects of interest such as a portal (door) 701.
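
A sketch of how this motion handling might convert either a finger drag or a compass-heading change into a horizontal pan of the scene, eventually exposing off-screen objects such as the door portal 701; the pixels-per-degree scale and the clamping are illustrative assumptions.

```python
class DesktopMotionPan:
    PIXELS_PER_DEGREE = 12          # tuning constant, assumed

    def __init__(self, scene_width, view_width):
        self.offset = 0             # current horizontal pan in pixels
        self.max_offset = max(0, scene_width - view_width)

    def on_drag(self, delta_x):
        """A right-to-left drag (negative delta_x) moves the image left."""
        self._pan(-delta_x)

    def on_heading_change(self, delta_degrees):
        """Rotating the apparatus pans the scene proportionally."""
        self._pan(delta_degrees * self.PIXELS_PER_DEGREE)

    def _pan(self, amount):
        # Clamp so the view never scrolls past the edges of the 3D scene.
        self.offset = max(0, min(self.max_offset, self.offset + amount))
```
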
  • the sensor determines not only the orientation or motion of the apparatus but also the physical position or location of the apparatus.
  • a satellite positioning sensor may determine the position of the apparatus from timing various signals from orbiting satellites. Similar location sensors may be implemented using radio frequency timing differences or signal strength values, such as cellular timing distances or cellular signal strengths.
  • the desktop motion processor 407 in such embodiments may determine the position of the apparatus. The determination of the position of the apparatus is shown in Figure 8c by step 771.
  • the desktop motion processor 407 may determine whether or not the current position of the apparatus is linked or associated with a particular 3D desktop user interface scene.
  • the apparatus may have more than one 3D desktop user interface image as described previously; each may be linked to a particular location so that an "office" desktop image may be linked to a first location such as the usual place of work and similarly the "home" 3D desktop user interface image may be linked to a second location such as the home of the user.
  • the desktop motion processor 407 may then display the location-linked 3D desktop user interface scene. In other words the desktop motion processor 407 may switch the three dimensional image shown to the user of the apparatus dependent on the location of the apparatus.
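
A sketch of this location-linked switching: the position reported by the satellite or cellular sensor is compared against predetermined scene locations, and the matching desktop is displayed. The coordinates, the 100 m radius and the flat-earth distance approximation are illustrative assumptions.

```python
import math

SCENE_LOCATIONS = {
    "office": (52.5200, 13.4050),   # predetermined usual place of work
    "home": (52.4900, 13.4300),     # predetermined home of the user
}

def scene_for_position(lat, lon, radius_m=100.0, default="home"):
    """Return the scene linked to the current position, if within range."""
    for scene, (slat, slon) in SCENE_LOCATIONS.items():
        # Equirectangular approximation: adequate over ~100 m distances.
        dx = math.radians(lon - slon) * math.cos(math.radians(slat)) * 6371000
        dy = math.radians(lat - slat) * 6371000
        if math.hypot(dx, dy) <= radius_m:
            return scene
    return default
```
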
  • At least one embodiment may be summarized as the operations of a method comprising: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the scenes or objects of interest within the scenes may be interactive and change their appearance dependent on their current status. For example as described previously an avatar of a further user may indicate whether or not the user is busy or available, an image of an answerphone may indicate whether or not the user has any messages waiting on the answerphone system, etc.
  • the procedure of creating the desktop is furthermore greatly simplified over the current modelling processes requiring only a single image to implement a basic user interface.
  • the embodiments as shown above may allow pre-calculating and computing on servers other than the apparatus in order to prevent the apparatus from being overloaded.
  • At least one embodiment may comprise an apparatus comprising: an identifier configured to identify at least one object within an image; a linker configured to associate the at least one object with a function; a selector configured to select a model representation of the at least one object; and a display configured to display a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
  • the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the application may be summarized as being an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
  • any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, or CD.
  • a computer-readable medium encoded with instructions that, when executed by a computer, perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process.
  • Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
  • Programs such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
  • the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
  • circuitry or circuit may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry), (b) combinations of circuits and software (and/or firmware), such as and where applicable: (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
  • processor and memory may comprise but are not limited to in this application: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.

Abstract

A method comprising identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.

Description

DESKTOP DISPLAY APPARATUS
The present application relates to a method and apparatus. In some embodiments the method and apparatus relate to desktop display technology and in particular, but not exclusively, some further embodiments relate to desktop display technology in user equipment.
Digital equipment such as personal computers, audio players and video players is supplied with a user interface which comprises an input, typically a touch sensor and/or keyboard, and an output, such as an audio or video display, configured to output information to the user. The display typically has a "desktop" element, a widely used metaphor for an image which enables the user to interact with and control the functions of the apparatus, for example by controlling functions of software programs and applications.
There have been many different graphic desktop configurations designed, for example common operating system desktops. These are typically abstract constructions bearing little resemblance to a user's actual environment. Although the user may personalise a desktop, for example by the use of personalised background images as "wallpaper" over which various icons may be placed, and although these icons may be personalised or chosen by the user themselves, such configurations typically bear little resemblance to the way the user would naturally interact with their environment. There has been further work on creating more vivid three dimensional (3D) interfaces, for example by mapping a traditional two dimensional (2D) user interface desktop containing a wallpaper and icons to a surface of a three dimensional cube or to a further object such as a three dimensional representation of a teapot (which is then displayed using the display to present the image to the user and which the user may rotate to change the view and operate the desktop). However these types of interfaces still fail to provide an intuitive or familiar interaction allowing ordinary users of diverse backgrounds to operate the device easily.
"Real-World" interfaces, for example where the user interface with a generic room where applications are displayed as real-worid objects has also been researched, however these user interfaces are difficult to personalise and typically are fixed and as such may not be modified.
This invention proceeds from the consideration that, by visualising images known to the user and assigning context and applications to elements within the image, an improved user experience may be generated for users and a more natural interaction with the device may be produced.
In a first aspect of the present invention there is provided a method comprising: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
The method may further comprise capturing the image using a camera.
Identifying at least one object within an image may comprise: segmenting the image into at least two image blocks; and identifying at least one object within at least one of the at least two image blocks.
The method may further comprise: identifying at least one further object within at least one further image block; selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image. The method may further comprise editing the model representation of the at least one object.
The method may further comprise adding to the user interface image at least one further model representation, wherein a further function is associated with the model representation.
The method may further comprise deleting from the user interface image at least one model representation.
The method may further comprise editing the function associated with the model representation displayed in the user interface image.
The method may further comprise: displaying a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation.
The method may further comprise: identifying at least one another object within another image; associating the at least one another object with the at least one another function; and selecting the another model representation of the at least one another object.
The at least one function associated with the model representation may comprise selecting at least one further user interface image for display.
The model representation may comprise at least two sub-model components, wherein each of the model components is preferably associated with at least one function.
The method may further comprise: selecting the model representation within the user interface image; and performing the function associated with the model representation selected. The method may further comprise: changing the displayed model representation on selecting the model representation within the user interface image.
The model representation may comprise at least one of: an animate object; and an inanimate object.
The function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
According to a second aspect of the invention there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
The at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform capturing the image using a camera.
Identifying at least one object within an image causes the apparatus at least to preferably perform: segmenting the image into at least two image blocks; and identifying at least one object within at least one of the at least two image blocks.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: identifying at least one further object within at least one further image block; and selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform editing the model representation of the at least one object.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform adding to the user interface image at least one further model representation, wherein a further function is associated with the model representation.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform deleting from the user interface image at least one model representation.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform editing the function associated with the model representation displayed in the user interface image.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform displaying a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: identifying at least one another object within another image; associating the at least one another object with the at least one another function; and selecting the another model representation of the at least one another object.
The apparatus, wherein the at least one function associated with the model representation causes the apparatus to preferably further perform selecting at least one further user interface image for display.
The model representation may comprise at least two sub-model components, wherein each of the model components is preferably associated with at least one function.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: selecting the model representation within the user interface image; and performing the function associated with the model representation selected.
The apparatus, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to preferably further perform: changing the displayed model representation on selecting the model representation within the user interface image.
The model representation may comprise at least one of: an animate object; and an inanimate object.
The function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
According to a third aspect of the invention there is provided an apparatus comprising: an identifier configured to identify at least one object within an image; a linker configured to associate the at least one object with a function; a selector configured to select a model representation of the at least one object; and a display configured to display a user interface image comprising the model representation, wherein the function is associated with the model representation.
The apparatus may further comprise: a camera configured to capture the image.
The identifier may comprise: an image segmenter configured to segment the image into at least two image blocks; and a segment identifier configured to identify at least one object within at least one of the at least two image blocks.
The segment identifier is preferably further configured to identify at least one further object within at least one further image block; the selector is preferably further configured to select a further model representation of the at least one further object, and wherein the display is preferably further configured to display the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
The apparatus may further comprise a model editor configured to edit the model representation of the at least one object.
The apparatus may further comprise a model inserter configured to add to the user interface image at least one further model representation.
The apparatus may further comprise a model remover configured to delete from the user interface image at least one model representation.
The apparatus may further comprise a function editor configured to edit the function associated with the model representation displayed in the user interface image.
The display is preferably further configured to display a further user interface image comprising at least one another model representation, wherein at least one another function is associated with the another model representation. The identifier is preferably further configured to identify at least one another object within another image; the linker is preferably further configured to associate the at least one another object with the at least one another function; and the selector is preferably further configured to select the another model representation of the at least one another object.
The at least one function associated with the model representation may comprise selecting at least one further user interface image for display.
The model representation may comprise at least two sub-model components, wherein the linker is further preferably configured to associate each of the model components with at least one function.
The apparatus may further comprise: an input determiner configured to determine a selection of at least one model representation within the user interface image; and a function processor configured to perform a function associated with the model representation selected.
The display is preferably further configured to change the displayed model representation selected on selecting the model representation within the user interface image.
The model representation may comprise at least one of: an animate object; and an inanimate object.
The function may comprise at least one of: a file access function; a time function; a calendar function; and a status display function.
According to a fourth aspect of the invention there is provided an apparatus comprising: identifying means for identifying at least one object within an image; linking means for associating the at least one object with a function; selection means for selecting a model representation of the at least one object; and display means for displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
According to a fifth aspect of the invention there is provided a computer-readable medium encoded with instructions that, when executed by a computer perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
An electronic device may comprise apparatus as described above.
A chipset may comprise apparatus as described above.
For a better understanding of the present application and as to how the same may be carried into effect, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows schematically an electronic device employing embodiments of the application;
Figure 2 shows schematically a 3D user interface desktop interaction operation;
Figure 3 shows schematically an electronic device as shown in Figure 1 in further detail with respect to the generation of the user interface desktop as implemented in some embodiments;
Figure 4 shows a flow diagram featuring the operations of generation of the virtual 3D desktop as shown in Figures 2 and 3 according to some embodiments;
Figure 5 shows schematically the electronic device as shown in Figure 1 with respect to the interaction and editing of the 3D desktop user interface according to some embodiments;
Figure 6a shows an overview of the operation and interaction with the 3D virtual desktop user interface according to some embodiments; Figure 6b shows a flow diagram for the operation of interaction with the 3D desktop user interface according to some embodiments;
Figure 7a shows a schematic overview of editing the 3D user interface desktop according to some embodiments;
Figure 7b shows a flow diagram of the addition of new context/applications to the 3D desktop user interface according to some embodiments;
Figure 8a shows a schematic overview of the "change of view" interactions within the 3D desktop user interface according to some embodiments;
Figure 8b shows a flow diagram of the operations of performing the "change of view" within the 3D virtual desktop according to some embodiments; and Figure 8c shows a flow diagram of the operations for linking different 3D desktop user interface images according to some embodiments.
The following describes apparatus and methods for the provision of improved user interface techniques. In this regard reference is first made to Figure 1 which discloses a schematic block diagram of an exemplary electronic device 10 or apparatus. The electronic device is configured to perform improved UI techniques according to some embodiments of the application.
The apparatus 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system. In other embodiments, the electronic device is a digital camera.
The apparatus 10 comprises an integrated camera module 11, which is linked to a processor 15. The processor 15 is further linked to a display 12. The processor 15 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 14 and to a memory 16. In some embodiments, the camera module 11 and/or the display 12 is separate from the electronic device and the processor receives signals from the camera module 11 via the transceiver 13 or another suitable interface.
The processor 15 may be configured to execute various program codes 17. The implemented program codes 17, in some embodiments, comprise image capture digital processing or configuration code. The implemented program codes 17 in some embodiments further comprise additional code for further processing of images/desktop. The implemented program codes 17 may in some embodiments be stored for example in the memory 16 for retrieval by the processor 15 whenever needed. The memory 16 in some embodiments may further provide a section 18 for storing data, for example data that has been processed in accordance with the application.
The camera module 11 comprises a camera 19 having a lens for focussing an image on to a digital image capture means such as a charged coupled device (CCD). The camera module 11 further comprises a flash lamp 20 for illuminating an object before capturing an image of the object. The flash lamp is linked to the camera processor. The camera 19 is also linked to a camera processor 21 for processing signals received from the camera. The camera processor 21 is linked to camera memory 22 which may store program codes for the camera processor 21 to execute when capturing an image. The implemented program codes (not shown) may in some embodiments be stored for example in the camera memory 22 for retrieval by the camera processor 21 whenever needed. In some embodiments the camera processor 21 and the camera memory 22 are the processor 15 and the memory 16 respectively.
The apparatus capable of implementing improved user interface techniques may in some embodiments be implemented at least partially in hardware without the need of software or firmware.
The user interface 14 in some embodiments enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the apparatus 10, for example via the display 12. The transceiver 13 enables a communication with other electronic devices, for example via a wireless communication network.
A user of the apparatus 10 may use the camera module 11 for capturing an image that is to be transmitted to some other electronic device, for example by using the transceiver 13, or that is to be stored in the data section 18 of the memory 16. A corresponding application in some embodiments may be activated to this end by the user via the user interface 14. This application, which may in some embodiments be run by the processor 15, causes the processor 15 to execute the code stored in the memory 16. In some embodiments the digital image could be stored in the data section 18 of the memory 16, for instance for a later transmission or for a later processing by the same apparatus 10.
The apparatus 10 may in some embodiments also receive a digital image for processing from another apparatus via its transceiver 13. In these embodiments, the processor 15 executes the processing program code stored in the memory 16. The processor 15 may then in these embodiments process the received image in the same way as described with reference to Figures 2 to 5, 6a, 6b, 7a, 7b, and 8c. Execution of the processing program code could in some embodiments be triggered as well by an application that has been called by the user via the user interface 14.
It is to be understood again that the structure of the apparatus 10 could be supplemented and varied in many ways. It would be further appreciated that the schematic structures described in Figures 3 and 5 and the operations in Figures 2, 4, 6a, 6b, 7a, 7b, 8a, 8b, and 8c represent only a part of the operation of a complete system comprising some embodiments of the application as implemented in the apparatus shown in Figure 1.
With respect to Figure 2, the generation of a three dimensional personalised desktop from a two dimensional image is shown in overview. In the following embodiments, rather than constructing a true three dimensional theme of realistic forms, key context information is extracted from the image and the desktop generated by picking the corresponding material from a three dimensional model pool.
It will be described further that this process thus enables the three dimensional desktop to be automatically created from the main "interesting objects" in the image. The user of the apparatus may therefore simply create the personal desktop by capturing one picture which is personal to the user, for example a picture of a room in the house or office.
Furthermore, as will be described in some embodiments, the generated three dimensional desktop may be regarded as the avatar for the user's real life working with the apparatus. This living environment may in some embodiments thus enable the user to feel at ease with the use of the desktop image, as the layout and style are known to the user.
Furthermore in some embodiments, as the three dimensional objects within the desktop intuitively link to applications with similar real life meaning, a better user experience may be provided especially for inexperienced or older users of the apparatus. For example a set of drawers or filing cabinet may be associated with file management, a photo frame may be associated with photo images, a TV may be associated with playing videos, etc.
Furthermore in some embodiments, multiple three dimensional images or scenes may be linked and each associated with a different use context. Portals such as doors or windows may be used to connect the different scenes so that selecting or "opening" a door or window would cause the desktop to change from one scene to another.
Furthermore in some embodiments, the scenes may be context aware or controlled. For example the apparatus may be configured with sensors to determine absolute or relative position and select or display the three dimensional desktop image according to the sensor information, so that the user may have a "home" themed three dimensional desktop scene and an "office" themed three dimensional desktop scene, and the apparatus may comprise position sensors for controlling the selection of the "home" scene when the sensor determines that the apparatus is at a predetermined "home" location and the "office" theme when the sensor determines that the apparatus is at a predetermined "office" location. Furthermore in other embodiments, as described further, the three dimensional theme may be processed or interacted with according to the orientation of the apparatus, for example where the apparatus comprises a compass the desktop may be orientated dependent upon the compass direction.
In some embodiments the three dimensional desktop images may be shared with others using the transceiver so that scenes may be imported/exported from friends and family. Opening a door or window (or other portal) may therefore switch the user's 3D desktop to another person's 3D desktop in order to visit the other desktop user interface and determine if the other user is available to talk or message.
Figure 2 shows an overview of the actions which in some embodiments may generate a 3D desktop scene.
As is shown in Figure 2, an example room 151 is shown which comprises a set of drawers (or filing cabinet with drawers) 153, a TV 155, a computer monitor/picture frame 157, a printer 159, a message board 161 and a coffee table 163. As is shown in Figure 2, the room may be photographed using the camera module 11 to capture a 2D room image 171. The two dimensional room image 171 may then be modelled to generate the three dimensional user interface desktop scene 181.
With respect to Figure 3 apparatus to implement the generation of the 3D desktop user interface scene 181 from the room 151 via the captured room 2D image 171 according to some embodiments of the application are shown.
The apparatus 10 may comprise a camera module 11 as described previously. The camera module may pass image data such as the captured room 2D image 171 to the image segmenter 101. The image segmenter 101 may be further connected to output segmented image data to a classifier/trainer 103 (and further in some embodiments also to a segment categoriser 105). The classifier/trainer 103 may further communicate with the segment categoriser 105. The segment categoriser 105 may further communicate with the context assigner 107. The context assigner 107 may further communicate data with the context selector 106. The context assigner 107 may be further connected to the desktop image generator 109. The desktop image generator 109 may further communicate image model data with the image model storage 108.
It would be understood that in some embodiments, all except the camera module may be implemented within the processor as shown in Figure 1. However, in some embodiments, the features described hereafter may be implemented partially within the processor and partially as elements in their own right.
In some embodiments at least one of the classifier/trainer 103, the segment categoriser 105 and the image model storage 108 may be implemented at a further apparatus, such as a server, which is in communication with the apparatus via the transceiver 13.
With respect to Figure 4, the operations carried out in some embodiments of the application to generate the three dimensional desktop user interface image scene from the two dimensional image as implemented by the elements shown in Figure 3 are shown.
The camera module 11 may capture the image of the room in a manner as described previously.
The image data may be passed in some embodiments to the image segmenter 101.
The operation of capturing the image is shown in Figure 4 by step 201.
The image segmenter 101 in some embodiments after receiving the 2D image from the camera module 11 then performs a segmentation of the image. In some embodiments a mean shift algorithm may be applied to the image to generate a set of image patches. Each image patch may then be sent to the segment categoriser 105.
The operation of image segmentation of the 2D room image 171 is shown in Figure 4 by step 203.
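By way of illustration only, the mean shift segmentation of step 203 might be sketched as follows. This is a minimal sketch, assuming scikit-learn's MeanShift clustering over joint colour/position pixel features; the feature layout and bandwidth estimation are illustrative choices and not part of the described embodiments.

```python
# Minimal sketch of step 203: mean-shift segmentation of the 2D room image.
# Assumes numpy and scikit-learn; clustering full-resolution images this way
# is slow, so a real implementation would subsample or use an optimised variant.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_image(image_rgb):
    """Return a label map assigning every pixel to an image patch."""
    h, w, _ = image_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Cluster in joint (colour, position) space so patches stay spatially coherent.
    features = np.column_stack([
        image_rgb.reshape(-1, 3).astype(float),
        xs.reshape(-1, 1),
        ys.reshape(-1, 1),
    ])
    bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
    return labels.reshape(h, w)  # each distinct label is one image patch
```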
Furthermore a classifier/trainer 103 may be connected to the segment categoriser 105. In some embodiments the classifier/trainer 103 defines common object categories (for example televisions, clocks, drawers and filing furniture) in order to generate a set of sample pool elements. In some embodiments the classifier/trainer 103 trains a pattern recognition application which in some embodiments may be implemented for example as an artificial neural network (ANN) or Gaussian mixture model (GMM). By training the pattern recognition application to identify various objects from a set of images taken at various angles, the classifier/trainer, when later exposed to an image patch comprising a partial or complete object, may "recognise" and identify the object. In some embodiments the pattern recognition may be trained automatically whereas in some embodiments the pattern recogniser may be assisted. Furthermore in some embodiments the common objects may be linked or associated with a label. For example, where an image of a TV is used to train the pattern recogniser, the label of "television" is associated with the image. In some embodiments, more than one type of common object may be trained. For example, the classifier/trainer 103 may be trained to recognise different types of TV and so identify both cathode ray tube (CRT) and flat screen technology (FST) television images.
In some embodiments the user may train the classifier/trainer 103 by passing to it captured images with identification labels. For example the camera module may capture an image of a TV and send it to the classifier/trainer with the label "TV" to be learnt.
The operation of training is shown in Figure 4 by step 205. In some embodiments the definition of the common object categories such as "TV" or "clock" and the training of the classifier may be carried out "offline", in other words the training of the classifier is carried out prior to the capturing of the image and the segmentation of the image. In some further embodiments, the training of the classifier operation may be carried out as described earlier in apparatus separate from the current apparatus. For example the definition and training operations may be carried out in a "remote" server and the database of trained categories also stored remotely from a handset user equipment, with the apparatus transceiver communicating with the "remote" server.
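A hedged sketch of the training of step 205 is given below. The text above mentions ANN or GMM recognisers; here a small scikit-learn MLPClassifier trained on colour-histogram features stands in, and both the feature descriptor and the network size are assumptions made purely for illustration.

```python
# Illustrative sketch of step 205: training a pattern recogniser on labelled
# example images ("TV", "clock", ...). The colour-histogram descriptor is an
# assumed stand-in for whatever features a real recogniser would use.
import numpy as np
from sklearn.neural_network import MLPClassifier

def patch_features(patch_rgb, bins=8):
    """Crude patch descriptor: a normalised 3D colour histogram."""
    hist, _ = np.histogramdd(patch_rgb.reshape(-1, 3),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def train_classifier(labelled_patches):
    """labelled_patches: iterable of (patch_rgb, label) pairs."""
    X = np.array([patch_features(patch) for patch, _ in labelled_patches])
    y = [label for _, label in labelled_patches]
    return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```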
The segment categoriser 105 having received the segmented image patches attempts to identify whether an image patch comprises an object of interest. The segment categoriser 105 may therefore, in some embodiments using the pattern recognition application, search a database or artificial neural network to determine whether or not the image patch contains an object of interest, for example a TV or other furniture type, and identify any of the objects. The segment categoriser 105 in some embodiments may request this information from the classifier/trainer 103, which may then output the label associated with the near match to the segment categoriser so that the segment categoriser 105 may associate the image patch with the label.
When the segment categoriser 105 determines an object of interest within the image patch, the segment categoriser may output this data to the context assigner 107.
The operation of segment categorisation is shown in Figure 4 by step 207.
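The categorisation of step 207 might then, purely as a sketch reusing the patch_features helper from the previous sketch, look like the following; the confidence threshold is an assumed parameter.

```python
# Sketch of step 207: label each segmented patch and keep confident matches
# as objects of interest. Reuses patch_features from the training sketch above.
def categorise_patches(classifier, patches, threshold=0.6):
    objects_of_interest = []
    for patch in patches:
        probs = classifier.predict_proba([patch_features(patch)])[0]
        best = probs.argmax()
        if probs[best] >= threshold:  # confident enough to be "of interest"
            objects_of_interest.append((patch, classifier.classes_[best]))
    return objects_of_interest
```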
The context selector 106 is further configured in some embodiments to select a series of contexts or applications which the user wishes to implement within the user interface desktop. For example the context selector 106 may determine that the user wishes to be able to carry out certain basic file selection and manipulation tasks, view video files and view image data, as well as having a clock/calendar function. The context selector 106 dependent on the embodiments may select context/applications automatically, select context/applications manually by the user inputting the context/applications they wish to implement within the user interface, or select context/applications semi-automatically, for example an initial selection being generated by the operating system which is further configured by the user. The context selector 106 may in some embodiments provide a series of application links associated with the context or application, for example the context selector may provide a link to a specific image viewing program or code.
The context assigner 107 having received the selected context/applications the apparatus wishes to implement within the 3D desktop image may furthermore receive from the segment categoriser 105 the identification of determined objects of interest within the image patches. The context assigner 107 may then assign in some embodiments a context/application to an object of interest. In some embodiments, this assignment is carried out using preconfigured rules. For example any TV identified may be associated with video viewing software, a file viewing programme may be assigned to the drawers in the filing cabinet, etc. In some embodiments the user of the apparatus may manually or semi-automatically configure the assignment of the object of interest to the context/application.
The selection and assignment of the context/applications to the objects of interest are shown in Figure 4 by step 209.
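The preconfigured rules mentioned above might be sketched as a simple lookup table; the label strings and application names below are hypothetical examples, not names used by the described embodiments.

```python
# Sketch of step 209: rule-based assignment of context/applications to
# identified objects of interest. All labels and application names are
# hypothetical placeholders.
DEFAULT_RULES = {
    "TV": "video_player",
    "drawers": "file_manager",
    "photo_frame": "image_viewer",
    "clock": "calendar",
}

def assign_contexts(objects_of_interest, rules=DEFAULT_RULES):
    """Pair each (patch, label) with the application its label maps to."""
    return [(patch, label, rules[label])
            for patch, label in objects_of_interest if label in rules]
```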
The assignment of context/applications to objects of interest information is then passed to the desktop generator 109. The desktop generator 109 furthermore is configured to receive data from an image model storage 108.
The image model storage 108 in some embodiments is associated with the classifier/trainer 103 so that each of the classified images has a three dimensional model image associated with it, such that when the segment categoriser 105 determines an object of interest which is passed to the desktop generator, the desktop generator may enquire from the image model storage 108 the associated three dimensional model image. The selection of images from the image model storage is shown in Figure 4 by step 211. In some embodiments each image patch object of interest may have an associated colour, shape, layout and texture which may also be used to generate or be looked up from the image model storage 108. In some embodiments where an identical model may not be selected from the image model storage 108, a near match of the object of interest from the image patch may be selected. In some embodiments there may be some degree of manual selection from the model pool or image model storage 108 so that the user may select, for example, a different television or other furniture type to replace the television or other furniture type captured within the segment or image patch.
The desktop generator 109 may then generate a three dimensional model comprising each of the identified objects of interest and furthermore any of the objects of interest assigned context/applications. The desktop generated by the desktop generator 109 may then be output to the display.
Thus the selection of the objects of interest from the three dimensional models enables the personalised desktop to be generated through the technology of three dimensional rendering of the models. Furthermore the links between the objects and corresponding applications may also be built. The operation of generating the 3D scene from the image models selected is shown in Figure 4 by step 213.
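Steps 211 and 213 might be sketched as below, representing the image model storage 108 as a dictionary of 3D assets and the near-match selection as a simple string comparison; both are assumptions made only to make the flow concrete.

```python
# Sketch of steps 211 and 213: look up a 3D model for each object of interest
# (with a naive near-match fallback) and assemble the desktop scene. The
# model pool contents and file paths are hypothetical.
import difflib

MODEL_POOL = {
    "TV": "models/tv.obj",
    "drawers": "models/drawers.obj",
    "clock": "models/clock.obj",
}

def select_model(label, pool=MODEL_POOL):
    if label in pool:
        return pool[label]
    near = difflib.get_close_matches(label, list(pool), n=1)  # near match
    return pool[near[0]] if near else None

def build_desktop(assigned_objects):
    """Return model/function pairs ready for 3D rendering and linking."""
    scene = []
    for patch, label, function in assigned_objects:
        model = select_model(label)
        if model is not None:
            scene.append({"model": model, "function": function})
    return scene
```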
With respect to Figure 5 the apparatus 10 is shown in further detail with respect to interaction with the generated 3D desktop user interface image. The user interface 14 as described previously is configured to provide an input such as the detection of a point of contact of a finger against a touch screen or the detection of a key press or mouse movement moving a "pointer" over the display. This information may be passed to at least one of a desktop monitor selection processor 401, a desktop monitor interaction processor 403, a desktop monitor context editing processor 405 and a desktop motion processor 407. The operation of the desktop monitor selection processor 401 and desktop monitor interaction processor 403 will be described in further detail with respect to Figures 6a and 6b hereafter. With respect to Figures 6a and 6b, an example of the operation carried out by the desktop monitor selection processor 401 and the desktop monitor interaction processor 403 in light of a sensed input from the user interface 14 according to some embodiments is described in further detail. Figure 6a, for example, shows the 3D desktop user interface image 181 generated by the desktop generator 109 as described previously. Within the 3D image or scene there are two objects of interest with associated labels shown in this example. The first labelled object of interest is the television 503 and the second labelled object of interest is the set of drawers (or filing cabinet) 501.
The desktop monitor selection processor 401 may monitor the input from the user interface 14 and determine and identify when there is any user interface 14 input.
The determination of the user interface input is shown in Figure 6b by step 551.
Furthermore the desktop monitor selection processor 401 may in some embodiments monitor whether the user input is attempting to select an object of interest with an associated application (a context related image). Where the desktop monitor selection processor 401 does not determine that the input is selecting a context related image, then the operation passes back to the step of determining whether or not there is a user input. Where the desktop monitor selection processor 401 does determine that the input is selecting a context related image then the desktop monitor selection processor 401 may be configured to carry out further monitoring operations.
The operation of determining whether or not the input is selecting a context related image is shown in Figure 6b by step 553.
In some embodiments of the application an object of interest with an associated application may be associated with more than one application or context. For example the set of drawers 501 may further comprise three separate drawers or files. Thus as shown in Figure 6a the desk or set of drawers has three drawers, a first drawer 511, a second drawer 513 and a third drawer 515.
In these embodiments when the desktop monitor selection processor 401 determines that the input is selecting a context associated image which has a multiple context application association, the desktop monitor selection processor 401 may adapt the three dimensional image to zoom into the object of interest in order to show or display the multiple objects of interest. In some embodiments thus each object of interest may be modelled and constructed from multiple sub-object models. As described above the desk/filing cabinet/set of drawers may be an object comprising the three sub-objects of first drawer 511, second drawer 513 and third drawer 515.
The determination of multiple context application associations is shown in Figure 6b by step 555.
The zoom-in to display the multiple context options is also shown in Figure 6b by step 559. Once the display has been zoomed, the desktop monitor selection processor 401 may restart the process to determine if one of the multiple context option associations has been selected.
In some embodiments, the desktop monitor selection processor 401 may further, when detecting that there is only one context application association, perform that context application association. Thus the desktop monitor selection processor 401 may activate the desktop monitor interaction processor 403 which determines or recovers the application link and performs the task selected by the user.
The performance of the context application association is shown in Figure 6b by step 557.
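The branching of Figure 6b might be sketched as follows; the object structure and the zoom_to and launch callbacks are assumptions standing in for the display and application machinery the text leaves abstract.

```python
# Sketch of the Figure 6b selection logic: zoom in when an object has
# multiple context associations (step 559), otherwise perform the single
# associated function (step 557). Structures and callbacks are assumed.
def on_select(obj, zoom_to, launch):
    sub_objects = obj.get("sub_objects", [])
    functions = obj.get("functions", [])
    if sub_objects:            # multiple associations: zoom to expose them
        zoom_to(obj)
    elif functions:            # single association: perform it directly
        launch(functions[0])
```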
Thus, for example with respect to Figure 6a, when the user selects the television screen, the desktop monitor interaction processor 403 may determine that a video display program is to be operated. Furthermore the action of selection may further animate the object of interest. Thus selecting the TV may zoom the display in on the TV model and show only the edge of the TV model (or no model at all). Similarly a further selection may switch off the application and restore the "scene" to how it was before selection.
Furthermore as also shown in Figure 6a, there may be more than one level of multiple context application association. The set of drawers 501 as described previously has three separate drawers, and furthermore as shown in Figure 6a, the top first drawer 511 may further have multiple context application associations such as the viewing or opening of documents, such as document A 523, document B 525 and document C 527, which when selected open the document viewing application for the specified document, as well as the "calculator" function or application associated with the image of the calculator 521.
It would be understood that an associated function such as closing the file or drawer may in some embodiments unzoom the three dimensional image. For example "switching off" the television would disable the video program and return the view highlight to the centre of the scene.
With respect to Figures 7a and 7b the editing of the scene generated is described in further detail below. The desktop monitor context editing processor 405 is configured in some embodiments to enable the user or apparatus to add, delete or edit any objects of interest within the three dimensional desktop user interface scenes in order to further personalise the 3D desktop and further add functionality to the desktop which may not be provided by the image captured by the apparatus alone.
For example, as shown in Figure 7a, the 3D desktop user interface image 181 comprising the objects of interest with context applications associated with them, the set of drawers 501 and the TV 503, may not contain all of the functionality required by the operating system or the user. The desktop monitor context editing processor 405 may then, depending on the input from the user interface 14, customise the three dimensional desktop user interface scene to add functionality. For example the desktop monitor context editing processor 405 may detect from the input from the user interface 14 that the apparatus or user wishes to add a new context or application to the desktop.
The determination of the selection of the new context/application is shown in Figure 7b by step 651.
The desktop monitor context editing processor 405 may in some embodiments then further select a new image model to be added to the three dimensional virtual desktop which is to be associated with the selected new context or application. The selection of a new image model is shown in Figure 7b by step 653. The selection of an image model may in some embodiments be automatic, in other words there may be a preconfigured or predetermined association between specific tasks or applications and an image model; may be semi-automatic in that a series of suggestions are provided and the user selects one of the suggestions; or may be fully manual in that the user may generate their own model to be inserted.
Furthermore the desktop monitor context editing processor 405 in some embodiments may insert the image model with the associated context application into the three dimensional desktop user interface image 181 which is then displayed via the display 12.
The insertion of the image model with the context application associated is shown in Figure 7b by step 655.
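As a sketch only, the add and delete edits of Figure 7b could operate on the scene list of the earlier build_desktop sketch; the position field is an added assumption.

```python
# Sketch of Figure 7b editing: add a new model with an associated function
# (steps 651-655), or delete an existing one. The scene structure follows
# the build_desktop sketch above.
def add_context(scene, model, function, position=None):
    scene.append({"model": model, "function": function, "position": position})
    return scene

def remove_context(scene, model):
    return [entry for entry in scene if entry["model"] != model]
```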
For example as shown in Figure 7a, two examples of newly added context applications with associated image models are shown. The first image model is a clock image model 601 which may either provide a link to a time/date/alarm clock/calendar application or may actually display the current time/date to the user using an updated three dimensional clock model. Furthermore Figure 7a shows a second example, a business card holder 603 which may be placed on the empty coffee table and provide a link to a contacts application. In some embodiments the user or application may add images not only of inanimate objects but also of "animated" objects. For example 3D models of friends or family of the user of the apparatus may be added, which may display the status of the other party and which, when the user avatar is selected, may provide a "first call" application.
Thus in these embodiments a 3D image model of a friend may be "awake" if they are available, "asleep" if unavailable or shown using a model of a phone if busy on the phone.
Furthermore although the above examples have been shown using only a single room image or scene, in some embodiments the user interface may comprise multiple linked or separate 3D scenes. In such embodiments a room may comprise a portal such as a door or window which links together two or more three dimensional desktop images or scenes. The selection of the portal, which in some embodiments may be animated by the opening of the door or window, would enable the user to "move" between the three dimensional desktop images and thus produce the experience of different areas. The use of multiple 3D desktop images may in some embodiments thus prevent any one desktop image from becoming too cluttered or "busy" and thus requiring the user to make overly accurate and delicate selections in order to select a particular operation or application to be carried out.
Although the above example has been described with regards to the task of adding a context associated model image, it would be appreciated that a similar deletion operation may be implemented in some embodiments of the application. In these embodiments the desktop monitor context editing processor 405 may detect that a particular object of interest or associated application is to be deleted from the 3D desktop user interface scene. Similarly in some embodiments each object of interest may be edited to change some feature or aspect of the object of interest three dimensional model or the associated application. In such embodiments the desktop monitor context editing processor 405 may determine that the apparatus or user has selected a particular object and change the aspect (for example colour, shape, or associated application) dependent on the selection of the apparatus or user.
In some embodiments the desktop monitor context editing processor 405 may receive images to be added to become further three dimensional desktop scenes. For example a friend or family member may send a photo of their house or a room in a house within which their virtual "avatar" may be generated, and the desktop monitor context editing processor 405 may add a portal to the apparatus "home" virtual 3D desktop user interface scene to connect the new 3D scene to the "home" scene. In such embodiments the new 3D scene or image may provide indications of the status of the originator of the image or scene. For example the room scene may be in darkness if the user is unavailable or busy. A further indication of such status may be provided by the portal linking the scenes in some embodiments. For example the window may have its curtains/blinds drawn or the door may be closed or locked when the person is busy, and the window or door open when the user is available.
With respect to Figure 8, further interactions with the 3D desktop user interface scenes are shown in detail. The 3D desktop user interface image 181 shown in Figure 8a comprises the objects of interest such as the set of drawers 501 and the television 503 as described previously.
The desktop motion processor 407 may determine from a motion sensor that the apparatus has been moved, for example the orientation of the apparatus may be detected using a solid state compass. Otherwise the desktop motion processor 407 may determine a "motion" of the 3D desktop user interface via the user interface input 14, for example the user may drag their finger across the screen indicating a relative motion panning of the desktop.
Thus the desktop motion processor 407 may detect or determine the motion of apparatus as shown in Figure 8b by step 751. Furthermore the desktop motion processor 407 may determine a relative motion of the 3D desktop user interface scene either from the motion of the apparatus or the interaction with the user interface.
The determination of the relative motion of the 3D desktop user interface scene is shown in Figure 8b by step 753.
Furthermore the desktop motion processor 407 may then "move" the three dimensional desktop user interface image in the relative motion direction.
The application of the relative displacement change is shown in Figure 8b by step 755.
As shown in Figure 8a, the image 181 may be moved to the left either by moving the physical apparatus to the right or by dragging the finger of the user from right to left across the screen. As shown in Figure 8a, the displacement may now expose further objects of interest such as a portal (door) 701.
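A minimal sketch of the panning of Figure 8b follows, assuming a compass heading as the motion input and an arbitrary degrees-to-pixels scale.

```python
# Sketch of Figure 8b: convert a change in compass heading (or an equivalent
# drag gesture) into a relative horizontal pan of the scene. The
# pixels_per_degree constant is an assumed tuning value.
def pan_offset(old_heading_deg, new_heading_deg, pixels_per_degree=10.0):
    # Wrap the heading change into [-180, 180) so the pan takes the short way.
    delta = (new_heading_deg - old_heading_deg + 180.0) % 360.0 - 180.0
    return -delta * pixels_per_degree  # scene moves opposite to the turn
```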
With respect to Figure 8c, a further context "motion" related change to the 3D desktop user interface scene is described in detail. In such embodiments the sensor determines not only the orientation or motion of the apparatus but also the physical position or location of the apparatus. For example a satellite positioning sensor may determine the position of the apparatus from the timing of various signals from orbiting satellites; similar location sensors may be implemented using radio frequency timing differences or signal strength values, such as cellular timing distances or cellular signal strengths.
The desktop motion processor 407 in such embodiments may determine the position of the apparatus. The determination of the position of the apparatus is shown in Figure 8c by step 771.
Furthermore the desktop motion processor 407 may determine whether or not the current position of the apparatus is linked or associated with a particular 3D desktop user interface scene. In other words, where in some embodiments the apparatus may have more than one 3D desktop user interface image as described previously, each may be linked to a particular location so that an "office" desktop image may be linked to a first location such as the usual place of work and similarly the "home" 3D desktop user interface image may be linked to a second location such as the home of the user.
The determination of the destination desktop associated with the physical position of the apparatus is shown in Figure 8c by step 773.
The desktop motion processor 407 may then display the location linked 3D desktop user interface scene. In other words the desktop motion processor 407 may switch the three dimensional image shown to the user of the apparatus dependent on the location of the apparatus.
The application of the linked desktop image is shown in Figure 8c by step 775.
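The location-to-scene linkage of Figure 8c might be sketched as a simple proximity test; the coordinates, radius, and default scene below are all illustrative assumptions.

```python
# Sketch of Figure 8c (steps 771-775): pick the desktop scene linked to the
# current position. The anchors use a naive degree-space distance, which is
# fine as an illustration but not geodesically accurate.
import math

SCENE_LOCATIONS = {          # hypothetical (latitude, longitude) anchors
    "home": (60.17, 24.94),
    "office": (60.19, 24.96),
}

def select_scene(lat, lon, radius_deg=0.005, default="home"):
    for name, (slat, slon) in SCENE_LOCATIONS.items():
        if math.hypot(lat - slat, lon - slon) <= radius_deg:
            return name
    return default
```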
In summary at least one embodiment may be summarized as the operations of a method comprising: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.

In some embodiments, the scenes or objects of interest within the scenes may be interactive and change their appearance dependent on their current status. For example as described previously an avatar of a further user may indicate whether or not that user is busy or available, an image of an answer phone may indicate whether or not the user has any messages waiting on the answer phone system, etc.
The advantages of such a system are that the user may personalise the user interface with items and physical locations with which they are familiar, and that elements which are associated with the user's daily life habits may easily be implemented so that the user can simply find and use applications. Furthermore the interaction provides a very familiar and intuitive user interface.
The procedure of creating the desktop is furthermore greatly simplified over current modelling processes, requiring only a single image to implement a basic user interface. Furthermore the embodiments as shown above may allow pre-calculating and computing on servers other than the apparatus in order to prevent the apparatus from being overloaded.
Thus in summary at least one embodiment may comprise an apparatus comprising: an identifier configured to identify at least one object within an image; a linker configured to associate the at least one object with a function; a selector configured to select a model representation of the at least one object; and a display configured to display a user interface image comprising the model representation, wherein the function is associated with the model representation.
It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. Thus in at least one embodiment the application may be summarized as being an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as for example DVD and the data variants thereof, or CD.
Therefore in at least one embodiment there may be a computer-readable medium encoded with instructions that, when executed by a computer perform: identifying at least one object within an image; associating the at least one object with a function; selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California, automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
As used in this application, the term circuitry or circuit may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as and where applicable: (i) a combination of processor(s), or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone, or a similar integrated circuit in a server, a cellular network device, or other network device.
As used in this application, the terms processor and memory may comprise, but are not limited to: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.
The foregoing description has provided, by way of exemplary and non-limiting examples, a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. Nevertheless, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.
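Purely as an illustrative aside, the segmentation-based identification described in the embodiments (dividing the captured image into image blocks, identifying an object within a block, and displaying the selected model representations with a geometrical relationship consistent with the source image) might be sketched in Python as follows. The names segment_image, identify_in_block and layout_models, and the simple grid segmentation, are hypothetical choices made for this sketch only and are not part of the claimed subject matter.

# Hypothetical sketch: segment an image into blocks, identify an object
# per block, and keep each selected model at its block's position so the
# on-screen arrangement mirrors the geometry of the source image.

def segment_image(image, rows=2, cols=2):
    """Split an image (given as a dict with height/width) into grid blocks."""
    h, w = image["height"], image["width"]
    blocks = []
    for r in range(rows):
        for c in range(cols):
            blocks.append({"x": c * w // cols, "y": r * h // rows,
                           "w": w // cols, "h": h // rows})
    return blocks

def identify_in_block(image, block):
    """Placeholder recognizer; returns an object label or None."""
    return "clock" if block["x"] == 0 and block["y"] == 0 else None

def layout_models(image):
    placements = []
    for block in segment_image(image):
        label = identify_in_block(image, block)
        if label is not None:
            # Placing the model at the block's coordinates preserves the
            # geometrical relationship between the identified objects.
            placements.append({"label": label,
                               "position": (block["x"], block["y"])})
    return placements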

Claims
1. A method comprising:
identifying at least one object within an image;
associating the at least one object with a function;
selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
2. The method as claimed in claim 1, further comprising:
capturing the image using a camera.
3. The method as claimed in claims 1 and 2, wherein identifying at least one object within an image comprises:
segmenting the image into at least two image blocks;
identifying at least one object within at least one of the at least two image blocks.
4. The method as claimed in claim 3, further comprising:
identifying at Ieast one further object within at least one further image block;
selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
5. The method as claimed in claims 1 to 4, further comprising editing the model representation of the at least one object.
6. The method as claimed in claims 1 to 5, further comprising adding to the user interface image at least one further model representation, wherein a further function is associated with the further model representation.
7. The method as claimed in claims 1 to 6, further comprising deleting from the user interface image at least one model representation.
8. The method as claimed in claims 1 to 7, further comprising editing the function associated with the model representation displayed in the user interface image.
9. The method as claimed in claims 1 to 8, further comprising:
displaying a further user interface image comprising at least one other model representation, wherein at least one other function is associated with the other model representation.
10. The method as claimed in claim 9, further comprising:
identifying at least one other object within another image;
associating the at least one other object with the at least one other function;
selecting the other model representation of the at least one other object.
11. The method as claimed in claims 9 and 10, wherein the at least one function associated with the model representation comprises selecting at least one further user interface image for display.
12. The method as claimed in claims 1 to 11, wherein the model representation comprises at least two sub-model components, wherein each of the model components is associated with at least one function.
13. The method as claimed in claims 1 to 12, further comprising:
selecting the model representation within the user interface image; and performing the function associated with the model representation selected.
14. The method as claimed in claim 13, further comprising: changing the displayed model representation on selecting the model representation within the user interface image.
15. The method as claimed in claims 1 to 14, wherein the model representation comprises at least one of:
an animate object; and
an inanimate object.
16. The method as claimed in claims 1 to 15, wherein the function comprises at least one of:
a file access function;
a time function;
a calendar function; and
a status display function.
17. An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
identifying at least one object within an image;
associating the at least one object with a function;
selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
18. The apparatus as claimed in claim 17, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform capturing the image using a camera.
19. The apparatus as claimed in claims 17 and 18, wherein identifying at least one object within an image causes the apparatus at least to perform:
segmenting the image into at least two image blocks; identifying at least one object within at least one of the at least two image blocks.
20. The apparatus as claimed in claim 19, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform:
identifying at least one further object within at least one further image block; and
selecting a further model representation of the at least one further object, and wherein displaying a user interface image further comprises displaying the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
21. The apparatus as claimed in claims 17 to 20, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform editing the model representation of the at least one object.
22. The apparatus as claimed in claims 17 to 21, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform adding to the user interface image at least one further model representation, wherein a further function is associated with the further model representation.
23. The apparatus as claimed in claims 17 to 22, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform deleting from the user interface image at least one model representation.
24. The apparatus as claimed in claims 17 to 23, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform editing the function associated with the model representation displayed in the user interface image.
25. The apparatus as claimed in claims 17 to 24, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform displaying a further user interface image comprising at least one other model representation, wherein at least one other function is associated with the other model representation.
26. The apparatus as claimed in claim 25, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform:
identifying at least one other object within another image;
associating the at least one other object with the at least one other function;
selecting the other model representation of the at least one other object.
27. The apparatus as claimed in claims 25 and 26, wherein the at least one function associated with the model representation causes the apparatus to further perform selecting at least one further user interface image for display.
28. The apparatus as claimed in claims 17 to 27, wherein the model representation comprises at least two sub-model components, wherein each of the model components is associated with at least one function.
29. The apparatus as claimed in claims 17 to 28, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform:
selecting the model representation within the user interface image; and performing the function associated with the model representation selected.
30. The apparatus as claimed in claim 29, wherein the at least one memory and the computer program code is configured to, with the at least one processor, cause the apparatus to further perform:
changing the displayed model representation on selecting the model representation within the user interface image.
31. The apparatus as claimed in claims 17 to 30, wherein the model representation comprises at least one of:
an animate object; and
an inanimate object.
32. The apparatus as claimed in claims 17 to 31, wherein the function comprises at least one of:
a file access function;
a time function;
a calendar function; and
a status display function.
33. An apparatus comprising:
an identifier configured to identify at least one object within an image; a linker configured to associate the at least one object with a function; a selector configured to select a model representation of the at least one object; and
a display configured to display a user interface image comprising the model representation, wherein the function is associated with the model representation.
34. The apparatus as claimed in claim 33, further comprising:
a camera configured to capture the image.
35. The apparatus as claimed in claims 33 and 34, wherein the identifier comprises:
an image segmenter configured to segment the image into at least two image blocks; a segment identifier configured to identify at least one object within at least one of the at least two image blocks.
36. The apparatus as claimed in claim 35, wherein the segment identifier is further configured to identify at least one further object within at least one further image block;
the selector is further configured to select a further model representation of the at least one further object, and wherein
the display is further configured to display the further model representation with a geometrical relationship consistent with the geometrical relationship between the at least one object and the at least one further object within the image.
37. The apparatus as claimed in claims 33 to 36, further comprising a model editor configured to edit the model representation of the at least one object.
38. The apparatus as claimed in claims 33 to 37, further comprising a model inserter configured to add to the user interface image at least one further model representation.
39. The apparatus as claimed in claims 33 to 38, further comprising a model remover configured to delete from the user interface image at least one model representation.
40. The apparatus as claimed in claims 33 to 39, further comprising a function editor configured to edit the function associated with the model representation displayed in the user interface image.
41. The apparatus as claimed in claims 33 to 40, wherein the display is further configured to display a further user interface image comprising at least one other model representation, wherein at least one other function is associated with the other model representation.
42. The apparatus as claimed in claim 41, wherein
the identifier is further configured to identify at least one other object within another image;
the linker is further configured to associate the at least one other object with the at least one other function; and
the selector is further configured to select the other model representation of the at least one other object.
43. The apparatus as claimed in claims 41 and 42, wherein the at least one function associated with the model representation comprises selecting at least one further user interface image for display.
44. The apparatus as claimed in claims 33 to 43, wherein the model representation comprises at least two sub-model components, wherein the linker is further configured to associate each of the model components with at least one function.
45. The apparatus as claimed in claims 33 to 44, further comprising:
an input determiner configured to determine a selection of at least one model representation within the user interface image; and
a function processor configured to perform a function associated with the model representation selected.
46. The apparatus as claimed in claim 45, wherein the display is further configured to change the displayed model representation on selecting the model representation within the user interface image.
47. The apparatus as claimed in claims 33 to 46, wherein the model representation comprises at least one of:
an animate object; and
an inanimate object.
48. The apparatus as claimed in claims 33 to 47, wherein the function comprises at least one of: a file access function;
a time function;
a calendar function; and
a status display function.
49. An apparatus comprising:
identifying means for identifying at least one object within an image; linking means for associating the at least one object with a function; selection means for selecting a model representation of the at least one object; and
display means for displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
50. A computer-readable medium encoded with instructions that, when executed by a computer, perform:
identifying at least one object within an image;
associating the at least one object with a function;
selecting a model representation of the at least one object; and displaying a user interface image comprising the model representation, wherein the function is associated with the model representation.
51. An electronic device comprising apparatus as claimed in claims 17 to 48.
52. A chipset comprising apparatus as claimed in claims 17 to 48.
PCT/CN2009/075706 2009-12-18 2009-12-18 Desktop display apparatus WO2011072456A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2009/075706 WO2011072456A1 (en) 2009-12-18 2009-12-18 Desktop display apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2009/075706 WO2011072456A1 (en) 2009-12-18 2009-12-18 Desktop display apparatus

Publications (1)

Publication Number Publication Date
WO2011072456A1 (en)

Family

ID=44166738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/075706 WO2011072456A1 (en) 2009-12-18 2009-12-18 Desktop display apparatus

Country Status (1)

Country Link
WO (1) WO2011072456A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5894308A (en) * 1996-04-30 1999-04-13 Silicon Graphics, Inc. Interactively reducing polygon count in three-dimensional graphic objects
WO2000028478A1 (en) * 1998-11-05 2000-05-18 Computer Associates Think, Inc. Method and apparatus for interfacing with intelligent three-dimensional components
CN1409218A (en) * 2002-09-18 2003-04-09 北京航空航天大学 Virtual environment forming method
US20050166163A1 (en) * 2004-01-23 2005-07-28 Chang Nelson L.A. Systems and methods of interfacing with a machine
CN101300621A (en) * 2005-09-13 2008-11-05 时空3D公司 System and method for providing three-dimensional graphical user interface

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064617A (en) * 2012-12-18 2013-04-24 中兴通讯股份有限公司 Implementation method and system of three-dimensional scenarized desktop

Similar Documents

Publication Publication Date Title
US11663785B2 (en) Augmented and virtual reality
CN205788149U (en) Electronic equipment and for showing the device of image
US9880640B2 (en) Multi-dimensional interface
CN110865708B (en) Interaction method, medium, device and computing equipment of virtual content carrier
KR20170105444A (en) Configuration and operation of display devices including content curation
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN103631768A (en) Collaborative data editing and processing system
CN110636354A (en) Display device
CN102469242A (en) Imaging apparatus, imaging method, and program
KR20140122054A (en) converting device for converting 2-dimensional image to 3-dimensional image and method for controlling thereof
US11706485B2 (en) Display device and content recommendation method
US20170243611A1 (en) Method and system for video editing
CN102469261A (en) Imaging apparatus, imaging display control method and program
CN104268150A (en) Method and device for playing music based on image content
CN114095776B (en) Screen recording method and electronic equipment
CN112073798B (en) Data transmission method and equipment
CN114296949A (en) Virtual reality equipment and high-definition screen capturing method
EP3619641A1 (en) Real time object surface identification for augmented reality environments
CN113766296A (en) Live broadcast picture display method and device
CN115115740A (en) Thinking guide graph recognition method, device, equipment, medium and program product
CN110868632B (en) Video processing method and device, storage medium and electronic equipment
CN106716501A (en) Visual decoration design method, apparatus therefor, and robot
WO2011072456A1 (en) Desktop display apparatus
CN117692552A (en) Wallpaper display method, electronic equipment and storage medium
CN115499577A (en) Image processing method and terminal equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09852194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09852194

Country of ref document: EP

Kind code of ref document: A1