WO2012016379A1 - Apparatus and associated methods

Info

Publication number: WO2012016379A1
Authority: WO (WIPO (PCT))
Application number: PCT/CN2010/075699
Other languages: French (fr)
Inventors: Ying Zhou, Benny Iskov
Original assignee: Nokia Corporation
Application filed by Nokia Corporation
Priority to PCT/CN2010/075699
Publication of WO2012016379A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Definitions

  • the process is repeated and the user scribes and accepts the delineations 'luck'.
  • the user can indicate to the device that this is the final delineation by clicking the 'finish' button (522) at the bottom of the screen. This accepts the delineation and changes the mode of the device to an arranging mode.
  • the device is then configured to be in an arranging mode.
  • this embodiment enables the user to arrange the delineations.
  • a particular mentioned apparatus/device may be preprogrammed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality.
  • Advantages associated with such embodiments can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
  • any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and these functions may be performed by the same apparatus/circuitry/elements/processor.
  • One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
  • any "computer” described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some embodiments one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.

Abstract

An apparatus comprises: at least one processor; and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform at least the following: receiving user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device; enabling display of the plurality of delineations corresponding to the user scribed delineation data; enabling arrangement of the plurality of individual user scribed delineations to form a corresponding composite image; generating composite image data corresponding to the composite image; and transmitting/storing the composite image data.

Description

APPARATUS AND ASSOCIATED METHODS
Technical Field

The present disclosure relates to the field of user interfaces for scribed input/output, associated methods, computer programs and apparatus. Certain disclosed aspects/embodiments relate to portable electronic devices, in particular, so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs).
The portable electronic devices/apparatus according to one or more disclosed aspects/embodiments may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/ Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using a (e.g. in-built) digital camera), and gaming functions.
Background
Some electronic devices permit a user to input data by 'writing' (e.g. using a stylus and a touch sensitive pad). When users enter data in this manner, the data is often converted into standard font characters by a handwriting recognition algorithm that runs on the device. This conversion can be slow, can require significant processing, and may produce unintended output (e.g. if the device does not recognise the handwriting correctly). Handwriting recognition algorithms allow a user to input text using handwriting to generate output in a standard computer font (i.e. the output looks similar to that of a conventional electronic device).
Other devices may enable a standard font to be created using a user's own handwriting. However, this does not permit a person to change the output style to reflect, for example, the recipient, his mood or the purpose of his message.
The solution described herein may address one or more of these issues. The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more aspects/embodiments of the present disclosure may or may not address one or more of the background issues.
Summary

In a first aspect, there is provided an apparatus comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enable display of the plurality of delineations corresponding to the user scribed delineation data;
enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generate composite image data corresponding to the composite image; and
transmit/store the composite image data.

Many people like to be able to write, send and receive messages written and displayed in handwriting. Such messages may be personal in a way that messages rendered in standard computer fonts are not. There is therefore provided a way of emulating handwriting using an electronic device (either for input or output). A user scribed delineation may be a two-dimensional shape which has been scribed/drawn/handwritten by the user.
A user scribed delineation may comprise a combination of one or more of, for example, a word, a letter character (e.g. from the Roman, Greek, Arabic or Cyrillic alphabets), a graphic character (e.g. a sinograph, Japanese kana or Korean delineation), a drawing, a phrase, a syllable, a punctuation mark and a sentence. Each user scribed delineation in a composite image may be unique. For example, if a composite image comprised two delineations both representing the word 'house', these two delineations may be distinct in that they may have a different form or shape. This may allow a user to personalise the image by scribing individual delineations in a distinctive way. All individual user scribed delineations of a composite image may be scribed separately.
The composite image may represent a textual message (e.g. if some of the constituent user scribed delineations represented words which were arranged in a particular order to form a message). As the user scribed delineations may be used to create a textual message (e.g. by using user scribed delineations which represent words), the solution described herein may lower the processing/memory requirements of the electronic device by mitigating the need for fonts to be embedded and/or textual characters to be recognised.
As the user may use received user scribed delineations to create a composite image representing a textual message, the proposed solution may not require language-specific character support to compose textual messages. For example, delineations representing Chinese and/or English characters could be processed and rendered electronically without the need for a Chinese and/or Roman alphabet character set. The composite image may be considered to be a two dimensional arrangement of distinct elements/blocks/layers (i.e. wherein the elements/blocks/layers are the individual user scribed delineations). The shape/form of these individual elements/blocks/layers may be fixed but they may be moved with respect to each other or reordered to form different composite images.
A user scribed delineation may comprise a two-dimensional array of pixels. Different pixels may or may not have the same pixel-size. The user scribed delineation data may comprise a list of pixel positions, the corresponding sizes and/or corresponding colours. A user scribed delineation is a delineation which has been scribed/hand-written/hand-drawn by a user. A user scribed delineation may be a delineation which has a shape/form corresponding to the movement and/or position of a scriber under the control of a user when the user is scribing/writing/drawing the delineation. A scribed delineation may be created by a user moving/positioning a scriber, the motion and/or position of which is recognised by the user interface and wherein the resulting delineation corresponds to the motion/position of the scriber. The scriber may, for example, comprise a stylus, a wand, a finger, a hand or a mouse. As a composite image may be a representation of the writer's personal handwriting, it may also allow for robust 'electronic signature' authentication.
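By way of a non-limiting illustration only, user scribed delineation data of this kind could be held as a list of pixel records. The following minimal Python sketch assumes a simple RGB colour model; the names (Pixel, Delineation, touch) are hypothetical and are not part of the disclosure.

    from dataclasses import dataclass, field

    @dataclass
    class Pixel:
        x: int                      # column within the delineation's pixel array
        y: int                      # row within the delineation's pixel array
        size: int = 1               # pixels need not all share the same pixel-size
        colour: tuple = (0, 0, 0)   # corresponding colour (RGB)

    @dataclass
    class Delineation:
        # One user scribed delineation: a two-dimensional shape recorded
        # as a list of the pixels the scriber has passed over.
        pixels: list = field(default_factory=list)

        def touch(self, x, y, colour=(0, 0, 0)):
            # Record one section of the scribing region as touched.
            self.pixels.append(Pixel(x, y, colour=colour))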
An individual user scribed delineation/composite image may be coloured, either in a single colour or in a plurality of colours. For example, the composite image may comprise three individual user scribed delineations: one in red; one in blue; and one in green. Alternatively/additionally, a user scribed delineation may comprise different colours. For example, a user scribed delineation representing the word 'hat' may have each of the individual letters coloured differently.
The apparatus may comprise a user interface, wherein the user interface is configured to detect motion/position of the user to generate user scribed delineations. In other words, the user interface may be configured to detect the motion of the user whilst the user is interacting with the user interface. For example, the user interface may comprise a touchpad which is configured to detect the motion/position of the user's hand when the user is in contact with the touchpad. Alternatively/additionally, the user interface may comprise a mouse. The mouse may be configured to detect motion of itself with respect to the surface it is resting on (e.g. by a rollerball or LED and sensor). As the motion of the mouse corresponds to the motion of the user's hand when controlling the mouse, the mouse may be considered to be detecting motion of the user.
As the user interface may be configured to use the motion of the user to create an individual user scribed delineation the solution described herein may allow more intuitive entering of a message.
The user interface may comprise, for example, a wand (e.g. from the Nintendo Wii™), a touchpad, a touch-screen, a mouse, a motion detector, a position detector and/or an accelerometer.

Transmission of the composite image data may be via a network. The network may be, for example, the internet, a mobile phone network, a wireless network, LAN or Ethernet. The apparatus may comprise a transmitter and/or receiver to interact with a network. The transmitter/receiver may comprise, for example, an antenna, an Ethernet port, a LAN connection, a USB port, a radio antenna, a Bluetooth connector, an infrared port, or a fibre optic detector/transmitter. Transmission signalling (i.e. the signalling used to transmit the data of the composite image) may encompass any energy transmitting disturbance, including electromagnetic radiation (electromagnetic radiation encompasses ultraviolet light, visible light, infrared and radio waves), sound waves and ultrasound (with any frequency in the sound spectrum), and current or voltage signalling (i.e. along a wire). Transmission signalling may be broadcast or narrowcast. Transmission signalling may be wired (e.g. voltage in a metal or other conductor, fibre optic cable) or wireless (e.g. radio waves).
The receiving device (the device which receives a transmitted composite image) may comprise another user electronic device and/or a peripheral device. A peripheral device may comprise a combination of one or more of, for example, a printer, a fax machine, a disk drive, a computer and a projector.
Storage of the composite image data may comprise storing a file in a predetermined format in a memory. Memory may comprise, for example, a CD, a DVD, flash memory, a floppy disk, a hard disk, volatile memory, non-volatile memory or Random Access Memory.
The apparatus, processor and/or memory may be configured to change the resolution of an individual user scribed delineation and/or composite image. The resolution may be considered to be the number of distinct pixels in each dimension that make up the individual user scribed delineation or composite image. For example, the resolution may be reduced to reduce the amount of data to be transmitted. Reducing resolution may be performed, for example, by increasing the pixel size (whilst keeping the overall size of the delineation the same) and resampling or taking an average colour value of the original pixels which correspond to each new pixel. Increasing the resolution may be performed by reducing the pixel size (whilst keeping the overall size of the delineation the same) and interpolating the colours of the corresponding original pixels (e.g. by applying a Gaussian blur). This may reduce the effects of jaggy edges/aliasing when displayed on a larger/higher resolution screen.
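As a minimal sketch of the resolution reduction just described, assuming a delineation held as a two-dimensional grid of greyscale values (0 to 255) and an integer reduction factor (both assumptions, not requirements of the disclosure):

    def downsample(grid, factor):
        # Reduce resolution by enlarging the pixel size: each new pixel takes
        # the average value of the factor-by-factor block of original pixels
        # it replaces, while the overall size of the delineation is unchanged.
        h, w = len(grid), len(grid[0])
        out = []
        for y in range(0, h - h % factor, factor):
            row = []
            for x in range(0, w - w % factor, factor):
                block = [grid[y + dy][x + dx]
                         for dy in range(factor) for dx in range(factor)]
                row.append(sum(block) // len(block))  # average of covered pixels
            out.append(row)
        return out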
The apparatus, processor and/or memory may be configured to change the size of the scribed delineation and/or composite image by changing the pixel size of an individual user scribed delineation and/or composite image.
The apparatus, processor and/or memory may be configured to perform run-length encoding on the received user scribed delineation data corresponding to at least one user scribed delineation. Similarly, run-length encoding could also be performed on the resulting composite image data. Run-length encoding may allow the amount of data required to be stored/transmitted to be reduced. The composite image data may comprise information on the position and colour of each pixel, the position and/or order of each individual user scribed delineation and/or which pixels are associated with each individual user scribed delineation.
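Run-length encoding of this kind could, for instance, operate row by row on the pixel colours. The sketch below (Python, hypothetical function names) shows the idea; a mostly-blank scribing row compresses from 27 values to 3 runs.

    def rle_encode(row):
        # Encode one row of pixel colours as (colour, run_length) pairs.
        runs = []
        for colour in row:
            if runs and runs[-1][0] == colour:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([colour, 1])  # start a new run
        return [tuple(r) for r in runs]

    def rle_decode(runs):
        return [colour for colour, n in runs for _ in range(n)]

    row = [0] * 12 + [1] * 3 + [0] * 12
    assert rle_encode(row) == [(0, 12), (1, 3), (0, 12)]
    assert rle_decode(rle_encode(row)) == row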
The composite image data stored/transmitted may not completely describe the two-dimensional position of an individual user scribed character within the original composite image. For example, if a composite image comprises a hand written textual message wherein a constituent user scribed delineation is used to represent each word, a two-dimensional arrangement of the words may not be required. In this case the composite image data may include arrangement data comprising information on the order of the words rather than on their two-dimensional position.
It will be appreciated that if composite image data comprising the order of the constituent delineations were to be displayed on a device, the displaying device may provide additional arrangement information to render the composite image. This additional information may comprise a predetermined standard. For example, the device may be configured to place the individual user scribed delineations (i.e. those making up the composite image) in their order from left to right in a single row. Alternatively, the device may be configured to place the received individual user scribed delineations in a number of rows such that the order (as dictated by the composite image data) runs left to right along each row, and the rows are ordered from top to bottom.
Composite image data may include arrangement data comprising information on the position of the individual user scribed characters within the composite image along two dimensions (e.g. each delineation positioned by its neighbour on the left and its neighbour above).
Each individual user scribed delineation may be a distinct layer/element within the composite image. The composite image data stored/transmitted corresponding to the composite image may contain information relating to the different layers/elements making up the composite image. Alternatively, the composite image may be a merged composite image comprising only a single layer. In other words, the data stored/transmitted may or may not comprise separate identifiable data for each element/layer/user scribed delineation of the composite image and distinct data relating to how the individual elements/layers/user scribed delineations are arranged to form the composite image.
Having access to information which identifies each individual user scribed delineation may allow the individual user scribed delineations to be identified, edited, re-arranged or selectively deleted after storage/transmission. It will be appreciated that the composite image data could be stored in a single file or a plurality of files (e.g. a delineation file for each individual user scribed delineation and an arrangement data file giving the arrangement of the plurality of individual user scribed delineations (and where the delineation files are stored)).
Not having access to information which identifies each individual user scribed delineation (e.g. for a merged composite image) may prevent the individual user scribed delineations from being identified, edited, re-arranged or selectively deleted after storage/transmission. The data format corresponding to a merged composite image may allow the composite image to be rendered on a device/apparatus not configured to process data comprising information on a plurality of layers/elements. A merged composite image data file may comprise, for example, a PNG, a JPEG, an EPS, a PDF, a TIFF, a BMP or an SVG file. It will be appreciated that these file formats may be similar to existing file formats and therefore the solution may also allow composite images to be created which are compatible with current messaging systems (e.g. conventional MMS).
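As an illustration of merging, an ordered set of delineation images could be flattened into a single-layer image as sketched below. The sketch assumes the Pillow imaging library and a simple left-to-right arrangement; neither is prescribed by the disclosure. Once saved (e.g. as a PNG), the per-delineation identity is no longer recoverable from the file.

    from PIL import Image

    def merge_composite(delineations, gap=8, background=(255, 255, 255)):
        # Flatten an ordered list of delineation images into one single-layer
        # composite; information about the individual layers is discarded.
        height = max(im.height for im in delineations)
        width = sum(im.width for im in delineations) + gap * (len(delineations) - 1)
        merged = Image.new("RGB", (width, height), background)
        x = 0
        for im in delineations:
            merged.paste(im, (x, 0))  # place side by side, left to right
            x += im.width + gap
        return merged

    # merge_composite(images).save("message.png")  # e.g. for a conventional MMS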
Arrangement of the individual user scribed delineations may be performed automatically by the apparatus, processor and/or memory. For example, the apparatus, processor and/or memory may be configured to arrange the individual user scribed delineations according to a preset standard, such as positioning each newly received scribed character to the right of the most recently received character (i.e. arranged chronologically). Alternatively/additionally, the user may be enabled to arrange (i.e. change the position/order of) the individual user scribed delineations within the composite image.
The apparatus may comprise a user interface configured to enable scribing of user scribed delineations. The user interface may comprise, for example, a mouse, a touchpad, a stylus and pad, a touch-screen or a keyboard.
The apparatus may comprise a display configured to enable at least one of the plurality of individual user scribed delineations to be displayed and/or enable the composite image to be displayed. The apparatus may comprise a display configured to enable at least one of the plurality of individual user scribed delineations and the composite image to be displayed simultaneously. One or more parts of the display may allow for user scribe input. One or more parts of the display could be dedicated for user scribe input, and others for output.
In a second aspect, there is provided an apparatus comprising:
a receiver configured to receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
a display configured to display the plurality of delineations corresponding to the user scribed delineation data;
an arranger configured to enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
a generator configured to generate composite image data corresponding to the composite image; and
an output configured to transmit/store the composite image data.
In a third aspect, there is provided a processor, the processor configured to:
receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enable display of the plurality of delineations corresponding to the user scribed delineation data;
enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generate composite image data corresponding to the composite image; and
transmit/store the composite image data.

The apparatus or processor may be incorporated into an electronic device. The apparatus may be the (portable) electronic device. The portable electronic device may provide one or more of audio/text/video communication functions (e.g. telecommunication, video-communication, and/or text transmission (Short Message Service (SMS)/Multimedia Message Service (MMS)/emailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture function (e.g. using an in-built digital camera), and gaming functions. The electronic device, which may or may not be portable, may comprise, for example, a computer (including a laptop), a phone, a monitor, a mobile phone, and/or a personal digital assistant.
In a fourth aspect, there is provided a method comprising:
receiving user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enabling display of the plurality of delineations corresponding to the user scribed delineation data;
enabling arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generating composite image data corresponding to the composite image; and transmitting/storing the composite image data.
In a fifth aspect, there is provided a computer program comprising code configured to receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enable display of the plurality of delineations corresponding to the user scribed delineation data;
enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generate composite image data corresponding to the composite image; and transmit/store the composite image data.
The computer program may be stored on a storage medium (e.g. on a CD, a DVD, a memory stick). The computer program may be configured to run on the device as an application. An application may be run by the device via an operating system.
The present disclosure includes one or more corresponding aspects, embodiments or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means for performing one or more of the discussed functions are also within the present disclosure. Corresponding computer programs for implementing one or more of the methods/apparatus disclosed are also within the present disclosure and encompassed by one or more of the described embodiments. As processing such a message as described herein may preclude the need to render/store a message in standard font, it may be more difficult for a third party to electronically query the message (e.g. to search for key words or for private details such as addresses, phone numbers, account details). This may make data transfer/storage more secure.
The apparatus, processor and/or memory may be configured to receive the user scribed delineation data, enable display and arrangement of the delineations, and allow for generation and transmission/storage of the composite image data within a messaging application (e.g. an MMS, email messaging application), notes or contacts application.
The apparatus, processor and/or memory may be configured to initiate a messaging, notes or contacts application upon receiving/detecting user scribe input.
The above summary is intended to be merely exemplary and non-limiting.
Brief Description of the Figures
A description is now given, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 depicts an embodiment comprising a number of electronic components, including memory, a processor and a communication unit.
Figure 2 illustrates an embodiment comprising a touch-screen.
Figures 3a-i illustrate the views of the touch-screen as a user inputs a message.
Figure 4a illustrates a number of individual user scribed delineations.
Figures 4b and 4c illustrate a composite image comprising the user scribed delineations of figure 4a as displayed on two embodiments wherein the composite image data comprises the order of the user scribed delineations.
Figures 4d and 4e illustrate a composite image comprising the user scribed delineations of figure 4a as displayed on two embodiments wherein the composite image data comprises the two-dimensional position of each user scribed delineation.
Figures 4f and 4g illustrate a composite image comprising the user scribed delineations of figure 4a as displayed on two embodiments wherein the composite image is a merged image.
Figure 5 illustrates a further embodiment.
Figures 6a-i depict a series of screen-shots of the embodiment of figure 5 as a user creates and saves a composite image.
Figure 7 depicts a flow diagram describing the method used to store/transmit a composite image.
Figure 8 illustrates schematically a computer readable medium providing a program according to an embodiment of the present invention.
Description of Example Aspects/Embodiments
For the sake of convenience, different embodiments depicted in the figures have been provided with reference numerals that correspond to similar features of earlier described embodiments. For example, feature number 1 can also correspond to numbers 101, 201, 301 etc. These numbered features may not have been directly referred to within the description of these particular embodiments. These have still been provided in the figures to aid understanding of the further embodiments, particularly in relation to the features of similar described embodiments.
It is common for a user to use an electronic device to transmit textual messages to other people (e.g. email, SMS, text messages) using a messaging application. In addition, it is common for a user to use an electronic device to store textual information as a memory aid (e.g. a stored document on a computer). Traditionally, the generation of textual documents (e.g. messages or memory aids) on electronic devices has utilised standard fonts, wherein a user interface comprising keys is used to generate each character or letter. However, outside the field of electronic devices, many people use conventional writing implements to perform corresponding functions (e.g. written letter instead of email, or scribbled notes instead of stored documents).
Various solutions have been proposed to allow a user to input a textual message using techniques which simulate conventional writing implements. For example, some portable digital assistants enable the user to write on the screen using a stylus, and the PDA processes the resulting message to convert it to a standard font for storage. However, letter/word recognition of handwriting using electronic devices has proved difficult in the past because it is hard to provide a robust method for an electronic device to decipher human handwriting. In addition, the screen size may limit the size of message which can be written.
Another problem with hand-writing recognition is that it relies on having a character, within the standard fonts stored on the device, which corresponds to each hand written input character. If, for example, a person were to input Chinese into a portable electronic device configured to recognise only Roman characters (or only had a Roman character set available), the device could not 'translate' the hand-writing into standard font text. Even for devices with the required standard fonts, input is not always trivial. For example, in order to input Chinese characters using a keyboard, the user must also know the key strokes required to generate the desired character (using, for example, a standard system such as Pinyin). This is not intuitive for users and may require the keyboard 'mode' to be switched to facilitate the writing of a message using multiple scripts (e.g. Chinese and English), if such a facility is available in the device.
Figure 1 depicts an embodiment (101) of an apparatus, such as a mobile phone, comprising a display device (104) such as, for example, a Liquid Crystal Display (LCD). In other embodiments, the apparatus (101) may comprise a module for a mobile phone (or PDA or audio/video player), and may just comprise a suitably configured memory (107) and processor (108) (see below).
The apparatus (101) of Figure 1 is configured such that it may receive, include, and/or otherwise access data. For example, this embodiment (101) comprises a communications unit (103), such as a receiver, transmitter, and/or transceiver, in communication with an antenna (102) for connecting to a wireless network and/or a port (not shown) for accepting a physical connection to a network, such that data may be received via one or more types of networks. This embodiment comprises a memory (107) that stores data, possibly after being received via the antenna (102) or port or after being generated at the user interface (105). The user interface may allow a user to scribe a plurality of individual user scribed delineations. The processor (108) may receive individual user scribed delineations from the user interface (105), from the memory (107), or from the communication unit (103). Regardless of the origin of the data, these data may be outputted to a user of the apparatus (101) via the display device (104), and/or any other output devices provided with the apparatus. The processor (108) may also store the data for later use in the memory (107). The memory (107) may store computer program code and/or applications which may be used to instruct/enable the processor (108) to perform functions (e.g. generate/delete or process data).

Figure 2 depicts an embodiment of the apparatus comprising a portable electronic device (201), e.g. such as a mobile phone, with a user interface comprising a touch-screen (205), a memory (not shown), a processor (not shown) and an antenna (202) for transmitting data (e.g. a composite image). The portable electronic device is configured to allow the user to scribe a delineation by tracing the desired shape with his/her finger on the screen (when the device is configured to be in a scribing mode). It will be appreciated that in other suitably adapted embodiments the delineations may be scribed using a mouse, a stylus, a touch pad or a keyboard.
Figures 3a-i illustrate a series of display screens of the device when the device is in use. In this example, the user wants to write a message comprising the sentence 'This is Nokia' and send it, via a network (e.g. mobile phone network, internet, LAN or Ethernet), to a friend. To facilitate scribing such a message, this embodiment has a scribing mode wherein the touch-screen (205) is divided into three regions. The middle region (212) of the screen is where the user can scribe a delineation by moving his finger (i.e. the user's finger is the scriber in this example) across the screen. The top region of the screen (211) is configured to display an arrangement of the individual user scribed delineations (265) already entered into the device (i.e. at least part of a composite image (291)). The bottom of the screen (213) displays a number of touch buttons which the user can actuate in order to control the device. In the scribing mode the buttons comprise an accept button (223) for accepting a user scribed delineation, a reject button (221) for rejecting a user scribed delineation and a message button (222) for accepting the composite image (291) (e.g. the scribed message comprising the accepted individual user scribed delineations).

The first screen in the series illustrates the device when the user has inputted the first user scribed delineation of the message (in the scribing mode), where in this case the first delineation is the word 'This'. The embodiment enables the user to scribe a delineation by recognising if and where the user's finger is touching the screen (e.g. by using stress, strain, conductivity, surface acoustic waves, capacitance and/or heat). The user scribes a delineation by touching the screen within the middle (scribing) region (212). This is recognised by the device and the sections of the middle (scribing) region which have been touched by the user are coloured to distinguish them from regions which have not been touched. The user can scribe a user scribed delineation by dragging his finger around the middle region of the screen, the sections (e.g. the pixels) being coloured as they are touched. It will be appreciated that the scribed delineation will have a form corresponding to the position of the user's finger when the user's finger was touching the device.
The user scribed delineation data corresponding to the user scribed delineation can be stored by recording which areas have been touched and/or which regions have not been touched. In this embodiment, the middle region is divided into a two-dimensional array of pixels. The stored user scribed delineation data corresponding to an individual user scribed delineation will comprise a list of pixels in the two-dimensional array and their corresponding colour. It will be appreciated that this information could also be represented by listing groups of consecutive/neighbouring pixels with the same colour (e.g. by performing run-length encoding).
In other embodiments, there may be a colour palette provided on the display such that the user can choose which colour to scribe a delineation in, or to enable a user to generate multicoloured user scribed delineations.
When the user has completed the delineation, which in this case is a complete word, the user can press a touch button located in the bottom region (213) of the screen (i.e. the accept button (223)) to indicate that he has completed entry of the first delineation. In this embodiment, the processor then assigns the user scribed delineation data (e.g. the pixels within the middle (scribing) region (212) which have been coloured) to memory (e.g. Random Access Memory) and prompts the user scribed delineation, corresponding to the user scribed delineation data, to be displayed in miniature at the top of the screen (211). The top of the screen (211) displays the completed user scribed delineations. The middle (scribing) region (212) of the screen, where the user had scribed the first delineation, is then wiped, whereupon the user can scribe another delineation. This is depicted in figure 3b.
Figure 3c illustrates the device when the user has scribed the second delineation of the message which in this case is the word 'is'. Again, the user can accept this delineation as the next delineation in the message using the accept touch button (223) shown at the bottom of the screen (213). This prompts the processor to remove the accepted delineation from the middle (scribing) region (212) of the screen (205) and transfer it to the top of the screen (figure 3d).
If, for example, the user had made a mistake whilst scribing a delineation, the user could reject the delineation by pressing the reject button (221) at the bottom of the screen. This would wipe the middle region of the screen and the user could start scribing the delineation again (i.e. without saving/storing the rejected delineation data or displaying it in the top region of the screen).

Figure 3e illustrates the last user scribed delineation of the message. This is accepted and the complete composite image (291) (message) is displayed in the top of the screen region in figure 3f. The entire message reads 'This is Nokia'. If in some embodiments the complete composite image does not fit in the top of the screen, then at least part of it will be displayed, the rest available for viewing through scrolling (e.g. by placing the finger in the top of the screen, or by user actuation of a button/dial (not shown)). If the user is finished scribing new individual user scribed delineations for the composite image displayed at the top of the screen, the user can accept the composite image by pressing the message button (222) at the bottom of the screen.
In this embodiment the device is configured to automatically arrange the individual user scribed delineations to form the composite image (291) according to a preset standard. The preset standard, in this case, dictates that each newly accepted user scribed character will be placed to the right of the previously accepted user scribed characters (i.e. they are ordered chronologically left to right). In other embodiments, they may be ordered top to bottom, or right to left, or any required combination thereof according to the linguistic characteristics of the user input. It will be appreciated that other embodiments may order the user scribed characters using a different standard (e.g. by size). It will be appreciated that in other embodiments the user may be able to arrange (i.e. change the position of) the individual user scribed delineations. It will be appreciated that the same message could have been created by scribing delineations representing individual letters. For example, the word 'Nokia' could have been formed using individual user scribed delineations representing the letters 'N', 'o', 'k', 'i' and 'a'.

When the message has been accepted (i.e. by pressing the message button (222)), the mode of the device is changed (figure 3g). The user is prompted to select, from a list, how to process the composite image (291) (i.e. the message in this case). In this embodiment, the user can send the composite image message (291) by pressing the send button (224), save the composite image message (291) by pressing the save button (225), or return to the previous screen (i.e. to the composite image message in editing mode) by pressing the cancel button (226). In this example, the user wants to transmit the message (291), so he presses the send button (224). This changes the touch-screen display such that it shows a list (227) of possible recipients. The user can scroll through the list using navigation buttons (228) until the desired recipient is displayed. The user can then select the desired recipient. When a recipient has been selected the apparatus proceeds to generate the composite image data corresponding to the composite image, and to transmit the composite image data to the selected recipient (figure 3i). Whilst the device is processing the composite image (291) (e.g. preparing it for transmission) and transmitting the composite image data, the screen (205) displays an egg timer icon to indicate that the device is busy (261) and a sending icon (262) to indicate that the device is sending the composite image. Whilst this display is shown, the user may not enter information into the device. In other embodiments, the apparatus may be configured to enable processing/transmitting of the data simultaneously with user input. In this embodiment, when the device has completed sending the composite image, the display returns to a mode which allows user input.
For this embodiment, if the user pressed the save button (225) when presented with the display as shown in figure 3g after accepting the composite image, the device would have provided the user with access to the memory (e.g. hard drive, memory stick) such that the user could have decided where to store the composite image data (e.g. within the file system of the apparatus). For this embodiment the composite image data contains information on each of the individual user scribed delineations and their order (but not their two-dimensional positions). In this case the user scribed delineation "This" is first in the order, the user scribed delineation "is" is second in the order, and the user scribed delineation "Nokia" is third in the order.
In this embodiment, the order of the individual user scribed delineations is dictated by the chronological order in which they are scribed. A subsequently scribed delineation is placed to the right of previously scribed delineations. It will be appreciated that in other embodiments, there may be provision for re-arranging the delineations after they have been scribed.
It will be appreciated that other embodiments may comprise physical buttons in place of or in addition to touch buttons provided on a touch-screen.

In this embodiment, the area of the screen devoted to allowing the user to scribe a delineation (the middle of the screen (212)) is larger than the area of the screen which displays the accepted message (the top of the screen (211)). This is because the motor control of the user and the responsiveness of the touch-screen (205) may be such that it is easier to scribe a large delineation and then reduce its size (e.g. by electronically processing the image data) than to scribe the delineation directly at the desired size. Likewise a delineation may be enlarged, relative to the size at which it was scribed, before output. In other words, the output size (which may be dictated by the amount of data required to store/transmit a delineation and/or the size which is comfortable to read) may be different from the scribing/input size. Enabling a plurality of individual user delineations to be arranged to form a composite image allows a large composite image to be composed using a small display.
In this embodiment the device is configured to determine a rectangular boundary around the delineation (where the boundary is dictated by the region touched by the user whilst scribing the delineation) and to scale the created delineation such that the height of each user scribed delineation is the same whilst keeping the same aspect ratio as when it was scribed. In other embodiments, each individual user scribed delineation may or may not be scaled to a pre-determined height or length and the aspect ratio may or may not be kept constant. In other embodiments data from the entire input region may form the individual user scribed delineation (the entire input region in this embodiment being the middle (scribing) region). In other embodiments, only those scribed regions (e.g. coloured regions in this case) may be saved. Likewise, other embodiments may only save data corresponding to unscribed regions.
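A sketch of this boundary-and-scale step, in Python with hypothetical names; pixel positions are taken as (x, y) pairs and, as in this embodiment, the aspect ratio is preserved.

    def bounding_box(pixels):
        # Rectangular boundary around the scribed (touched) pixel positions.
        xs = [x for x, y in pixels]
        ys = [y for x, y in pixels]
        return min(xs), min(ys), max(xs), max(ys)

    def scale_to_height(width, height, target_height):
        # New size for a delineation scaled to a common height,
        # keeping the aspect ratio it had when scribed.
        scale = target_height / height
        return round(width * scale), target_height

    # A delineation scribed 300 px wide and 120 px tall, normalised to a
    # (hypothetical) 40 px output height: scale_to_height(300, 120, 40) -> (100, 40)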
In this embodiment the device stores the accepted composite image data at the same resolution as it was scribed. Other embodiments may be configured to reduce or increase the resolution of the accepted delineation.
Although this message is in English, the device requires no English language/Roman script support to render the message (e.g. the same embodiment could be used to write/render Greek (or even Chinese) messages). It will therefore be appreciated that a user may input characters of any language or arbitrary user defined/created delineations without the need for specific additional support. It will be appreciated that other embodiments may allow standard font characters, images or drawings (i.e. not necessarily user scribed) to be used in conjunction with individual user scribed delineations.
The processor of this embodiment enables the storage/transmission of the entire accepted message, comprising the (absolute/relative) size of each individual user scribed delineation, the colour of each pixel in each individual user scribed delineation and the order of the individual user scribed delineations in the message. This information makes up the composite image data. That is, in this case, the composite image data includes information for each pixel identifying to which individual user scribed delineation it belongs. This may allow the arrangement of the individual user scribed delineations within the composite image (or message) to be changed at a later time and/or after being received.
In the embodiment of figure 3, data is stored for each pixel of each individual user scribed delineation at the input resolution. In other embodiments, in order to save storage space, run-length encoding may be used on at least one individual user scribed delineation and/or on the composite image data to reduce the overall size of the data stored/transmitted. Alternatively/in addition, the resolution of the stored/transmitted composite image (and/or at least one individual user scribed delineation) may be changed. For example, in order to reduce the resolution of an individual user scribed delineation, the pixel size may be increased and the colour of each new larger pixel can be taken to be the weighted average of the original pixels which occupy the corresponding area of the user scribed delineation.

In the example given in figure 3, the composite image data included arrangement data comprising information on the order of the individual user scribed delineations (as well as user scribed delineation data corresponding to each of the user scribed delineations). It will be appreciated that other embodiments may be configured to render the same composite image data according to different predetermined standards. In addition, saving the composite image data in a different format (e.g. as a merged composite image) may affect how the image is displayed by different embodiments. Using different formats and/or predetermined standards will be discussed below (with reference to figures 4a-4g).

Figure 4a depicts a plurality of individual user scribed delineations which could have been scribed using the embodiment discussed with reference to figure 2. Some comprise handwritten words (e.g. 450), some comprise a punctuation mark (456) and some comprise both a punctuation mark and a word (452).
Figure 4b depicts a variation of the embodiment discussed with reference to figure 2 wherein composite image data, which includes information on the order but not on the two-dimensional position of the individual user scribed delineations, has been rendered according to a predetermined standard. That is, the arrangement data of the composite image data sets the order of the individual user scribed delineations (i.e. 'This' (450) is before 'is' (451), which is before 'Joe.' (452), which is before 'Where' (453), which is before 'are' (454), which is before 'you' (455), which is before '?' (456)).
The predetermined standard used by the embodiment discussed with reference to figure 2 dictates that the device renders the corresponding composite image by placing the individual user scribed delineations side by side according to the order (as set by the composite image data) until the next user scribed delineation would not fit on the screen (205), with (pre-) defined spaces (dependent upon the justification and/or the display size) between the user scribed delineations. This next user scribed delineation is then placed on a new row below the top row such that the message can be read in the standard manner (for English) which is left to right, top to bottom. It will be appreciated that other embodiments may support alternative standards (e.g. delineations arranged top to bottom and right to left for Chinese, or arranged right to left and top to bottom for Arabic).
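The row-wrapping standard described here amounts to the following placement rule (a Python sketch with assumed names; widths are in pixels, and the space between delineations is taken as fixed rather than justification-dependent for simplicity):

    def layout_rows(widths, display_width, space=10):
        # Assign ordered delineations to rows: each is placed beside the
        # previous one until the next would not fit on the display, then a
        # new row begins (read left to right, top to bottom, as for English).
        rows, current, x = [], [], 0
        for i, w in enumerate(widths):
            needed = w if not current else space + w
            if current and x + needed > display_width:
                rows.append(current)      # next delineation starts a new row
                current, x = [], 0
                needed = w
            current.append(i)
            x += needed
        if current:
            rows.append(current)
        return rows                       # each row: a list of delineation indices

    # layout_rows([90, 40, 80, 120, 60, 70, 20], display_width=240)
    # -> [[0, 1, 2], [3, 4], [5, 6]]  (cf. 'This is Joe. / Where are / you ?')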
Figure 4c shows the same composite image (491) (i.e. having the same source composite image data) when rendered on a second embodiment (the display (405) of which is depicted in figure 4c). The display (405) of the embodiment of figure 4c has a different aspect ratio to that of the display (205) of the apparatus discussed with reference to figure 2, and has a different predetermined standard for arranging an order of individual user scribed characters (i.e. composite image data). Unlike the embodiment of figure 2, the embodiment of figure 4c is configured to display the same composite image data (491) comprising an order of user scribed delineations (as shown in figure 4b) on a single row wherein only a portion of the composite image is displayed at a time. One or more buttons/dials (not shown), e.g. on a keyboard, are provided to enable a user to navigate/scroll through the message. Arrows (430) are displayed on the screen to indicate in which directions the remainder of the message can be scrolled into view.
Figure 4d illustrates a composite image comprising the individual user scribed delineations of figure 4a. In this case the composite image data includes arrangement data comprising the two-dimensional position of each individual user scribed delineation. For example, the two-dimensional position of the user scribed delineation 'This' (450) is described in the composite image data as the first delineation on the first row, and the user scribed delineation 'you' (455) is described in the composite image data as the third delineation on the second row (i.e. in this case the rows are numbered consecutively from the top and the delineations on each row are numbered consecutively from the left).
In this case, the composite image data may comprise an image file for each individual user scribed delineation and an arrangement file comprising arrangement data. The arrangement data may give the location (i.e. within the file system of the apparatus) of each of the individual user scribed delineation data files, the size (i.e. height and breadth in number of pixels) of each individual user scribed delineation, and the position of each individual user scribed delineation within the composite image (i.e. row number and delineation number within each row). The device/apparatus, in this case, could read the arrangement data file and, for each individual user scribed delineation, assign an appropriately sized and positioned associated area within the display (for example, if the first character in a row was 100 pixels wide, the device could calculate that the second character in the row would start on the 101st pixel from the left (i.e. in this case the characters abut each other)). The device could then, for each assigned area, read the associated user scribed delineation data (comprising pixel position and colour) and colour the pixels, thereby rendering the entire composite image.
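The two-pass rendering just described (assign abutting areas from the arrangement data, then colour each area from the per-delineation data) can be sketched as follows. The dictionary-based data structures here are assumptions for illustration, not the specification's actual file formats:

```python
def render(arrangement, delineations, display):
    """arrangement: list of dicts with 'row', 'order', 'width', 'height', 'file';
    delineations: maps file name -> list of ((x, y), colour) pixels;
    display: dict mapping (x, y) -> colour, standing in for a frame buffer."""
    rows = {}
    for entry in arrangement:
        rows.setdefault(entry['row'], []).append(entry)
    y_offset = 0
    for row_number in sorted(rows):
        row = sorted(rows[row_number], key=lambda e: e['order'])
        x_offset = 0
        for entry in row:
            # e.g. a first character 100 pixels wide means the second starts
            # on the 101st pixel from the left (x_offset = 100, zero-indexed)
            for (x, y), colour in delineations[entry['file']]:
                display[(x_offset + x, y_offset + y)] = colour
            x_offset += entry['width']
        y_offset += max(e['height'] for e in row)
    return display
```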
When the composite image data rendered in figure 4d is transmitted to the embodiment of figure 4c, a similar process occurs to render the composite image (figure 4e). The device reads the arrangement data file of the composite image data and, after determining the size (number of pixels) and position (row number and row position) of each user scribed delineation, assigns an associated area within the device display for each user scribed delineation. The device then colours each of the associated areas using the data from the associated user scribed delineation data files. As the arrangement data dictates, for each individual user scribed delineation, both the row number and the delineation order within the row, the composite image (figure 4e) as displayed on the embodiment of figure 4c looks similar to the composite image (figure 4d) as displayed on the embodiment of figure 4b.

Figure 4f depicts the screen of the apparatus of figure 2 when rendering composite image data comprising a single merged layer, wherein the single merged layer comprises the individual user scribed delineations of figure 4a arranged and stitched together to form a single merged composite image. In this case the composite image data comprises the two-dimensional position of each pixel within the composite image and its corresponding colour. That is, each pixel is not identified with its corresponding user scribed delineation, nor is the absolute/relative position of each user scribed delineation stored (i.e. the image is merged and stored as a single layer). When the merged composite image data is transmitted to the second device, the composite image is rendered as a merged composite image (493), as shown in figure 4g. This may allow the amount of data required to transmit/store a composite image to be reduced. However, it may not allow the arrangement of the individual user scribed delineations to be changed after the merged composite image (493) has been stored/transmitted.
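Producing such a merged layer amounts to flattening the positioned delineations into one set of absolute pixel positions and colours, discarding the per-delineation identities. A hypothetical sketch (assumed data shapes, not the specification's format):

```python
def merge(placed_delineations):
    """placed_delineations: list of (x_offset, y_offset, pixels), where pixels
    is a list of ((x, y), colour); later entries overlay earlier ones where
    they overlap. Returns a single flat layer of absolute positions."""
    merged = {}
    for x_off, y_off, pixels in placed_delineations:
        for (x, y), colour in pixels:
            merged[(x_off + x, y_off + y)] = colour  # single merged layer
    return sorted(merged.items())  # [((x, y), colour), ...]
```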
It will be appreciated that a composite image (e.g. a merged composite image saved as a pdf) may be readable by devices which are not configured to arrange a plurality of individual user scribed delineations. That is, a device configured to receive a composite image may not have the same processing/memory requirements as those of the generating/arranging device.
Figure 5 illustrates a further embodiment of an apparatus (501) such as a personal digital assistant device comprising a screen (505), a keyboard (506b), a touchpad (506a), a processor and memory (not shown). Unlike the apparatus of figure 2, in which the user interface and the display functions were both provided by a touch screen (205), in this embodiment these functions are provided by a screen (505), a keyboard (506b) and a touchpad (506a).

Figures 6a-6i show a series of screens displayed by the apparatus/device of figure 5 during a process wherein the user creates and saves a composite image comprising a plurality of individual user scribed delineations. In the first screen (figure 6a), the user has already configured the device to enable a delineation to be scribed (by, for example, running a user scribing computer program or computer application). The device may be considered to be in a scribing mode. In this case, the user has scribed a star shape. In this embodiment the device is configured to enable the user to scribe a delineation using the touchpad. When the user's finger is in contact with the touchpad (506a), the touchpad is configured to detect the position/motion of the user's finger. This information is used to control a cursor (540) displayed on the screen. By controlling the cursor (540) the user can scribe a delineation on the screen (i.e. when in the scribing mode). That is, the cursor can be configured to colour a region on the screen (e.g. within an active area (512)) according to its position.

When the delineation has been completed the user can indicate to the device that the user scribed delineation is completed by selecting the 'accept' option (523) at the bottom of the screen (e.g. by clicking). This stores the corresponding accepted delineation data and wipes the screen. The accepted delineation, unlike with the embodiment described with reference to figure 2, is not displayed on the screen, but the corresponding data is stored for later use (for example, in Random Access Memory). The user can then scribe another delineation, which in this case is the word 'Good'. The process is repeated and the user scribes and accepts the delineation 'luck'. When the user has completed the final delineation, which in this case is the punctuation mark '!', the user can indicate to the device that this is the final delineation by clicking the 'finish' button (522) at the bottom of the screen. This accepts the delineation and places the device in an arranging mode.

Unlike the embodiment discussed with reference to figure 2, in which the device automatically fixed the arrangement of the individual user scribed delineations according to a predetermined standard, this embodiment enables the user to arrange the delineations. Initially, as a default, all of the individual user scribed delineations (i.e. those scribed during the session) are displayed side by side in the order in which they were scribed. The user can select an individual user scribed delineation (e.g. by clicking on it using the cursor). For this embodiment, the selected delineation is visually distinguished from the non-selected delineations by having a border with a double line. When the delineation is selected the user can move and resize the selected delineation. The delineation can be deselected by clicking the deselect button (528) at the bottom of the screen or by selecting a different delineation. In this embodiment the background (i.e. the regions not coloured when the delineation was scribed) is configured to be transparent. The device/user may select which delineation is overlaid on which in the event that two (or more) delineations overlap.
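The arranging-mode operations above (move, resize, and choosing which delineation is overlaid on which) could be modelled roughly as follows; the class, its fields and the nearest-neighbour resize are assumptions for illustration only:

```python
class Delineation:
    def __init__(self, pixels, x=0, y=0, z=0):
        self.pixels = pixels       # list of ((x, y), colour), local coordinates
        self.x, self.y, self.z = x, y, z

    def move(self, dx, dy):
        """Shift the delineation's position within the composite image."""
        self.x += dx
        self.y += dy

    def resize(self, scale):
        """Rough nearest-neighbour scaling of the local pixel positions."""
        self.pixels = [((round(px * scale), round(py * scale)), colour)
                       for (px, py), colour in self.pixels]

def render_order(delineations):
    """Lower z is drawn first, so a higher-z delineation is overlaid on top
    wherever two (or more) delineations overlap."""
    return sorted(delineations, key=lambda d: d.z)
```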
When the user has finished arranging and sizing the delineations, and ordering the image layers, the user can select 'finish' (529) to indicate that he is finished. In this embodiment, this removes the guidelines from around each individual user scribed delineation and enables the user to either save or transmit composite image data corresponding to the final composite image (692). In this case, the user wants to re-edit the composite image later, so the user selects the save option (525). Pressing the send button (524) would have allowed him to transmit the message. It will be appreciated that other embodiments may enable only transmission or only storage of composite image data.
Pressing the save button brings the user to a different screen which allows the user to save the composite image data corresponding to the composite image to the memory of the device. The user gives a file name (i.e. goodluck.file, in this case) by entering the file name in the filename field (554) and selects which format he wants the composite image data to be stored in. In this case he can choose either to store the two-dimensional positions of the individual user scribed delineations (and data corresponding to the delineations themselves), by selecting the '2D' button (553), or to store a merged composite image, by selecting the 'merged' button (552).
The composite image data for a merged composite image, in this case, comprises only the absolute position and colour of each pixel as a single layer; it does not associate identifiable information with each individual user scribed delineation and is, in this case, less useful for re-editing the composite image.
The '2D' file format allows two-dimensional positional information (arrangement data) to be stored for each of the individual user scribed delineations. This format may allow a device to read the file containing the composite image data and detect how the individual user scribed delineations were arranged to make up the composite image. This would allow the user scribed delineations to be subsequently rearranged or resized. As the user wishes to re-edit the composite image, he selects the '2D' button (553) and presses the continue button (555) to prompt the device to implement his choices. The device responds by creating a file with the corresponding data and saving it to the memory. While the device is processing the image in order to store it in memory, the screen (505) displays an egg timer icon (561), indicating that the device is busy, and a saving icon (562), indicating that the device is saving the file.
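The difference between the two storage choices can be sketched as below. The JSON layout and function names are hypothetical, chosen only to contrast a re-editable format (arrangement data plus per-delineation data) with a flat, merged one:

```python
import json

def save_2d(filename, arrangement, delineations):
    """Re-editable '2D' format: keep the arrangement data and each
    individual delineation's pixel data separately."""
    with open(filename, 'w') as f:
        json.dump({'format': '2D',
                   'arrangement': arrangement,     # per-delineation row/order/size
                   'delineations': delineations},  # name -> [[x, y], colour] lists
                  f)

def save_merged(filename, merged_pixels):
    """Merged format: keep only absolute pixel positions and colours as a
    single layer; smaller, but the arrangement cannot be changed later."""
    with open(filename, 'w') as f:
        json.dump({'format': 'merged', 'pixels': merged_pixels}, f)
```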
If, when selecting how to save the file (figure 6h), the user had wanted to return to the previous screen, he could have pressed the cancel button (554) at any time, which would have brought him to the previous screen (figure 6g). It will be appreciated that another embodiment may provide more advanced storage options. For example, another embodiment may provide the user with the facility to reduce or increase the resolution of one or more of the individual user scribed delineations and/or the composite image.
Figure 7 shows a flow diagram illustrating how a composite image is generated and stored/transmitted.
Figure 8 illustrates schematically a computer/processor readable media 800 providing a program comprising computer code which implements one or more of the aforementioned embodiments. In this example, the computer/processor readable media is a disc such as a digital versatile disc (DVD) or a compact disc (CD). In other embodiments, the computer readable media may be any media that has been programmed in such a way as to carry out an inventive function.
It will be appreciated that the previously described embodiments may relate to allowing a plurality of hand scribed images/text elements/delineations to be arranged to form a composite image. This composite image may be converted to a data format and stored and/or transmitted.
It will be appreciated by the skilled reader that any mentioned apparatus/device and/or other features of a particular mentioned apparatus/device may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched off) state and only load the appropriate software in the enabled (e.g. switched on) state. The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.

In some embodiments, a particular mentioned apparatus/device may be pre-programmed with the appropriate software to carry out desired operations, where the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality. Advantages associated with such embodiments can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.

It will be appreciated that any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and that these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
It will be appreciated that any "computer" described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some embodiments one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.
It will be appreciated that the term "signalling" may refer to one or more signals transmitted as a series of transmitted and/or received signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received simultaneously, in sequence, and/or such that they temporally overlap one another.
With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM etc), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way to carry out the inventive function.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed aspects/embodiments may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.
While there have been shown and described and pointed out fundamental novel features of the invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims

What is claimed is:
1. An apparatus comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enable display of the plurality of delineations corresponding to the user scribed delineation data;
enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generate composite image data corresponding to the composite image; and
transmit/store the composite image data.
2. The apparatus of claim 1 wherein the apparatus comprises:
a user interface, wherein the user interface is configured to detect motion/position of a scriber to generate user scribed delineations.
3. The apparatus of claim 2 wherein the user interface comprises a wand, a touchpad, a touch-screen, a mouse, a motion detector, a position detector or an accelerometer.
4. The apparatus of claim 1 wherein the apparatus is configured to:
change the resolution of at least one of the received individual user scribed delineations/composite image.
5. The apparatus of claim 1, wherein the apparatus is configured to:
perform run-length encoding on the user scribed delineation/composite image data corresponding to at least one of the respective received individual user scribed delineations/composite image.
6. The apparatus of claim 1, wherein the composite image data comprises:
arrangement data comprising information on the order of the individual user scribed delineations making up the corresponding composite image; and
user scribed delineation data corresponding to each of the plurality of user scribed delineations.
7. The apparatus of claim 1, wherein the composite image data comprises:
arrangement data comprising information on the relative position of the individual user scribed delineations making up the corresponding composite image, along two dimensions; and
user scribed delineation data corresponding to each of the plurality of user scribed delineations.
8. The apparatus of claim 1, wherein the composite image data comprises information on a single merged layer corresponding to the composite image.
9. The apparatus of claim 1 comprising a display configured to
enable at least one of the plurality of individual user scribed delineations to be displayed; and
enable the composite image to be displayed.
10. The apparatus of claim 9 comprising a display configured to enable at least one of the plurality of individual user scribed delineations and the composite image to be displayed substantially simultaneously.
11. The apparatus of claim 1, wherein the apparatus is configured to determine a boundary around a plurality of the delineations and to scale the respective created delineations such that the height of each user scribed delineation is the same whilst keeping the same aspect ratio as when it was scribed.
12. The apparatus of claim 1 wherein the user scribed delineation comprises a combination of one or more of a word, a letter character, a graphic character, a drawing, a phrase, a syllable, a punctuation mark and a sentence.
13. The apparatus according to claim 1, wherein the apparatus is a portable electronic device, circuitry for a portable electronic device, or a module for a portable electronic device.
14. A method comprising:
receiving user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enabling display of the plurality of delineations corresponding to the user scribed delineation data;
enabling arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generating composite image data corresponding to the composite image; and
transmitting/storing the composite image data.
15. A computer program configured to
receive user scribed delineation data corresponding to each of a plurality of individual user scribed delineations produced using a user interface of an electronic device;
enable display of the plurality of delineations corresponding to the user scribed delineation data;
enable arrangement of the plurality of individual user scribed delineations to form a corresponding composite image;
generate composite image data corresponding to the composite image; and
transmit/store the composite image data.
PCT/CN2010/075699 2010-08-04 2010-08-04 Apparatus and associated methods WO2012016379A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/075699 WO2012016379A1 (en) 2010-08-04 2010-08-04 Apparatus and associated methods

Publications (1)

Publication Number Publication Date
WO2012016379A1 true WO2012016379A1 (en) 2012-02-09

Family

ID=45558910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/075699 WO2012016379A1 (en) 2010-08-04 2010-08-04 Apparatus and associated methods

Country Status (1)

Country Link
WO (1) WO2012016379A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281449A (en) * 2007-04-03 2008-10-08 诺基亚(中国)投资有限公司 Hand-written character recognizing method and system
US20090247231A1 (en) * 2008-03-28 2009-10-01 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. Telecommunication device and handwriting input processing method thereof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105159479A (en) * 2015-07-10 2015-12-16 深圳市永兴元科技有限公司 Handwriting input method and apparatus

Similar Documents

Publication Publication Date Title
KR101590457B1 (en) Portable touch screen device, method, and graphical user interface for using emoji characters
US8347221B2 (en) Touch-sensitive display and method of control
EP2249240B1 (en) Mobile terminal capable of recognizing fingernail touch and method of controlling the operation thereof
US8704789B2 (en) Information input apparatus
CN105912071B (en) Mobile device and method for editing page for home screen
US20140123050A1 (en) Text input
KR101167352B1 (en) Apparatus and method for inputing characters of terminal
US8856674B2 (en) Electronic device and method for character deletion
US9342155B2 (en) Character entry apparatus and associated methods
US20130082824A1 (en) Feedback response
KR101102725B1 (en) Apparatus and method for inputing characters of terminal
US20140176600A1 (en) Text-enlargement display method
US20150007088A1 (en) Size reduction and utilization of software keyboards
US20120249425A1 (en) Character entry apparatus and associated methods
CN103631434B (en) Mobile device and its control method with the handwriting functions using multiple point touching
KR101046914B1 (en) Recursive key input apparatus and method thereof
US20130086502A1 (en) User interface
WO2012016379A1 (en) Apparatus and associated methods
US20180004303A1 (en) Electronic device, control method and non-transitory storage medium
CA2791486C (en) Electric device and method for character deletion
CN104182160A (en) Electronic device and method for controlling electronic device
WO2007052958A1 (en) Device having display buttons and display method and medium for the device
WO2012073005A1 (en) Predictive text entry methods and systems
WO2013048397A1 (en) Electronic device and method for character deletion
JP2011238286A (en) Character data input device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10855517

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10855517

Country of ref document: EP

Kind code of ref document: A1