GB2519312A - An apparatus for associating images with electronic text and associated methods - Google Patents

Info

Publication number
GB2519312A
GB2519312A GB1318285.2A GB201318285A
Authority
GB
United Kingdom
Prior art keywords
text
section
user
graphical representation
electronic text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1318285.2A
Other versions
GB201318285D0 (en)
Inventor
Arto Juhani Lehtiniemi
Jussi Artturi Leppanen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1318285.2A
Publication of GB201318285D0
Publication of GB2519312A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A section of electronic text is semantically analysed to identify a potential graphical representation for at least one feature, such as a physical aspect of a scene or character, depicted in the text. The image, animation, scene, diagram or picture may subsequently be displayed with the text section. It may also be changed, manipulated, swapped, confirmed or otherwise customised or selected by a user. Multiple iterations of semantic analysis on the section of text may identify more features. Features may include physical aspects such as size, colour, shape and shading, and spatial relationships such as relative size, relative position and spacing. Multiple graphics may be associated with the same text portion and the user may scroll and select between them; the graphics may also be ranked according to user-profile match criteria. Earlier or subsequent graphical representations may be displayed in a timeline, and selecting an image from the timeline may display an associated text portion. The images may be two or three dimensional and may be virtual bookmarks or illustrations in e-books, narratives, e-magazines, social media messages or user-generated content.

Description

AN APPARATUS FOR ASSOCIATING IMAGES WITH ELECTRONIC TEXT AND
ASSOCIATED METHODS
Technical Field
The present disclosure relates to electronic text and images, associated methods, computer programs and apparatus. Certain disclosed examples may relate to portable electronic devices, for example so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs), mobile telephones, smartphones and other smart devices, and tablet PCs.
The portable electronic devices/apparatus according to one or more disclosed examples may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/Multimedia Message Service (MMS)/e-mailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture functions (e.g. using a (e.g. in-built) digital camera), e-book functions, and gaming functions.
Background
Users are increasingly using their electronic devices to read text, for example using e-book readers or smartphones (particularly those with larger screens).
The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more examples of the present disclosure may or may not address one or more of the background issues.
Summary
In a first example there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on a semantic analysis of a section of electronic text, provide for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
The semantic analysis may identify one or more physical aspects of the one or more features depicted in the section of electronic text, and the graphical representation may provide a graphical representation of one or more of the physical aspects of the one or more features. The one or more physical aspects may comprise one or more of size, shape, colour and shading of the one or more depicted features.
The semantic analysis may identify a spatial interrelationship of one or more features depicted in the section of electronic text with other features depicted in the electronic text, and the graphical representation may provide a graphical representation of the spatial interrelationship of one or more of the depicted features. The spatial interrelationship may comprise one or more of relative size, relative position and spacing between the depicted features.
The semantic analysis may be one of a current semantic analysis performed on the section of text and a previous semantic analysis performed on the section of text. The previous semantic analysis may have been performed by one of: an apparatus associated with the same user; an apparatus associated with a different user; and an apparatus associated with a different user having a matching user-profile with the current user.
The apparatus may be configured to allow multiple graphical representations of the section of electronic text to be associated with the section of electronic text for subsequent reference. The apparatus may be configured to provide the subsequent reference by allowing user-scrolling through the multiple graphical representations associated with the section of electronic text. The apparatus may be configured to allow the multiple graphical representations of the section of text to be associated for subsequent reference with the section of electronic text in a ranking order according to a user-profile match criterion with the user-profile of the current user.
The one or more features may be at least one of one or more physical aspects of a scene or of a character depicted in the electronic text.
The apparatus may be configured to provide for the association by presenting the graphical representation of the one or more features for one or more of user-selection, user-confirmation or user-manipulation, to initiate the association. The user-manipulation may allow the user to change a physical characteristic of the graphical representation of the one or more features. The apparatus may be configured to present the graphical representation using a bank of template graphical representations.
The apparatus may be configured to provide for the association by presenting multiple optional graphical representations of the one or more features for one or more of user-selection, user-confirmation or user-manipulation to initiate the association.
The apparatus may be configured to associate earlier or subsequent graphical representations depicting the one or more features, in the section of electronic text or in other sections of the electronic text, in a timeline. The timeline may be a chronological timeline. The apparatus may be configured to allow for user selection of a particular graphical representation in the timeline and cause a corresponding page of electronic text associated with the particular graphical representation to be displayed.
The apparatus may be configured to allow for the graphical representation to be used as a virtual bookmark to facilitate location of the section of text associated with the graphical representation.
The subsequent reference may allow multiple subsequent simultaneous or time-spaced viewers of the section of text to view previously associated graphical representations for the section of text in association with the subsequent viewing of the section of text.
The apparatus may be configured to perform the semantic analysis of the section of text and/or receive the semantic analysis of the section of text from another apparatus.
The section of text may be a section from a narrative text (one which tells a story), an e-book, an e-magazine, a website, a social media message, or user-generated electronic content.
The graphical representation may comprise one or more of: a 2-D image; a 3-D image; and a 3-D navigable virtual landscape.
The section of electronic text may be from an e-book, and the apparatus may be configured to, based on the semantic analysis of the section of electronic text from the e-book, provide for association of a graphical representation of one or more features depicted in the section of electronic text from the e-book for subsequent reference with that section of electronic text from the e-book.
The apparatus may be a portable electronic device, a mobile telephone, a smartphone, a personal digital assistant, an e-book, a tablet computer, a surface computer, a navigation device, a desktop computer, or a module for the same.
According to a further example there is provided a computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor, perform at least the following: based on a semantic analysis of a section of electronic text, provide for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
A computer program may be stored on a storage media (e.g. on a CD, a DVD, a memory stick or other non-transitory medium). A computer program may be configured to run on a device or apparatus as an application. An application may be run by a device or apparatus via an operating system. A computer program may form part of a computer program product. Corresponding computer programs for implementing one or more of the methods disclosed are also within the present disclosure and encompassed by one or more of the described examples.
According to a further example, there is provided a method, the method comprising: based on a semantic analysis of a section of electronic text, providing for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
According to a further example there is provided an apparatus comprising, based on a semantic analysis of a section of electronic text, means for providing for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
The present disclosure includes one or more corresponding aspects, examples or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means and corresponding function units (e.g., semantic analyser, graphical representation associator) for performing one or more of the discussed functions are also within the present disclosure.
The above summary is intended to be merely exemplary and non-limiting.
Brief Description of the Figures
A description is now given, by way of example only, with reference to the accompanying drawings, in which:
figure 1 illustrates an example apparatus comprising a number of electronic components, including memory and a processor, according to one example of the present disclosure;
figure 2 illustrates an example apparatus comprising a number of electronic components, including memory, a processor and a communication unit, according to another example of the present disclosure;
figure 3 illustrates an example apparatus comprising a number of electronic components, including memory and a processor, according to another example of the present disclosure;
figures 4a-4d illustrate association of a graphical representation of a feature with the corresponding text describing the feature in a section of electronic text according to examples of the present disclosure;
figures 5a-5d illustrate association of two different graphical representations of a scene comprising multiple features with the corresponding text describing the scene according to examples of the present disclosure;
figures 6a-6b each illustrate user-customisation of a graphical representation of features depicted in electronic text according to examples of the present disclosure;
figure 7 illustrates a chronological timeline of graphical representations associated with features depicted in an electronic text according to examples of the present disclosure;
figures 8a-8b each illustrate an apparatus in communication with a remote computing element;
figure 9 illustrates a flowchart according to an example method of the present disclosure; and
figure 10 illustrates schematically a computer readable medium providing a program.
Description of Example Aspects
A user may enjoy seeing images representing scenes, stories and characters in an electronic text whilst reading the text. Not all electronic texts have images associated with the text. In these cases, there are no images for the user to view. If an electronic text has associated images, these images may have been prepared/approved by the text author and form part of the electronic text program/application.
A user may visualise a description in an electronic text differently to, for example, the author or illustrator of that electronic text. The user may not enjoy viewing the images, or the user's interpretation or opinion of the electronic text may be altered if he/she does not personally agree with the author's/illustrator's interpretation of the electronic text.
Examples discussed herein may be considered to, based on a semantic analysis of a section of electronic text, provide for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
For example, in a children's story about three bears living in a house in the woods, the electronic story/narrative text may be semantically analysed to identify the features of three bears, a house, and woods, and construct a graphical representation of these features to represent the story described in the text. If the characters are described (e.g. "daddy bear was big and brown with a cheerful face") these descriptive phrases may be identified in the semantic analysis and used to tailor the graphical representation of that particular character within the overall graphical representation of the story.
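The tailoring step described above can be sketched as a lookup from identified descriptive phrases to rendering attributes for a character. The attribute vocabulary, the default values and the mapping below are illustrative assumptions, not details taken from the patent; a real system would drive a template-based renderer rather than return a dictionary.

```python
# Assumed mapping from descriptive words found by the semantic analysis
# to rendering attributes of a character's graphical representation.
PHRASE_ATTRIBUTES = {
    "big": ("scale", 1.5),
    "small": ("scale", 0.6),
    "brown": ("colour", "brown"),
    "cheerful": ("expression", "smile"),
}

def tailor_character(descriptors):
    """Apply each recognised descriptor to a set of default attributes."""
    attributes = {"scale": 1.0, "colour": "grey", "expression": "neutral"}
    for word in descriptors:
        if word in PHRASE_ATTRIBUTES:
            key, value = PHRASE_ATTRIBUTES[word]
            attributes[key] = value
    return attributes

# "daddy bear was big and brown with a cheerful face"
daddy_bear = tailor_character(["big", "brown", "cheerful"])
```

Unrecognised descriptors are simply ignored, so the sketch degrades gracefully when the analysis extracts words with no graphical counterpart.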
Other examples depicted in the figures have been provided with reference numerals that correspond to similar features of earlier described examples. For example, feature number 100 can also correspond to numbers 200, 300 etc. These numbered features may appear in the figures but may not have been directly referred to within the description of these examples. These have still been provided in the figures to aid understanding of the further examples, particularly in relation to the features of similar earlier described examples.
Figure 1 shows an apparatus 100 comprising memory 107, a processor 108, input I and output 0. In this example only one processor and one memory are shown but it will be appreciated that other examples may utilise more than one processor and/or more than one memory (e.g. same or different processor/memory types).
In this example the apparatus 100 is an Application Specific Integrated Circuit (ASIC) for a portable electronic device with a touch sensitive display. In other examples the apparatus 100 can be a module for such a device, or may be the device itself, wherein the processor 108 is a general purpose CPU of the device and the memory 107 is general purpose memory comprised by the device. The display, in other examples, may not be touch sensitive.
The input I allows for receipt of signalling to the apparatus 100 from further components, such as components of a portable electronic device (like a touch-sensitive or hover-sensitive display) or the like. The output 0 allows for onward provision of signalling from within the apparatus 100 to further components such as a display screen, speaker, or vibration module. In this example the input I and output 0 are part of a connection bus that allows for connection of the apparatus 100 to further components.
The processor 108 is a general purpose processor dedicated to executing/processing information received via the input I in accordance with instructions stored in the form of computer program code on the memory 107. The output signalling generated by such operations from the processor 108 is provided onwards to further components via the output 0.
The memory 107 (not necessarily a single memory unit) is a computer readable medium (solid state memory in this example, but may be other types of memory such as a hard drive, ROM, RAM, Flash or the like) that stores computer program code. This computer program code stores instructions that are executable by the processor 108, when the program code is run on the processor 108. The internal connections between the memory 107 and the processor 108 can be understood to, in one or more examples, provide an active coupling between the processor 108 and the memory 107 to allow the processor 108 to access the computer program code stored on the memory 107.
In this example the input I, output 0, processor 108 and memory 107 are all electrically connected to one another internally to allow for electrical communication between the respective components I, 0, 107, 108. In this example the components are all located proximate to one another so as to be formed together as an ASIC, in other words, so as to be integrated together as a single chip/circuit that can be installed into an electronic device. In other examples one or more or all of the components may be located separately from one another.
Figure 2 depicts an apparatus 200 of a further example, such as a smartphone, e-reader or tablet computer. In other examples, the apparatus 200 may comprise a module for such a device or may just comprise a suitably configured memory 207 and processor 208.
The example of figure 2 comprises a display device 204 such as, for example, a liquid crystal display (LCD), e-Ink or touch-screen user interface. The apparatus 200 of figure 2 is configured such that it may receive, include, and/or otherwise access data. For example, this example 200 comprises a communications unit 203, such as a receiver, transmitter, and/or transceiver, in communication with an antenna 202 for connecting to a wireless network and/or a port (not shown) for accepting a physical connection to a network, such that data may be received via one or more types of networks. This example comprises a memory 207 that stores data, possibly after being received via antenna 202 or port or after being generated at the user interface 205. The processor 208 may receive data from the user interface 205, from the memory 207, or from the communication unit 203. It will be appreciated that, in certain examples, the display device 204 may incorporate the user interface 205. Regardless of the origin of the data, these data may be outputted to a user of apparatus 200 via the display device 204, and/or any other output devices provided with the apparatus. The processor 208 may also store the data for later use in the memory 207. The memory 207 may store computer program code and/or applications which may be used to instruct/enable the processor 208 to perform functions (e.g. read, write, delete, edit or process data).
Figure 3 depicts a further example of an electronic device 300 comprising the apparatus of figure 1. The apparatus 100 can be provided as a module for device 300, or even as a processor/memory for the device 300 or a processor/memory for a module for such a device 300. The device 300 comprises a processor 308 and a storage medium 307, which are connected (e.g. electrically and/or wirelessly) by a data bus 380. This data bus 380 can provide an active coupling between the processor 308 and the storage medium 307 to allow the processor 308 to access the computer program code. It will be appreciated that the components (e.g. memory, processor) of the device/apparatus may be linked via cloud computing architecture. For example, the storage device may be a remote server accessed via the internet by the processor.
The apparatus 100 in figure 3 is connected (e.g. electrically and/or wirelessly) to an input/output interface 370 that receives the output from the apparatus 100 and transmits this to the device 300 via data bus 380. Interface 370 can be connected via the data bus 380 to a display 304 (touch-sensitive or otherwise) that provides information from the apparatus 100 to a user. Display 304 can be part of the device 300 or can be separate.
The device 300 also comprises a processor 308 configured for general control of the apparatus 100 as well as the device 300 by providing signalling to, and receiving signalling from, other device components to manage their operation.
The storage medium 307 is configured to store computer code configured to perform, control or enable the operation of the apparatus 100. The storage medium 307 may be configured to store settings for the other device components. The processor 308 may access the storage medium 307 to retrieve the component settings in order to manage the operation of the other device components. The storage medium 307 may be a temporary storage medium such as a volatile random access memory. The storage medium 307 may also be a permanent storage medium such as a hard disk drive, a flash memory, a remote server (such as cloud storage) or a non-volatile random access memory. The storage medium 307 could be composed of different combinations of the same or different memory types.
Figures 4a-4c illustrate an example of associating a graphical representation of a feature described in a section of electronic text with the corresponding text describing the feature. An apparatus 400 is displaying a section of electronic text 402 from a novel. The apparatus may be, for example, a mobile telephone, a smartphone, a personal digital assistant, an e-book, a tablet computer, a surface computer, a navigation device, a desktop computer, or a module for the same.
Within the second displayed paragraph 404, a feature is described as a "solitary tall lighthouse" 406. The apparatus 400 has identified this feature based on a semantic analysis of the text 402. Such semantic analysis may comprise, for example, parsing the text 402 to identify common words throughout the text and identify descriptive words associated with the identified common words. Therefore, if a passage of the text 402 is describing a scene with a lighthouse, the word "lighthouse" may appear more commonly in that section of text than may be expected, and thus "lighthouse" may be identified as an important word in the current passage of text 402. The identification of words which may be representative of the theme of a section of text may be identified using, for example, term frequency-inverse document frequency (tf-idf), which is a numerical statistic which reflects how important a word is in a particular document or section of text.
The tf-idf value increases with the number of times a word appears in the text, but is offset by the general average frequency of the word in the language. In some examples, databases describing word synonyms may be used for assisting the analysis. An example of such a database is the WordNet lexical database for English.
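The tf-idf weighting described above can be sketched in a few lines of Python. The exact weighting formula and the toy three-section corpus are illustrative assumptions; a production system would use a proper tokeniser, one of the standard tf-idf variants, and a much larger document collection.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lower-case the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf(section, corpus_sections):
    """Score each word in `section` by term frequency weighted by
    inverse document frequency across `corpus_sections`."""
    words = tokenize(section)
    tf = Counter(words)
    n_docs = len(corpus_sections)
    scores = {}
    for word, count in tf.items():
        # Words common to many sections (e.g. "the") get a low idf.
        df = sum(1 for s in corpus_sections if word in tokenize(s))
        scores[word] = (count / len(words)) * math.log(n_docs / df)
    return scores

sections = [
    "The lighthouse stood on the cliff. The solitary tall lighthouse "
    "flashed its beam across the bay.",
    "The town was quiet that evening and the streets were empty.",
    "Rain fell on the harbour as the boats came in for the night.",
]
scores = tfidf(sections[0], sections)
top_word = max(scores, key=scores.get)  # → "lighthouse"
```

Here "the" occurs in every section, so its idf (and score) is zero, while "lighthouse" is frequent in the first section and absent elsewhere, so it is selected as the important word for that passage.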
Semantic parsing of the text may identify descriptive words associated with the identified "lighthouse" feature, which in this example are "solitary" and "tall". Thus the semantic analysis identifies one or more physical aspects ("solitary" and "tall") of the feature 406 depicted in the section of electronic text 402. The graphical representation 410 provides a graphical representation of one or more of the physical aspects ("solitary" and "tall") of the feature (the lighthouse). The apparatus may be configured to perform the semantic analysis of the section of text and/or may be configured to receive the semantic analysis of the section of text from another apparatus such as from a remote server.
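The pairing of descriptive words with an identified feature word can be illustrated with a small sketch. The hand-made DESCRIPTIVE word list below is an assumption standing in for a proper part-of-speech tagger or lexical database; it simply keeps the contiguous run of known descriptive words immediately before each occurrence of the feature.

```python
import re

# Assumed stand-in for a part-of-speech tagger: a tiny set of
# words we treat as descriptive (adjectives).
DESCRIPTIVE = {"solitary", "tall", "big", "brown", "dark", "old", "red"}

def descriptors_for(feature, text):
    """Return descriptive words directly preceding `feature` in `text`."""
    found = []
    pattern = r"((?:\w+\s+){0,3})" + re.escape(feature)
    for match in re.finditer(pattern, text.lower()):
        preceding = match.group(1).split()
        run = []
        # Walk backwards from the feature, stopping at the first
        # non-descriptive word so only the adjacent run is kept.
        for word in reversed(preceding):
            if word in DESCRIPTIVE:
                run.insert(0, word)
            else:
                break
        found.extend(run)
    return found

text = "Across the bay stood a solitary tall lighthouse, its lamp unlit."
lighthouse_descriptors = descriptors_for("lighthouse", text)  # ["solitary", "tall"]
```

On the lighthouse example from the text, this recovers "solitary" and "tall" as the physical aspects to pass on to the graphical representation.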
In this example the physical aspects comprise the qualities of "solitary" and "tall". In other examples, the physical aspects may comprise, for example, the size (e.g., large, small, tall, short, thin, fat, wide, narrow), shape (e.g., round, square, pointed), colour or shading (e.g., dark, bright, shadowy) of the one or more features.
In some examples the semantic analysis may be a current semantic analysis performed on the section of text 404. The text may be semantically analysed "on the fly", as each page of electronic text is displayed or as each new chapter in an e-book is reached by the reader, for example.
In some examples the semantic analysis may be a previous semantic analysis performed on the section of text. For example, the semantic analysis of the electronic text 402 may have been performed prior to the user reading the text 402, and when the user displays a section of the text 402 for reading, associations between features depicted in the text and graphical representations of those features are pre-stored and available for display.
The previous semantic analysis may have been performed by an apparatus associated with the same user (such as the user's desktop computer or laptop computer) or an apparatus associated with a different user (such as a desktop computer or smartphone belonging to a different user who has access to the same piece of electronic text 402, for example the same e-book downloaded from the same online store).
The previous semantic analysis may have been performed by an apparatus associated with a different user having a matching user-profile with the current user. For example, the user currently reading the electronic text may have a matching profile with another user who has read the same electronic text. A profile may be, for example, a user profile stored at an online store from which electronic texts are accessed. An example of characteristics stored in matching user profiles may include, for example "female, 30-40 years old, interested in crime fiction". Another example is that two or more users are all based in the same country. Profile matching between users may be done based on users' previous activity in creating and/or editing illustrations for particular types of electronic texts, such as, for example, horror e-books, cookery e-magazines, or technical documents. Profile matching between users may be done based on what sections of particular electronic texts a user creates/edits illustrations for, such as character portraits, descriptions of outdoor locations or indoor scenes, descriptions of particular groups of objects such as machinery, vehicles, animals, or monsters, for example. Profile matching may be done based on the types/styles of illustrations created or edited by users. For example if certain users create illustrations in a futuristic style, using black and white rather than multicolours, or using a particular image creating application, then they may enjoy the illustrations created by other such similar users.
In this way, in the case where more than one suitable graphical representation is available to be associated with a feature in an electronic text, a user may previously have selected one graphical representation. A second user having a profile matching the first user may automatically, or upon user confirmation, be presented with the same graphical representation when that second user reads the same electronic text. Users having similar profiles may visualise described scenes and characters in a similar way, or may appreciate similar styles of graphical representation, for example.
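One way to realise the user-profile match criterion described above is a set-similarity score over profile attributes, used to rank the graphical representations previously chosen by other users. The Jaccard measure, the attribute strings and the candidate users below are illustrative assumptions, not details from the patent.

```python
def profile_match(profile_a, profile_b):
    """Jaccard similarity between two sets of profile attributes:
    size of the intersection divided by size of the union."""
    union = profile_a | profile_b
    return len(profile_a & profile_b) / len(union) if union else 0.0

# Hypothetical profiles: the current reader and two users who have
# previously associated graphical representations with this e-book.
current = {"female", "30-40", "crime-fiction", "uk"}
candidates = {
    "user_a": {"female", "30-40", "crime-fiction", "fi"},
    "user_b": {"male", "20-30", "cookery", "uk"},
}

# Rank candidate users (and hence their stored graphical
# representations) by how well their profile matches the current user.
ranked = sorted(candidates,
                key=lambda u: profile_match(current, candidates[u]),
                reverse=True)
```

With these toy profiles, user_a (three shared attributes out of five) ranks above user_b (one shared attribute out of seven), so user_a's illustrations would be offered first.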
The apparatus 400 then provides for association of a graphical representation of the feature depicted in the electronic text for subsequent reference with that section of electronic text. Thus a graphical representation 410 of a lighthouse 408, as shown in so figure 4b, is associated with the section of electronic text 406 which references the lighthouse.
In some examples, as in figure 4c, the association may be made so that the graphical representation 410 is automatically embedded within the text 402 at a location close to the occurrence of the word or words 406 describing the feature. Figure 4c shows the image of a solitary tall lighthouse 410 included in the text 402 just before the description of the lighthouse. The user may, in some examples, be able to choose a reading mode in which the user can toggle whether or not to automatically include graphical representations of features within the text.
In some examples, as in figure 4d, the association may be made so that the text 406 which has a graphical representation 410 associated with it is highlighted 412 to indicate to a reader that an image is available for viewing which relates to that text. The user may, for example, select 414 the highlighted text 412 and the graphical representation 410 may be displayed to the user. The graphical representation may be displayed by, for example, overlaying a portion of the text, over substantially all of the display screen of the apparatus 400, or being embedded within the text 402 as shown in figure 4c. In some examples (described further in relation to figures 6a and 6b) the user may be able to edit the graphical representation.
The graphical representation may comprise one or more of: a 2-D image; a 3-D image; and a 3-D navigable virtual landscape. For example, a 2-D map image may be associated with a section of text describing the layout of a landscape or location. A 3-D image may be associated with a section of text describing the outside of a building or a character. A 3-D navigable virtual landscape may be associated with a section of text describing the interior of a house, so that the user may interact with the virtual landscape and explore the rooms and layout of the house, for example.
Figures 5a-5d illustrate an example of associating a graphical representation of multiple features described in a section of electronic text with the corresponding text describing the features. An apparatus 500 is displaying a section of electronic text from a novel. The text describes a scene with two characters (John 506 and Karen 502) in a given location (a beach 508) with a prop (revolver 510).
The relative positions of features are also provided in the text, namely that Karen 502 "stood close" 504 to John 506 and that the revolver 510 was "next to him" 512 "on the sand" 516. The apparatus provides for association of a graphical representation (in figures 5b and 5c) of the physical features 502, 506, 508, 510 depicted in the electronic text for subsequent reference with that section of the electronic text, based on a semantic analysis of the electronic text.
The semantic analysis is able to identify the features 502, 506, 508, 510 and their relative positions 504, 512, and from these build up a graphical representation of the scene described in the text.
In one example shown in figure 5b, a beach scene 528 is used as a landscape due to the location "beach" 508 being identified in the text. A male character 526 (representing John 506) is positioned on the beach 508 closely next to 524 a female character 522 (representing Karen 502) based on the descriptor "stood close" 504 in relation to John 506 and Karen 502. Based on the text "next to him" 512 and "on the sand" 516, a "revolver" 530 is positioned close to 532 the male character 526 on the sand 534 of the beach 528 in the graphical representation.
Thus in this example, the semantic analysis identifies a spatial interrelationship 524, 532; 544, 552 of one or more features 522, 526, 530; 542, 546, 550 depicted in the section of electronic text with other features 522, 526, 530; 542, 546, 550 depicted in the electronic text and the graphical representation 520, 540 provides a graphical representation of the spatial interrelationship 524, 532; 544, 552 of one or more of the features 522, 526, 530; 542, 546, 550. The spatial interrelationship may comprise one or more of relative size (e.g., "tiny", in "the tiny ant marched along the leaf"), relative position (e.g., "high on the wall" in "the cross was marked high on the wall of the castle") and spacing between the features (e.g., "very close" in "she stood very close to him").
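As an illustration only (not the patent's claimed implementation), the mapping from relational phrases such as "stood close" or "next to" to layout constraints between identified features might be sketched as follows; the phrase table, distance values and pairing heuristic are all assumptions for the sake of the example:

```python
# Minimal sketch: map relational phrases found in a sentence to coarse
# spacing constraints between two named features. A real semantic
# analyser would use dependency parsing rather than substring matching.
RELATION_PHRASES = {
    "stood close": ("near", 1.0),   # small spacing between features
    "next to": ("near", 0.5),
    "far from": ("apart", 10.0),
}

def extract_relations(sentence, features):
    """Return (feature_a, relation, feature_b, distance) tuples for each
    relational phrase appearing in a sentence that mentions two or more
    known feature names."""
    relations = []
    lowered = sentence.lower()
    for phrase, (rel, dist) in RELATION_PHRASES.items():
        if phrase in lowered:
            # Naive pairing: take the first two features mentioned.
            mentioned = [f for f in features if f.lower() in lowered]
            if len(mentioned) >= 2:
                relations.append((mentioned[0], rel, mentioned[1], dist))
    return relations

triples = extract_relations(
    "Karen stood close to John; the revolver lay next to him on the sand.",
    ["Karen", "John", "revolver"],
)
```

The resulting constraints could then drive the placement of character and prop images within the background scene, as described for figures 5b and 5c.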
Of course, there may be more than one way in which a scene may be graphically represented while corresponding to the semantically analysed text. Figure 5c shows an alternative graphical representation of the same text of figure 5a, in which a male character 546 is positioned on the beach 548 closely next to 544 a female character 542 on a beach 548 and a revolver 550 is positioned close to 552 the male character 546 on the sand 554. Figure 5c also includes additional features not explicitly described in the text, such as the sun, birds, and plants on the beach. These additional features may be included automatically in the generation of the graphical representation as features likely to be associated with features linked to semantically analysed text. In some examples, such additional features may be added based on co-occurrence information with one or more features explicitly described in the text. Thus, the apparatus may be configured to build a co-occurrence model for one or more features of textual content and use this to add information on co-occurring features based on their common co-occurrence with explicitly described features. Such co-occurrence models may be memory-based (that is, comprising counting and storing information on feature co-occurrence) or may utilize one or more statistical models such as probabilistic latent semantic analysis (PLSA). The additional features may, in some examples, be included by a user who has edited the graphical representation, using the automatically generated graphical representation as a template for a customised/personalised image.
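The memory-based variant mentioned above (counting and storing feature co-occurrence) could be sketched as below; the scene data and the suggestion interface are invented for illustration, and a PLSA-style statistical model would replace the raw counts:

```python
from collections import Counter
from itertools import combinations

# Memory-based co-occurrence sketch: count how often features appear
# together in the same scene, then suggest likely companions for a
# feature that was explicitly described in the text.
class CooccurrenceModel:
    def __init__(self):
        self.counts = Counter()

    def observe(self, scene_features):
        # Count each unordered pair of distinct features in the scene.
        for a, b in combinations(sorted(set(scene_features)), 2):
            self.counts[(a, b)] += 1

    def companions(self, feature, top_n=3):
        # Features most often seen alongside the given feature.
        scored = Counter()
        for (a, b), n in self.counts.items():
            if a == feature:
                scored[b] += n
            elif b == feature:
                scored[a] += n
        return [f for f, _ in scored.most_common(top_n)]

model = CooccurrenceModel()
model.observe(["beach", "sun", "birds"])
model.observe(["beach", "sun", "palm"])
model.observe(["castle", "moat"])
```

For a text that mentions only "beach", the model would then propose sun, birds or palm trees as plausible additional features for the graphical representation.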
Other examples of template landscapes in which users may be able to build up personal graphical representations depicting the electronic text (or upon which a scene described in the electronic text is automatically generated for display as a graphical representation based on semantic analysis of the text) may be, for example, a forest, a castle, in the sky, underwater, in a cave, inside a building, in space, or in a city. The graphical representations may comprise real-life photographs and/or computer generated depictions.
In some examples, these graphical representations may be automatically generated by an image building application which has a stock of default background scenery, characters and props, and arranges the identified features in the image according to the semantic analysis of the text. In some examples, the overall graphical representation may be generated by a user (which may be the user reading the electronic text, or another user who has read the same section of electronic text). Such users may create a graphical representation to show how they imagine the described scene to look. They may do this by, for example, free-drawing in an art application, or by using an image building application as described above and placing pre-drawn items in a pre-drawn background scene (and possibly modifying/customising the items). In some examples, a user may be able to freely select a word or phrase in a section of electronic text and select an option to enter an image creation/editing mode so that the user can create a graphical representation of the selected text for subsequent reference with that section of text.
Two or more graphical representations, as shown in figures 5b and 5c, may be generated for association with the section of semantically analysed electronic text. Thus the apparatus may be configured to allow multiple graphical representations 520, 540 of the section of text to be associated with the section of text for subsequent reference.
The apparatus in this example is configured to provide the subsequent reference from the electronic text to a graphical representation by allowing user-scrolling through the multiple graphical representations associated with the section of text, as shown in figure 5d. Upon the user reaching a part of the electronic text with two or more graphical representations associated with it, the user may be presented with a scroll menu displaying thumbnail images of the available graphical representations so that the user can see one or more of the images. The option to view associated images/graphical representations may be automatically displayed, or the user may be able to select a "show possible images" option or similar which may be highlighted/revealed when the user reaches a section of text with associated images/graphical representations.
A user may be able to scroll through multiple graphical visualisations for a particular section of text in other ways. If one graphical representation of several available representations is displayed in-line with the text as in figure 4c, a user may be able to swipe left or right over that image, or hold/hover a finger over the image, for example, to see other graphical representations for that same section of text. If a particular graphical representation from several available is presented as a default, then the default may be the highest user-rated graphical representation, or a graphical representation created by a preferred artist of the user, for example.
In the examples shown in figures 4a-4d and 5a-5d, the graphical representations depict scenes. In other examples, a graphical representation of a character may be available.
For example, if a section of text (or in some examples if more than one section of text are identified as describing the same character in the story) includes the descriptions "his thick black hair whipped in the winds", "his heavy brow, dark eyes and long beard" and "with his red and gold tunic showing the symbol of the dragon", then these sections of text may be semantically analysed and used to build up a graphical representation of that character which a user may be able to view when reading about that character in the story. Such a graphical representation would be of a man, showing his particular hair, facial features and costume as described in the text. In some examples, the text may be semantically analysed to identify, for example, a period of time (e.g., in the past "in 1485", or the future "he boarded the spaceship"), or a country/location (e.g., "in the heart of Africa", "deep in Siberia"), and such descriptors may also be identified and used to build up a graphical representation of a character which fits with the story setting. For example, the costume of a character in a hot country may be light and thin whereas the costume of a character in a cold region may be heavy with many layers and fur lining, for example.
The apparatus may be configured to provide for the association between the graphical representation(s) and the section of electronic text by presenting the graphical representation of the one or more features (as in figures 4a-4d) or by presenting multiple optional graphical representations of the one or more features (as in figures 5a-5d) for user-selection, user-confirmation, and/or user-manipulation, to initiate the association.
Thus, for example, a user may be prompted to select a preferred graphical representation from several presented, as in figure 5d, or may be prompted to confirm that a particular presented graphical representation (such as the image in figure 4b) is appropriate, so that the selected/confirmed graphical representation is associated with the section of text for future reference by the user. In some examples the selected/confirmed graphical representation may be associated with the section of text for future reference by one or more other users who read the same section of text. In this way if lots of different users are all reading the same e-book, a series of graphical representations for a section in that e-book may be formed and ranked in order of most preferred to least preferred by readers of that book. A user may opt to automatically see the highest ranked graphical representations for a section of text according to other user ratings, for example. The other user ratings may be from users who have read the same e-book and selected a preferred graphical representation, or may be from a subset of such users, for example only those users who have a profile which matches that of the current user.
In some examples, the subsequent reference may allow multiple subsequent simultaneous or time-spaced viewers of the section of text to view previously associated graphical representations for the section of text in association with the subsequent viewing of the section of text. Thus, several users may (simultaneously or on separate occasions) view previously associated graphical representations in relation to a section of electronic text. As an example, users may be able to access an online e-book store to browse for e-books. The graphical representations associated with a preview section of the text of an e-book may be viewed by the users to help them decide if the book will be interesting to them. In other examples, users may be able to access the graphical representations associated with the electronic text in an e-book on an online fan forum, or similar. Within the forum a user may be able to, alone or collaboratively with other online users, edit graphical representations associated with the text of the e-book.
Figures 6a-6b each illustrate user customisation of a graphical representation of features depicted in electronic text.
Figure 6a shows that a user can rank a graphical representation with a rating 604. Of course any suitable rating system may be used to rank the graphical representations (e.g., marks out of 10, or using different ranking categories based on quality/accuracy or interest, for example). The ranking may be explicit, such as being explicitly provided by users, or implicit, such as rankings being automatically collected based on user behaviour patterns (e.g., the number of times of viewing a visualisation). A user may be able to see the multiple graphical representations as ranked by a group of users having a particular profile, such as a profile matching that of the user. Thus, in the case where multiple graphical representations exist for a section of electronic text, the apparatus may be configured to allow the multiple graphical representations of the section of text to be associated for subsequent reference with the section of text in a ranking order according to a user-profile match criterion with the user-profile of the current user.
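One possible way to combine explicit ratings with an implicit view-count signal, restricted to raters whose profile matches the current user, is sketched below; the record layout, field names and weighting are illustrative assumptions, not taken from the patent:

```python
# Sketch: rank the graphical representations for a section of text by
# explicit star ratings from profile-matched users, plus a weak
# implicit signal from view counts.
def rank_representations(reps, user_profile):
    def score(rep):
        matching = [r for r in rep["ratings"]
                    if r["profile"] == user_profile]
        explicit = sum(r["stars"] for r in matching)
        implicit = rep["view_count"] * 0.1   # illustrative weight
        return explicit + implicit
    return sorted(reps, key=score, reverse=True)

reps = [
    {"id": "grassy_outcrop", "view_count": 5,
     "ratings": [{"profile": "fantasy", "stars": 2}]},
    {"id": "rocky_cliff", "view_count": 1,
     "ratings": [{"profile": "fantasy", "stars": 5}]},
]
ordered = rank_representations(reps, "fantasy")
```

The highest-scoring representation would then be shown first in the scroll menu of figure 5d, or used as the in-line default of figure 4c.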
Different users may experience the content of e-books and other electronic text differently. Thus different users may visualise content in the text differently. It may be advantageous to allow different users to create and edit graphical representations to allow them to personalise the reading experience. Figure 6a shows that a user can customise a graphical representation. For example, if a user is presented with the graphical representation 600 of figure 6a but considers that the lighthouse as described in the text is on a rocky cliff, rather than a grassy outcrop surrounded by trees, the user can change the graphical representation by selecting the "edit" tool 606. Thus the apparatus is configured to allow user-manipulation of a graphical representation such that a user can change a physical characteristic of the graphical representation of the one or more features (the lighthouse in this example). The user may be able to edit the image in an image editing application, which may in some examples be an application configured for editing graphical representations for association with electronic text as described herein. The edited graphical representation may be stored as a new graphical representation, or may overwrite the existing graphical representation.
In examples where a user modifies a default visual representation, the modified representation may be re-used as a default template when the same object/scene is mentioned again in the electronic text. If the object/scene is associated in the later text with additional objects/features, then template illustrations for those objects/features may be included with the new default visualisation (either positioned in the visualisation or available for user placement within the visualisation). For example, if a user creates a visualisation for a character called "Bill" in an e-book, then this same visualisation may be used as a template for further appearances of Bill in the text. If Bill is described in a later passage of text as wearing, for example, a furry hat, then a furry hat image may be placed on Bill's head in the visualisation to update his appearance according to the story progression in the e-book.
Thus, a group of users may collaboratively edit a graphical representation for association with the electronic text. If multiple users edit a particular graphical representation, they may all be shown as authors for that image.
Figure 6b illustrates another way in which users can create and edit a graphical representation 650 for association with a section of electronic text. The text in this example describes a medieval tavern. If a user is presented with the graphical representation 650, in this example he can choose from a bank of template graphical representations so that he can include one or more of the presented template graphical representations of features in the text in the overall graphical representation 650. The semantic analysis of the text, in this example, has identified the words "medieval tavern" thereby providing the basis for the background image 652. The semantic analysis has also identified the words "cake", "pickaxe" and "armour" in the section of text. In this example, the user is presented with these features 656 in a selectable menu 654 from which the user can select which items he feels should be included in the graphical representation 650, and position them accordingly within the background image 652. In this way, there is an increased level of user interactivity with creating the graphical representation and the sophistication of the semantic analysis need not be such that the relationship between identified features is determined and accounted for as in relation to figures 5a-5c. By presenting items identified in the analysed text for user inclusion, the user can easily add relevant items to the graphical representation. The user may in some examples be able to modify such items and/or create new items (e.g., items not presented to the user).
Items offered for inclusion in a selection menu 654 may be stored in a database/repository which is stored at the apparatus or accessible by the apparatus (e.g., on a remote server or cloud). Such items may be labelled with descriptive metadata labels, such that they are selected for inclusion in a menu 654 based on their metadata label matching a word identified in the semantic analysis of the section of text. In some examples users may be able to search in the database/repository for items even if they are not identified from the analysis of the electronic text, and in some examples users may be able to create items for inclusion in the database/repository.
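The metadata-label matching described above could work along the following lines; the repository contents, file names and label sets are hypothetical:

```python
# Sketch: repository items carry descriptive metadata labels; an item
# is offered in the selection menu when one of its labels matches a
# word identified by the semantic analysis of the section of text.
REPOSITORY = [
    {"item": "ale_cake.png", "labels": {"cake", "food"}},
    {"item": "pickaxe.png", "labels": {"pickaxe", "tool"}},
    {"item": "plate_armour.png", "labels": {"armour", "clothing"}},
    {"item": "dragon.png", "labels": {"dragon", "creature"}},
]

def build_menu(identified_words):
    """Return repository items whose labels intersect the words the
    semantic analysis identified in the section of text."""
    words = {w.lower() for w in identified_words}
    return [entry["item"] for entry in REPOSITORY
            if entry["labels"] & words]

menu = build_menu(["cake", "pickaxe", "armour"])
```

For the medieval tavern example, the menu 654 would then offer the cake, pickaxe and armour items, while unrelated items such as the dragon stay out of the menu (though remaining searchable in the repository).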
In some examples, an electronic text may be provided with a graphical representation template which users can start from to create graphical representations associated with a section of text in the electronic text. For example, a fantasy novel e-book may be provided with a map of the fantasy world in which the story is set. Users may be able to create graphical representations for different locations on the map, and associate those locations by tagging them with an appropriate keyword, such as the name of the location on the map. When the name of that location is mentioned in the electronic text, a user may be able to select it to see the map with the graphical representation of that location, created by a user, on the map, for example.
In some examples, the graphical representation may be available for sale or free download separately from the electronic text. In this way if a user purchases/downloads an electronic text such as an e-book, which at the time of download does not have any associated graphical representations, the user may, at a later time when one or more graphical representations are available, access and download these. The user may then re-read the e-book and have a different reading experience with the addition of the newly available images.
Figure 7 illustrates a chronological timeline of graphical representations associated with features depicted in an electronic text. In this example, the overall story in the electronic text first describes a scene of a hut in the mountains 702. This is followed by a description of a royal fortress 704. The story concludes with a description of a futuristic island world 706. The apparatus in this example is configured to associate earlier or subsequent depictions of the one or more features (the hut 702, fortress 704, and island 706), in the electronic text in a chronological timeline. Such a chronological timeline may be viewed by a user in the same way as a user may review the description on the back of a book to decide whether or not the book is of interest to them. A chronological timeline visual description may provide an appealing and readily comprehensible overview of the contents of the e-book or electronic text. Another example of a chronological timeline may be of the evolution of a particular character as they progress through a story. The character may be represented in a series of images showing how the character ages and develops throughout the story of the electronic text.
In some examples a user may be able to select a particular graphical representation on the timeline and the corresponding page of the electronic text may be automatically displayed. In this way each image in the timeline may be used as a virtual bookmark, providing a readily understandable visual prompt for different themes and sections in an electronic text. For example, in an e-magazine, each feature may be represented on a visual timeline showing graphical representations of the topics discussed in the magazine. The user could select a graphical representation of interest and the corresponding feature in the magazine could be readily displayed (e.g., automatically, or after user confirmation). Such a timeline need not be chronological but may represent different themes in, for example, a non-fictional text such as a magazine or encyclopaedia.
In certain examples, a graphical representation of the current portion of an electronic text may be generated just prior to a user ending his/her viewing/reading. The graphical representation may be used as a virtual pictorial bookmark. When the user wishes to recommence reading, he/she may be presented with the virtual bookmark as a readily understandable prompt of the latest events which the user read about before he/she stopped reading last time. Such a virtual bookmark may help a user to quickly resume reading as they will be prompted regarding the previously read story. It may also serve to enhance the user's reading experience, as he/she may be presented with a personalised image each time reading recommences based on where the user read to in the story.
Advantageously, in certain embodiments, the user need not perform any particular input to create the visual bookmark, as it may be automatically generated based on, for example, the last page of text read by the user when the user leaves the electronic text/switches off the reading device.
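The virtual-bookmark flow described in the last two paragraphs might be sketched as follows; the record layout is invented, and the image generator is a stand-in for the semantic-analysis pipeline that would produce the pictorial bookmark:

```python
# Sketch: when the reader closes the text, record the last page read
# and attach the graphical representation generated for it; on resume,
# present that image first as a prompt of the latest events.
def make_bookmark(book_id, last_page, generate_image):
    return {
        "book": book_id,
        "page": last_page,
        "image": generate_image(last_page),   # pictorial prompt
    }

def resume(bookmark):
    # Show the image before the text, then continue at the saved page.
    return (bookmark["image"], bookmark["page"])

# Hypothetical generator standing in for the semantic-analysis step.
bm = make_bookmark("ebook-42", 137, lambda p: f"scene_for_page_{p}.png")
image, page = resume(bm)
```

In line with the paragraph above, `make_bookmark` would be invoked automatically when the user leaves the text or switches off the reading device, with no explicit input from the user.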
While the discussion above mentions the electronic text of e-books and e-magazines, the electronic text in other examples may be from a website, a social media message, or user-generated electronic content, for example. A user may wish to view a graphical representation of a location or person described in an online blog, and so may be able to select the relevant text and choose an "analyse" option to have that text semantically analysed and create a graphical representation of that text for viewing and/or editing.
Use of graphical representations as discussed here may advantageously provide a way for electronic text providers to distinguish their texts from those of other providers.
Graphical representations may advantageously be used to provide quick previews of the content of electronic texts for users, without the user necessarily having to read the text. Use of graphical representations may be advantageous for users who may be able to re-read books and have a different reading experience each time, by viewing different graphical representations of the text content. Such graphical representations can advantageously be automatically generated by semantic analysis of the text to identify key features described in the text. Such representations may also be customised by one or more users to represent the text as the different readers visualise the story and content of the text. Visual bookmarks may be electronically created to aid the user in remembering where they previously finished reading and advantageously provide a quick, visually appealing prompt of the part of the text which was last read.
Figure 8a shows an example of an apparatus 800 in communication with a remote server 804. Figure 8b shows an example of an apparatus 800 in communication with a "cloud" 810 for cloud computing. In figures 8a and 8b, apparatus 800 (which may be apparatus 100, 200 or 300) is also in communication with a further apparatus 802. In some examples the further apparatus 802 may be, for example, a display screen for a tablet computer or electronic book/reader (so that the apparatus 800 and 802 are comprised in the same device). In some examples the apparatus 800 and further apparatus 802 may each be separate user computers or modules for separate user computers, each in communication with a remote server 804 or cloud 810 (and possibly in communication with each other, for example through an ad-hoc network). In some examples more than one further apparatus 802 may be in communication with the apparatus 800.
Communication may be via a communications unit, for example. In some examples the apparatus 802 may be able to communicate directly with the remote server 804 or cloud 810, for example if both the apparatus 800 and further apparatus 802 are user computers/devices, and semantic analysis is performed at the remote server 804 or cloud 810.
Figure 8a shows the remote computing element to be a remote server 804, with which the apparatus 800 (and in some examples apparatus 802) may be in wired or wireless communication (e.g. via the internet, Bluetooth, NFC, a USB connection, or any other suitable connection as known to one skilled in the art). In figure 8b, the apparatus 800 is in communication with a remote cloud 810 (which may, for example, be the Internet, or a system of remote computers configured for cloud computing). In some examples the further apparatus 802 may also be in communication with the remote server 804 or remote cloud 810.
Figure 9a illustrates a method 900 according to an example of the present disclosure.
The method comprises, based on a semantic analysis of a section of electronic text, providing for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
It will be appreciated that the aforementioned association may be done by electronically flagging, tagging, or linking at least a part of the section of text with the graphical representation of the one or more depicted features identified based on a semantic analysis of the section of text.
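The flagging/tagging/linking of a text span to its graphical representation could take a form such as the following; the character-offset span representation and record layout are illustrative assumptions:

```python
# Sketch: electronically link a span of the electronic text (here by
# character offsets) to one or more graphical representations, so the
# link can be resolved when the section is subsequently viewed.
class TextImageAssociations:
    def __init__(self):
        self.links = []

    def associate(self, start, end, image_ref):
        # Tag the half-open span [start, end) with an image reference.
        self.links.append({"span": (start, end), "image": image_ref})

    def images_for(self, position):
        # All images whose tagged span covers the given text position.
        return [l["image"] for l in self.links
                if l["span"][0] <= position < l["span"][1]]

assoc = TextImageAssociations()
assoc.associate(120, 180, "lighthouse_410.png")
hits = assoc.images_for(150)
```

A reading application could use `images_for` to decide when to highlight the text (as in figure 4d) or to embed the image in-line (as in figure 4c); multiple calls to `associate` on the same span would support the multiple representations of figures 5b-5d.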
Figure 10 illustrates schematically a computer/processor readable medium 1000 providing a program according to an example of this disclosure. In this example, the computer/processor readable medium is a disc such as a Digital Versatile Disc (DVD) or a compact disc (CD). In other examples, the computer readable medium may be any medium that has been programmed in such a way as to carry out the functionality herein described. The computer program code may be distributed between the multiple memories of the same type, or multiple memories of a different type, such as ROM, RAM, flash, hard disk, solid state, etc. Any mentioned apparatus/device/server and/or other features of a mentioned apparatus/device/server may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched off) state and only load the appropriate software in the enabled (e.g. on) state. The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.
In some examples, a mentioned apparatus/device/server may be pre-programmed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality. Advantages associated with such examples can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
Any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
Any "computer" described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some examples one or more of any mentioned processors may be distributed over a plurality of devices. The same or different processor/processing elements may perform one or more functions described herein.
The term "signalling" may refer to one or more signals transmitted as a series of transmitted and/or received electrical/optical signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received by wireless or wired communication simultaneously, in sequence, and/or such that they temporally overlap one another.
With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM etc), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way to carry out the inventive function.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed aspects/examples may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.
While there have been shown and described and pointed out fundamental novel features as applied to examples thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the scope of the disclosure. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the disclosure. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or examples may be incorporated in any other disclosed or described or suggested form or example as a general matter of design choice. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims (27)

  1. CLAIMS1. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on a semantic analysis of a section of electronic text, provide for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
  2. The apparatus of claim 1, wherein the semantic analysis identifies one or more physical aspects of the one or more features depicted in the section of electronic text and the graphical representation provides a graphical representation of one or more of the physical aspects of the one or more depicted features.
  3. The apparatus of claim 2, wherein the one or more physical aspects comprise one or more of size, shape, colour and shading of the one or more depicted features.
  4. The apparatus of claim 1, wherein the semantic analysis identifies a spatial interrelationship of one or more features depicted in the section of electronic text with other features depicted in the electronic text and the graphical representation provides a graphical representation of the spatial interrelationship of one or more of the depicted features.
  5. The apparatus of claim 4, wherein the spatial interrelationship comprises one or more of relative size, relative position and spacing between the depicted features.
  6. The apparatus of claim 1, wherein the semantic analysis is one of a current semantic analysis performed on the section of text and a previous semantic analysis performed on the section of text.
  7. The apparatus of claim 6, wherein the previous semantic analysis was performed by one of: an apparatus associated with the same user; an apparatus associated with a different user; and an apparatus associated with a different user having a matching user-profile with the current user.
  8. The apparatus of claim 1, wherein the apparatus is configured to allow multiple graphical representations of the section of electronic text to be associated with the section of electronic text for subsequent reference.
  9. The apparatus of claim 8, wherein the apparatus is configured to provide the subsequent reference by allowing user-scrolling through the multiple graphical representations associated with the section of electronic text.
  10. The apparatus of claim 8, wherein the apparatus is configured to allow the multiple graphical representations of the section of text to be associated for subsequent reference with the section of electronic text in a ranking order according to a user-profile match criterion with the user-profile of the current user.
  11. The apparatus of claim 1, wherein the one or more features are at least one of one or more physical aspects of a scene or of a character depicted in the electronic text.
  12. The apparatus of claim 1, wherein the apparatus is configured to provide for the association by presenting the graphical representation of the one or more features for one or more of user-selection, user-confirmation or user-manipulation, to initiate the association.
  13. The apparatus of claim 12, wherein the user-manipulation allows the user to change a physical characteristic of the graphical representation of the one or more features.
  14. The apparatus of claim 12, wherein the apparatus is configured to present the graphical representation using a bank of template graphical representations.
  15. The apparatus of claim 1, wherein the apparatus is configured to provide for the association by presenting multiple optional graphical representations of the one or more features for one or more of user-selection, user-confirmation or user-manipulation to initiate the association.
  16. The apparatus of claim 1, wherein the apparatus is configured to associate earlier or subsequent graphical representations depicting the one or more features, in the section of electronic text or in other sections of the electronic text, in a timeline.
  17. The apparatus of claim 16, wherein the apparatus is configured to allow for user selection of a particular graphical representation in the timeline and cause the corresponding page of electronic text associated with the particular graphical representation to be displayed.
  18. The apparatus of claim 1, wherein the apparatus is configured to allow for the graphical representation to be used as a virtual bookmark to facilitate location of the section of text associated with the graphical representation.
  19. The apparatus of claim 1, wherein the subsequent reference allows multiple subsequent simultaneous or time-spaced viewers of the section of text to view previously associated graphical representations for the section of text in association with the subsequent viewing of the section of text.
  20. The apparatus of claim 1, wherein the apparatus is configured to perform the semantic analysis of the section of text and/or receive the semantic analysis of the section of text from another apparatus.
  21. The apparatus of claim 1, wherein the section of text is a section from a narrative text, an e-book, an e-magazine, a website, a social media message, and user-generated electronic content.
  22. The apparatus of claim 1, wherein the graphical representation comprises one or more of: a 2-D image; a 3-D image; and a 3-D navigable virtual landscape.
  23. The apparatus of claim 1, wherein the section of electronic text is from an e-book, and the apparatus is configured to, based on the semantic analysis of the section of electronic text from the e-book, provide for association of a graphical representation of one or more features depicted in the section of electronic text from the e-book for subsequent reference with that section of electronic text from the e-book.
  24. The apparatus of claim 1, wherein the apparatus is: a portable electronic device, a mobile telephone, a smartphone, a personal digital assistant, an e-book, a tablet computer, a surface computer, a navigation device, a desktop computer, or a module for the same.
  25. A method comprising: based on a semantic analysis of a section of electronic text, providing for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
  26. A computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor, perform at least the following: based on a semantic analysis of a section of electronic text, provide for association of a graphical representation of at least one or more features depicted in the section of electronic text for subsequent reference with the section of electronic text.
  27. An apparatus substantially as described herein with reference to and as illustrated in the accompanying figures.
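The method of claim 25 (perform a semantic analysis of a text section, then associate a graphical representation of the depicted features with that section for subsequent reference) can be sketched in code. This is an illustrative sketch only, not the patented implementation: the keyword-based analyser and the template bank (`TEMPLATE_BANK`, `semantic_analysis`, `associate_graphics`) are hypothetical stand-ins for whatever semantic analysis and graphics templates (cf. claim 14) an actual apparatus would use.

```python
import re

# Hypothetical bank of template graphical representations (claim 14),
# keyed by feature name, each recording physical aspects (claims 2-3).
TEMPLATE_BANK = {
    "castle": {"shape": "rectangle", "colour": "grey", "size": "large"},
    "tree": {"shape": "triangle", "colour": "green", "size": "medium"},
    "river": {"shape": "curve", "colour": "blue", "size": "long"},
}

def semantic_analysis(section: str) -> list[str]:
    """Naive stand-in for semantic analysis: identify depicted
    features by matching words against the template bank."""
    words = re.findall(r"[a-z]+", section.lower())
    return [w for w in words if w in TEMPLATE_BANK]

def associate_graphics(section: str) -> dict:
    """Associate graphical representations of the identified features
    with the text section for subsequent reference."""
    features = semantic_analysis(section)
    return {
        "section": section,
        "graphics": [TEMPLATE_BANK[f] for f in features],
    }

result = associate_graphics("A grey castle stood beside the river.")
print([g["shape"] for g in result["graphics"]])  # → ['rectangle', 'curve']
```

The association returned here could then back the subsequent-reference uses the claims describe, e.g. a virtual bookmark (claim 18) or a scrollable set of representations (claim 9).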
GB1318285.2A 2013-10-16 2013-10-16 An apparatus for associating images with electronic text and associated methods Withdrawn GB2519312A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1318285.2A GB2519312A (en) 2013-10-16 2013-10-16 An apparatus for associating images with electronic text and associated methods

Publications (2)

Publication Number Publication Date
GB201318285D0 GB201318285D0 (en) 2013-11-27
GB2519312A true GB2519312A (en) 2015-04-22

Family

Family ID: 49680115

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1318285.2A Withdrawn GB2519312A (en) 2013-10-16 2013-10-16 An apparatus for associating images with electronic text and associated methods

Country Status (1)

Country Link
GB (1) GB2519312A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11514244B2 (en) * 2015-11-11 2022-11-29 Adobe Inc. Structured knowledge modeling and extraction from images
CN113536006B (en) * 2021-06-25 2023-06-13 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and computer product for generating picture

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010049596A1 (en) * 2000-05-30 2001-12-06 Adam Lavine Text to animation process
EP1395025A1 (en) * 2002-08-26 2004-03-03 Hitachi, Ltd. Interactive animation mailing system
EP1703468A1 (en) * 2004-01-27 2006-09-20 Matsushita Electric Industrial Co., Ltd. Image formation device and image formation method
US20060217979A1 (en) * 2005-03-22 2006-09-28 Microsoft Corporation NLP tool to dynamically create movies/animated scenes
WO2009109039A1 (en) * 2008-03-07 2009-09-11 Unima Logiciel Inc. Method and apparatus for associating a plurality of processing functions with a text
EP2256642A2 (en) * 2009-05-28 2010-12-01 Samsung Electronics Co., Ltd. Animation system for generating animation based on text-based data and user information
US20110040555A1 (en) * 2009-07-21 2011-02-17 Wegner Peter Juergen System and method for creating and playing timed, artistic multimedia representations of typed, spoken, or loaded narratives, theatrical scripts, dialogues, lyrics, or other linguistic texts
EP2290924A1 (en) * 2009-08-24 2011-03-02 Vodafone Group plc Converting text messages into graphical image strings
US20130093774A1 (en) * 2011-10-13 2013-04-18 Bharath Sridhar Cloud-based animation tool

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10318559B2 (en) 2015-12-02 2019-06-11 International Business Machines Corporation Generation of graphical maps based on text content
US10719545B2 (en) 2017-09-22 2020-07-21 Swarna Ananthan Methods and systems for facilitating storytelling using visual media
EP3467674A1 (en) * 2017-10-06 2019-04-10 Disney Enterprises, Inc. Automated storyboarding based on natural language processing and 2d/3d pre-visualization
US10977287B2 (en) 2017-10-06 2021-04-13 Disney Enterprises, Inc. Automated storyboarding based on natural language processing and 2D/3D pre-visualization
US11269941B2 (en) 2017-10-06 2022-03-08 Disney Enterprises, Inc. Automated storyboarding based on natural language processing and 2D/3D pre-visualization
EP4180990A1 (en) * 2017-10-06 2023-05-17 Disney Enterprises, Inc. Automated storyboarding based on natural language processing and 2d/3d pre-visualization
EP3547160A1 (en) * 2018-03-27 2019-10-02 Nokia Technologies Oy Creation of rich content from textual content
WO2019185689A1 (en) * 2018-03-27 2019-10-03 Nokia Technologies Oy Creation of rich content from textual content
US11302047B2 (en) 2020-03-26 2022-04-12 Disney Enterprises, Inc. Techniques for generating media content for storyboards
WO2023234859A3 (en) * 2022-06-02 2024-02-08 Lemon Inc. Method and system for creating social media content collections

Also Published As

Publication number Publication date
GB201318285D0 (en) 2013-11-27

Similar Documents

Publication Publication Date Title
GB2519312A (en) An apparatus for associating images with electronic text and associated methods
US10580319B2 (en) Interactive multimedia story creation application
JP5872753B2 (en) Server apparatus, electronic apparatus, electronic book providing system, electronic book providing method of server apparatus, electronic book display method of electronic apparatus, and program
US20140089826A1 (en) System and method for a universal resident scalable navigation and content display system compatible with any digital device using scalable transparent adaptable resident interface design and picto-overlay interface enhanced trans-snip technology
US11531442B2 (en) User interface providing supplemental and social information
US20150277686A1 (en) Systems and Methods for the Real-Time Modification of Videos and Images Within a Social Network Format
WO2013181171A1 (en) System and method for a universal resident scalable navigation and content display system compatible with any digital device using scalable transparent adaptable resident interface design and picto-overlay interface enhanced trans -snip technology
WO2011083497A1 (en) Map topology for navigating a sequence of multimedia
US20170235706A1 (en) Effecting multi-step operations in an application in response to direct manipulation of a selected object
WO2016201571A1 (en) System and method for generating an electronic page
US11630940B2 (en) Method and apparatus applicable for voice recognition with limited dictionary
CN111158573B (en) Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
JP7177175B2 (en) Creating rich content from text content
Fischer et al. Brassau: automatic generation of graphical user interfaces for virtual assistants
Veg Anatomy of the ordinary: new perspectives in Hong Kong independent cinema
US10775877B2 (en) System to generate a mixed media experience
KR102669324B1 (en) Method, apparatus and system for providing virtual gallery service based on metaverse platform
US10257561B2 (en) Time-line based digital media post viewing experience
Morson Learn design for iOS development
Amr et al. Practical D3. js
Chambers MacBook All-in-one for Dummies
Murray My Windows 10 (includes video and Content Update Program)
Vandome Android Tablets for Seniors in easy steps: Covers Android 7.0 Nougat
Vandome Android Tablets for Seniors in easy steps: Fully illustrated using Google Nexus. Covers Android Jelly Bean.
Westfall et al. Beginning Android Web Apps Development: Develop for Android Using HTML5, CSS3, and JavaScript

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: NOKIA TECHNOLOGIES OY

Free format text: FORMER OWNER: NOKIA CORPORATION

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)