WO2012146627A1 - Method and apparatus for collaborative upload of content - Google Patents

Method and apparatus for collaborative upload of content

Info

Publication number
WO2012146627A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
viewer
content object
data
data representing
Prior art date
Application number
PCT/EP2012/057585
Other languages
English (en)
Inventor
Bart Van Coppenolle
Philip Vandormael
Original Assignee
Right Brain Interface N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/EP2011/068485 (WO2012052559A1)
Application filed by Right Brain Interface N.V.
Priority to CN201280030109.9A (CN103718205A)
Priority to SG2013079389A (SG194633A1)
Priority to CA2834351A (CA2834351A1)
Priority to EP12716455.6A (EP2702539A1)
Priority to JP2014506851A (JP2014516503A)
Priority to KR1020137031406A (KR20140041500A)
Publication of WO2012146627A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 Server based end-user applications
    • H04N21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2747 Remote storage of video programs received via the downstream path, e.g. from the server
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/458 Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4751 End-user interface for inputting end-user data for defining user accounts, e.g. accounts for children
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/47815 Electronic shopping
    • H04N21/4782 Web browsing, e.g. WebTV
    • H04N21/4784 Supplemental services receiving rewards
    • H04N21/482 End-user interface for program selection
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4886 Data services for displaying a ticker, e.g. scrolling banner for news, stock exchange, weather data
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data

Definitions

  • the disclosure relates to human behavior, and, more specifically, to a method and apparatus for collaborative upload of content to enable time shifted viewing thereof.
  • the human brain comprises a left hemisphere and a right hemisphere, which each have a distinct personality or consciousness and a distinct way of processing information. For simplicity, these will be referred to as the left brain and the right brain, respectively.
  • the left brain is known for analytic, categorical thinking and textual, sequential processing.
  • the right brain is known for synthetic, intuitive, holistic thinking and visual-spatial, parallel processing. Therefore, some processes or even simple exposure to certain stimuli will rather activate the right brain and some other rather the left brain.
  • textual information tends to activate the left brain, whereas visual-spatial information tends to activate the right.
  • the act of searching through menus tends to activate the left brain, whereas navigating with e.g. a joystick through a natural landscape or space tends to activate the right brain. Therefore, experience interfaces will activate the left or right brain to differing degrees, depending on the type of elements used for interfacing, e.g. visual-spatial or textual elements.
  • such left/right activation will also depend on the type of actions and thinking that are required for using these elements.
  • hemispheric brain activity can also be linked to human emotions and moods.
  • a product interface can support a certain mood, depending on the way its front-end and back-end are designed, and depending on the processes required to operate such interface.
  • Recommendation technology is used to help people find products they like on the internet, through other media or elsewhere.
  • recommenders support the trend towards higher personalization of experiences, and thus make up an important part of the back- and front-end of today's interfaces for internet, media or television.
  • There are some issues with today's recommenders.
  • Some platforms, like television, hardly use recommender technology.
  • Television content providers, video on demand (VOD) content providers or broadcasters preselect programs based on assumptions about the wishes of groups. They provide a selection of programs they believe will most appeal to the class of viewers who subscribe to a channel or a group of channels.
  • Specific genre channels, e.g. cooking channels, provide some individualization compared to the more general channels, but they too contain a pre-selection of programs.
  • viewers need to actively search for programs: they need to search through the Electronic Program Guide and menus with their remote control to find something of interest, forcing them into a frustrated mood.
  • the recommenders that are currently used fail in a very important aspect: they do not consider people's desired mood. Therefore they cannot recommend content that is specifically suited to support a person's desired mood.
  • Neuropsychology teaches that human emotions and moods are two-dimensional or bivalent, rather than bipolar. In other words, people may feel attracted (positive valence) or repulsed (negative valence) by content, or they may experience mixed emotions, such as when watching a bloody surgery that both fascinates and disgusts them. It is the relative strength of our positive and negative emotions that determines our mood. An overall positive emotion, like passion, does not imply a lack of negative emotions. On the contrary, when a person is passionate about something or someone, they typically have both high positive and high negative emotions. A relaxed mood, on the other hand, is characterized by high positive and low negative emotions.
  • the second group provides roughly personalized recommendations for which the personalization is trivial, for example a recommendation based on the favorite genre of the viewer. Often this second group of recommendations is based on demographics like age, gender, occupation, family situation etc.
  • the advantage of this group is that the results are partially personalized; the disadvantage, however, is that no high level of personalization can be reached since the available profile data is limited.
  • the third group provides recommendations with the highest level of personalization, with two techniques often used in combination:
  • Content-based recommendations: the user is recommended items similar to the ones the user preferred in the past.
  • these algorithms analyze the content, whereby items are modeled by a set of features that describe the content.
  • Collaborative filtering (CF)
  • Some current web TV systems allow the user to create virtual channels. However, these systems require the user to go through menus and type in key words using a keyboard-like device, while sitting in front of their television. This does not support the relaxing nature of the natural TV viewing experience. On the contrary, it often jeopardizes relaxation and sometimes even causes frustration.
  • Recording devices which enable time shifted viewing have physical restrictions associated with the system, such as the number of programs which may be recorded or the number of programs which may be simultaneously recorded, but the most important disadvantage is the hassle and frustration accompanying the programming of recordings and the selection and replaying of recorded content.
  • Catch-Up TV is available for time shifting, but its functionality is limited and its use does not support the relaxing nature of the natural TV viewing experience. Both the selection of time shifted content and the programming of time shifting devices are not relaxing, involving too much left brain activity.
  • mind mapping software allows one to organize material in a more visual-spatial way, using branched structures, colors, some images, etc.
  • Another example is a web tool like Pearltrees that allows one to organize and access e.g. all one's material on a specific hobby into one or more branched, schematic trees.
  • a fourth embodiment concerns a system for trading securities.
  • Securities are typically optimally bought from sellers who are in panic, and sold to buyers who are passionate about these securities.
  • current systems are not able to detect panic or passion in economic markets or individual trading parties at the time these moods emerge, nor are they able to automatically buy or sell securities based on such knowledge.
  • an automatic trading system for securities that takes into account buyers' and sellers' moods and performs automated trading activities accordingly.
  • M&A mergers and acquisitions
  • a typical example consists of businesses trying to initiate a buying cycle by sending marketing brochures or emails full of technical specifications to prospects who did not yet buy into the vision behind the offering.
  • Another example consists of businesses, who succeed in selling a vision, in making buyers willing to change, but who subsequently fail to hedge the buyers' private or social fears.
  • the disclosure relates to a neuropsychological modeling technique and resulting mathematical model for human emotions and moods applied in buyer, seller, user and experience psychology, more specifically applied in experience, interface, platform, process, and back-end design of products, processes or services.
  • Natural experience interfaces are based on specific characteristics of the left and right consciousness, applied in left brain, right brain or tandem interfaces.
  • a natural user experience mood is selected, that dictates the design of the user interface as well as the backend of the product, process or service concerned.
  • the model or modeling technique therefore forms the basis of the design of the natural user experience, its user interface, its product process or workflow as well as its back-end.
  • neuropsychological modeling technique or model or the natural user experience or its interface or its back-end process may be applied in several embodiments including, but not limited to:
  • 2) A tandem interface for reading and/or researching and/or writing, 3) A tandem user interface for an automatic internet enabled buying system for recurrent consumer purchases,
  • Experiences are understood as moods that naturally and optimally occur in certain processes. Interfaces between those processes and the experiences in the human brain are optimally designed to support the natural and optimal experience in each phase of the process. Therefore left, right and tandem interfaces are used, featuring specific cortical solicitation, eliciting specific moods.
  • six other inventions are presented to support the specific practical relevancy and technical execution as embodiments of the modeling technique in specific applications.

Representation of human moods in two-dimensional space
  • a system and technique for modeling human moods comprises a representation of human moods in a 2-dimensional space with one dimension representing emotions with negative valence and the other dimension representing emotions with positive valence.
  • the respective emotions may be given alternative naming, e.g. 'fear' or 'reluctance' for the emotions with negative valence, and 'desire' or 'attraction' for the emotions with positive valence.
  • One or more of multiple basic human moods may, depending on the application, be substituted by one of these variants.
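
As an illustration of the bivalent representation above, the sketch below (Python) treats a mood as a point in a 2-dimensional space with a positive-valence axis and a negative-valence axis, and maps it to an angle and a magnitude on the mood disk. The atan2 mapping, the baseline centering and the function names are assumptions for illustration; only the zone boundaries (relaxed at -3π/8 to -π/8, passionate/controlled at -π/8 to +3π/8, panic around 3π/4) are taken from the areas quoted elsewhere in this document.

    import math

    def mood_angle_and_strength(desire: float, fear: float,
                                desire_baseline: float = 0.5,
                                fear_baseline: float = 0.5):
        """Map a bivalent emotion pair onto the mood disk (illustrative formulas).

        desire: strength of emotions with positive valence (0..1)
        fear:   strength of emotions with negative valence (0..1)
        """
        theta = math.atan2(fear - fear_baseline, desire - desire_baseline)
        m = math.hypot(desire - desire_baseline, fear - fear_baseline)
        return theta, m

    def mood_zone(theta: float) -> str:
        """Name the mood area of the disk, using the angle ranges quoted in the text."""
        if -3 * math.pi / 8 <= theta < -math.pi / 8:
            return "relaxed"                    # high positive, low negative emotions
        if -math.pi / 8 <= theta < 3 * math.pi / 8:
            return "passionate/controlled"      # both valences significant
        if abs(theta - 3 * math.pi / 4) <= math.pi / 8:
            return "panic"                      # negative valence dominates
        return "other"

    # Example: strong desire, low fear maps into the relaxed area.
    print(mood_zone(mood_angle_and_strength(desire=0.9, fear=0.2)[0]))
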
  • a system and technique for modeling human moods is based on the premise that moods that are constituted of an emotion component with a more than average negative valence lead to increased activity in the left brain frontal cortex in the absence of new emotionally associated sensory input.
  • a system and technique for modeling human moods is based on the premise that moods that are constituted of an emotion component with a more than average positive valence lead to increased activity in the right brain frontal cortex in the absence of new emotionally associated sensory input.
  • this also implies that moods which are constituted of both an emotion component with positive valence and one with negative valence will lead to increased activity in the cortex of both hemispheres in the absence of new emotionally associated sensory input.
  • a system and technique for relaxing a subject viewer comprises stimulation of the right brain (*) through exposure to predominantly visual data, with minimal textual or analytical data, like tables.
  • Such technique may be applied, e.g. to a television experience system, in a wellness setting, etc.
  • a system and technique for exciting a subject viewer into a passionate mood by stimulating both the right (*) and left brain through exposure to a mixture or balance of visual data and textual or analytical data (like graphs, tables, lists, written reviews, etc.).
  • Such technique may be applied to, e.g. websites, games, sports related products and in educational products.
  • the correlation of hemispheric asymmetry to mood theory can be applied to sales & marketing models and strategies.
  • a method for increasing business to customer sales comprises bringing the potential customer into a relaxed mood, by stimulating the right brain cortex (*) and not the left. This is done by using mostly visual data (e.g. visually appealing packaging) appealing to the potential customer's positive emotions, and limiting the amount of textual or analytical data.
  • Such technique may be applied to, e.g. sales or marketing of clothes, shampoo, etc. but also to the design of business-to-consumer websites and on-line stores.
  • a method for increasing business to business sales comprises bringing the potential customer into a passionate mood, by exciting both his right (*) and left brain cortex. This is done by using both visual data and textual or analytical data, like reviews, tables, etc., the visual data helping the business open up to change and create a vision for a better future, the analytical data helping get control over any negative emotions, like personal and social fears.
  • This may e.g. translate into packaging with a nice, but drier layout that uses more lines. It may also e.g. translate into marketing material that includes both video material and written consumer testimonials.
  • This insight may e.g. be applied to the sales and marketing of ICT products, machinery equipment, financial products, etc. (*) It is important to note that one cannot create desire in another person, but one can nurture a seed of desire that is already present.
  • a system and technique uses a bivalent (not bipolar) rating system for video and other content (like books and art), in which one rating parameter has a value that expresses the strength of emotions with positive valence towards specific content, the other parameter has a value that expresses the strength of emotions with negative valence towards that same content.
  • the respective emotions may be given alternative naming, e.g. 'fear' or 'reluctance' for the emotions with negative valence, and 'desire' or 'attracted' or 'like it' for the emotions with positive valence.
  • a system and technique uses a multivalent rating system that incorporates the bivalent rating system described above.
  • a system and technique uses a ranking application (for video or other content) that is at least partly based on the bivalent or multivalent rating system described above.
  • a system and technique uses a recommender application (for video or other content) that is at least partly based on the bivalent or multivalent rating system described above.
  • a system and technique uses a metadata file that contains information on the viewing preferences of one or more viewers of video content and in which the preferences are expressed using the multivalent or bivalent rating system described above.
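
A minimal sketch of how such a bivalent rating and a per-viewer preference metadata file could be represented. The field names and the JSON layout are assumptions for illustration, not a format defined by the disclosure.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class BivalentRating:
        """One rating: two independent values rather than a single bipolar score."""
        content_id: str
        positive_valence: float   # strength of attraction / 'like it' (0..1)
        negative_valence: float   # strength of reluctance / 'fear' (0..1)

    def preferences_metafile(viewer_id: str, ratings: list) -> str:
        """Serialize a viewer's preferences as a metadata file (illustrative layout)."""
        return json.dumps({
            "viewer": viewer_id,
            "ratings": [asdict(r) for r in ratings],
        }, indent=2)

    # Mixed emotions are representable: both values can be high at once.
    print(preferences_metafile("viewer-1", [
        BivalentRating("surgery-documentary", positive_valence=0.8, negative_valence=0.7),
    ]))
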
  • a system and technique uses an interface that presents viewable content and information across a set of interface devices in a manner which mimics typical human brain task delineation, distinguishing between visual and textual tasks for the different devices.
  • Such a system and technique may comprise:
  • a first interface which presents visual content, with minimal or no text and which may be implemented with a traditional television display.
  • a second interface which presents a content surfing interface and purchasing interface and may be implemented on a Personal Digital Assistant (PDA) or smart phone, tablet computer or laptop computer.
  • Optional extra user interfaces which present mainly the textual based interfaces for content surfing and purchasing, as well as visual content and may be implemented with a traditional personal computer, including a desktop, tablet computer or laptop system, as well as other systems.
  • the two or more interfaces may be viewable simultaneously on separate devices. They may also be sequentially accessible from one single device.
  • a system and technique uses an interface that consists of one or more left brain interfaces that can operate in tandem with a (right brain) television interface, and for which applies:
  • the television interface and its operation allow the user easy access to virtual channels, without the need to go through any menus or to type letters, numbers or symbols on a device with keyboard-like functionality. Instead, the user can scroll between any classic or virtual channels and also within these channels (i.e. between the different content objects in these channels), similar to browsing or surfing, using only a very limited number of buttons or similar touchscreen operations.
  • the virtual channels may be a social medium channel, e.g. Facebook, Twitter, LinkedIn, etc., a channel of which the user himself is the Channel Director, etc.
  • the one or more left brain interfaces operating in tandem with the television interface may be implemented on a smartphone, tablet, laptop, PC, etc.
  • This interface allows the management of the virtual channels, including such things as: setting a channel's order number, choosing the content of the channel, choosing which Facebook users can post recommendations on the user's Facebook channel, etc.
  • the television interface is designed to keep the user relaxed, i.e. in the -3π/8 to -π/8 area of the mood square/disk.
  • the left brain interface is designed in such a way as to keep the user in a passionate or controlled mood, represented by the -π/8 to +3π/8 area of the mood square/disk.
  • a system and technique uses advertisement accounts for some or all of its TV user accounts and broadcasters.
  • advertisement should not disturb the natural relaxing nature of the TV viewing experience. Therefore being able to watch advertisements of interest when TV viewers want it is a design imperative for the relaxing TV experience.
  • a TV commercial or other advertisement is more valuable if it is more personalized to the interest of the viewer, when the viewer watches it at his/her own convenience, in a relaxed mood, when the viewer pulls the advertisement rather than having the advertisement pushed to the viewer, and of course if the TV user actually watches the advertisement, instead of simply taking a break.
  • the credit model takes these value creation parameters into account, by crediting the advertisement account. For each viewer or viewer profile or each family or home or other group validly subscribed, combined with each broadcaster or group of cooperating broadcasters, a separate advertisement account is kept. Each advertisement account is credited using the advertiser value credits model, potentially but not necessarily including a monetary payment system to credit the advertisement account.
  • Such same advertisement account is then debited using a broadcaster cost or selling price debit model, in such a way, that:
  • the fast-forwarding of an advertisement by a viewer or viewer group, or the automatic skipping of an advertisement leads to a lowering of the credits on the viewer or viewer group's advertisement account with that broadcaster or group of broadcasters, based on a cost or selling price model or based on an advertiser value model, or a combination of both.
  • the credits on the viewer's or viewer group's advertisement account with that broadcaster or group of broadcasters increase based on an advertiser and/or broadcaster value model.
  • Such advertiser and/or broadcaster value model may include: the length of the advertisement, its level of personalization, whether it is embedded in the broadcasted content or separately viewed, its degree of viewer pull or push, the viewer's mood estimate relative to the relaxed mood, the verification of the actual viewing etc.
  • a viewer feedback system can be implemented.
  • Such feedback system may e.g. consist of a message, in the form of a ticker line passing by at the bottom of the TV screen, asking the viewer to press a specific number on his remote, if he is watching the advertisement.
  • the number to press optimally changes from advertisement to advertisement, in a random or other not easily predictable way.
  • the message is optimally displayed towards the middle or end of an advertisement, rather than at its start, however not systematically, to prevent abuse.
  • the system supports the function to block the viewer or viewer group from fast forwarding commercials and/or automatically skipping commercials, for that broadcaster or group of broadcasters for whose account the critical low level has been reached, until the viewer or viewer group earns sufficient new credits to reach a critical switch-on level, e.g. by watching advertisements, or by paying a sum of money.
  • VOD content or any other type of purchase which contributes to the advertiser or broadcaster value creation by means of commission on such purchase or otherwise, may also result in an increase of credits on an advertisement account.
  • broadcasters can earn a commission on VOD or other sales induced by special purpose advertisement allowing for on-line TV ordering and in return grant credits on the viewer/purchaser's advertisement account.
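
The credit/debit mechanics described above might look roughly like the following sketch. The weighting factors, thresholds, method names and event types are illustrative assumptions, not values specified by the disclosure.

    class AdvertisementAccount:
        """Per (viewer or viewer group, broadcaster or broadcaster group) credit account."""

        def __init__(self, critical_low: float = 0.0, switch_on: float = 10.0):
            self.credits = 0.0
            self.critical_low = critical_low   # below this, skipping/fast-forwarding is blocked
            self.switch_on = switch_on         # credits needed to re-enable skipping

        def credit_for_viewing(self, length_s: float, personalization: float,
                               pulled: bool, viewing_verified: bool) -> None:
            """Credit according to an (assumed) advertiser/broadcaster value model."""
            value = length_s / 30.0
            value *= 1.0 + personalization        # more personalized ads create more value
            value *= 1.5 if pulled else 1.0       # viewer-pulled ads are worth more than pushed ones
            value *= 1.0 if viewing_verified else 0.5
            self.credits += value

        def debit_for_skipping(self, cost: float) -> None:
            """Debit when an ad is fast-forwarded or automatically skipped."""
            self.credits -= cost

        def skipping_allowed(self) -> bool:
            return self.credits > self.critical_low

    # Purchases induced by on-line TV ordering could also credit the account (commission model).
    account = AdvertisementAccount()
    account.credit_for_viewing(length_s=30, personalization=0.8, pulled=True, viewing_verified=True)
    account.debit_for_skipping(cost=1.0)
    print(account.credits, account.skipping_allowed())
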
  • a system and technique comprises an interface that presents content that, in general, is mostly textual or analytic, but may also be visual, in such a manner that the content can be accessed in two alternative ways at the same moment or approximately at the same moment, i.e. one shortly after the other within the same overall experience, including the following:
  • the user is kept in the spectrum of moods covered by the -3π/8 to +3π/8 area of the mood disk.
  • Possible content may be articles, papers, e-books, reviews, brochures and the like, as well as images, video material, etc.
  • Any number of metaphors may be utilized in the visual design of such an interface.
  • One embodiment utilizes a landscape metaphor in which forests and fields and trees support the associative, exploring way of accessing mostly new content, and in which the houses, pieces of land that have been divided out, etc. support the categorical, analytic way of accessing mostly known content, one wants to retrieve.
  • a system and technique comprises an automatic order placement system which utilizes an e-reader device having an interface which enables the user to buy material online using a few simple operations, e.g. by simply pressing OK. The entering of bank credentials, choosing a preferred supplier, etc. can be done beforehand, through the left brain interface.
  • a system and technique enables a change in the display of a figure on an e-reader from black-and-white to color by clicking or double clicking the figure, or by performing similar operations on a touch screen e-reader device. After such operation, either the selected figure, or all figures, may be displayed in color.
  • a system and technique enables recurrent consumer purchases in the following manner: consumers use their smartphone to collect information, which identifies a consumer product in a unique way, e.g. taking a picture of the barcode of the product. This information, or a processed version of it, is subsequently uploaded to a central inventory management system that automatically places orders at a supplier of choice.
  • the smartphone interface works in tandem with a second interface, which is typically a more left-brain interface, meaning it contains more textual, analytical or menu- based items, rather than visual or graphical elements.
  • the second interface allows such things as the management of the choice of suppliers and products, the choice of a payment method, the entrance of bank credentials, etc.
  • This technique is linked to our mood model in the following way:
  • the recurrent purchasing of consumer goods like shampoo, butter and toilet paper requires the hassle of such things as remembering what needs to be bought and/or making a shopping list, going to a shop (either a classic shop or webshop), searching the needed product in the shop, etc.
  • the disclosed system and technique decreases this type of hassle, and thus the negative emotions associated with it, so the consumer, while operating the smartphone interface, can remain in a relaxed mood, represented by the -3π/8 to -π/8 area of the mood disk.
  • the second interface, i.e. the left brain interface, is designed in such a way as to bring/keep the consumer in a passionate or dominant mood, with the word 'passionate' used in the sense of 'positively focused' and the word 'dominant' used in the sense of 'consciously in control'.
  • the consumer's mood is then in the -π/8 to +3π/8 area of the mood disk.
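
A sketch of the recurrent-purchase flow described above: the smartphone captures a product identifier (e.g. a barcode picture reduced to a code), uploads it to the central inventory management system, and the system places an order with the supplier chosen earlier through the left brain interface. The class and method names are hypothetical; the disclosure does not prescribe a particular API.

    class InventoryManagementSystem:
        """Central system that turns scanned product identifiers into orders (illustrative)."""

        def __init__(self):
            self.preferred_supplier = {}   # product_id -> supplier, managed via the second interface
            self.orders = []

        def configure_supplier(self, product_id: str, supplier: str) -> None:
            # done beforehand on the left brain (textual/menu-based) interface
            self.preferred_supplier[product_id] = supplier

        def handle_scan(self, product_id: str, quantity: int = 1) -> None:
            # called when the smartphone uploads a scanned barcode (the low-hassle step)
            supplier = self.preferred_supplier.get(product_id)
            if supplier is None:
                return  # unknown product: requires one-time setup on the second interface
            self.orders.append({"product": product_id, "qty": quantity, "supplier": supplier})

    ims = InventoryManagementSystem()
    ims.configure_supplier("ean:5410000123456", "preferred-webshop")
    ims.handle_scan("ean:5410000123456")   # the consumer just scans; the order is placed automatically
    print(ims.orders)
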
  • a system and technique for trading securities detects the occurrence of panic and passion in economic markets, by modeling the purchasing and selling behavior of traders, using two independent emotional parameters per trader and/or per security, with one parameter having a positive valence and one having a negative valence.
  • Panic occurs when, for a significant portion of traders, and for a significant portion of securities, the parameter with negative valence is significantly more important than the parameter with positive valence, bringing the angle in the emotion square/disk to 3π/4 ± π/8.
  • Passion occurs when, for a significant portion of traders, and for a significant portion of securities, the parameters with positive and negative valence are both significant, bringing the angle in the emotion square/disk to π/4 ± π/8.
  • the trading system automatically buys (or proposes to buy) securities from traders in a panic mood, and sells (or proposes to sell) securities to traders in a passionate mood, taking into account some personal preferences of the user of the trading system.
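
A sketch of how the panic/passion detection could be expressed, reusing the illustrative mood-angle mapping assumed earlier: each trader/security pair carries an independent positive-valence and a negative-valence parameter, and the market is flagged when a significant share of those pairs falls into the 3π/4 ± π/8 (panic) or π/4 ± π/8 (passion) sectors. The thresholds and the angle formula are assumptions.

    import math

    def angle(positive: float, negative: float, baseline: float = 0.5) -> float:
        # same illustrative atan2 mapping as in the earlier mood sketch
        return math.atan2(negative - baseline, positive - baseline)

    def sector_share(pairs, center: float, half_width: float = math.pi / 8) -> float:
        """Fraction of (positive, negative) parameter pairs whose angle lies in a sector."""
        hits = sum(1 for p, n in pairs if abs(angle(p, n) - center) <= half_width)
        return hits / len(pairs) if pairs else 0.0

    def market_mood(pairs, significant: float = 0.3) -> str:
        if sector_share(pairs, 3 * math.pi / 4) >= significant:
            return "panic"     # candidate moment to buy from panicking sellers
        if sector_share(pairs, math.pi / 4) >= significant:
            return "passion"   # candidate moment to sell to passionate buyers
        return "neutral"

    # Mostly fearful traders with little desire: the market is flagged as panic.
    print(market_mood([(0.1, 0.9), (0.2, 0.8), (0.15, 0.85), (0.6, 0.4)]))
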
  • Disclosed herein is a system and technique in which the traditional recommendation engine paradigm is reversed to achieve a more accurate predictive model which mimics the subject's emotional motivations. Rather than classifying "subjects" objectively, the disclosed system and technique classifies "objects" subjectively relative to an individual's (or small group of individuals', e.g. a family's) behavior so that the resulting group of objects can be ranked and presented in a manner that provides greater emotional motivation for selection according to the individual's specific subjective desires and reluctances.
  • a plurality of content objects such as videos, music, art, books, consumer goods, financial instruments, etc.
  • content objects are subjectively analyzed according to a specific individual's tastes and behavioral history and presented to the individual in rankings or "channels" which can be explored or "surfed" multi-dimensionally.
  • content objects are processed through a unique neuropsychological modeling engine, utilizing data specific to an individual or group of individuals, and arranged according to their eligibility and the magnitude of the individual's predicted emotional motivation to select or purchase a content object.
  • a ranking position within a channel representing the individual's emotional motivation to select such content object, is determined.
  • Content objects are arranged in a first selectable dimension, according to a desire and fear vector, that is, from lower to higher emotional motivation for possible selection and presentation according to an individual's behavioral data. Content objects may be further arranged according to a second selectable dimension based on a time vector. As contemplated, multiple sequentially arranged versions of content objects which share one or more common parameters or metadata values, such as episodes within a television series, prequel/sequel movie releases, or books within a series, are arranged chronologically, allowing selection either forward or backward chronologically from a currently selected content object.
  • a system for accurate modeling of buyer/purchaser psychology and ranking of content objects within a channel for user initiated browsing and presentation comprises a neuropsychological modeling engine, a ranking application, and a behavior modeler, all of which communicate with each other as well as with a plurality of databases and a presentation system over either public or private networks.
  • the neuropsychological modeling engine utilizes metafiles associated with a content object, a purchaser/viewer model and a channel model to derive a fear vector value representing an individual's fear (reluctance) to select or purchase the content object and to further derive a desire vector value representing the individual's desire to select or purchase the offered item.
  • the neuropsychological modeling engine derives a value θ representing an individual's mood and a value m representing an individual's motivational strength to select or purchase the content object. If the value θ representing the individual's mood is within an acceptable predetermined range, the value m is used to determine a ranking for the content object relative to other content objects associated with the channel model for possible presentation to the purchaser/viewer.
  • a modeling system contains a neuropsychological modeling engine, a ranking application, and a behavior modeler, all of which communicate with each other as well as with a plurality of databases and a viewing system over either public or private networks.
  • the neuropsychological modeling engine utilizes metafiles associated with a content object, a viewer model and a channel model to derive a fear vector value representing an individual's fear (reluctance) to select or purchase the content object and to further derive a desire vector value representing the individual's desire to select or purchase the offered item.
  • the neuropsychological modeling engine derives a value θ representing an individual's mood and a value m representing an individual's motivational strength to select or purchase the content object. If the value θ representing the individual's mood is within an acceptable predetermined range, the value m is used to determine a ranking for the content object relative to other content objects associated with the channel model.
  • a method comprises: A) comparing metadata associated with a content object to metadata associated with a channel model;
  • D) comprises: D1) deriving, from the desire vector value and the fear vector value, a value θ representing an individual's mood.
  • D) further comprises: D2) deriving, from the desire vector value and the fear vector value, a value m representing an individual's motivational strength to select or purchase the content object.
  • D) further comprises: D3) if the value θ representing an individual's mood is within an acceptable predetermined range, using the value m representing an individual's motivational strength to select or purchase the content object to determine a ranking for the content object relative to other content objects associated with the channel model.
  • a system for modeling of buyer/purchaser psychology comprises: A) a network accessible memory for storing at least one channel model; B) a modeling engine operably coupled to the network accessible memory and configured to compare metadata associated with a content object to metadata associated with the channel model and for generating: i) a fear vector value representing an individual's fear (reluctance) to select or purchase the content object, ii) a desire vector value representing the individual's desire to select or purchase the offered item; and iii) a ranking for the content object relative to other content objects associated with the channel model, said ranking derived from the desire vector value and the fear vector value.
  • the modeling engine is further configured to generate: iv) a value θ representing an individual's mood, the value θ being derived from the desire vector value and the fear vector value, and v) a value m representing an individual's motivational strength to select or purchase the content object, the value m being derived from the desire vector value and the fear vector value.
  • the system further comprises: C) a ranking module responsive to the modeling engine for deriving a ranking for the content object relative to other content objects associated with the channel model from the value m generated by the modeling engine, if the value θ generated by the modeling engine is within an acceptable predetermined range.
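
Putting the preceding claims together, the ranking step could be sketched as follows: the engine turns a desire vector value and a fear vector value into a mood value θ and a motivational strength m, and ranks the content object within the channel only when θ falls inside the acceptable range. The derivations shown (atan2 angle, Euclidean magnitude) and the range bounds are assumptions consistent with the mood-disk areas quoted in this document, not the patent's exact formulas.

    import math
    from dataclasses import dataclass, field

    @dataclass
    class RankedObject:
        content_id: str
        m: float                     # motivational strength

    @dataclass
    class Channel:
        acceptable_theta: tuple = (-3 * math.pi / 8, 3 * math.pi / 8)
        ranking: list = field(default_factory=list)

        def consider(self, content_id: str, desire: float, fear: float) -> None:
            # assumed derivations of theta (mood) and m (motivational strength)
            theta = math.atan2(fear - 0.5, desire - 0.5)
            m = math.hypot(desire - 0.5, fear - 0.5)
            lo, hi = self.acceptable_theta
            if lo <= theta <= hi:                                   # eligibility gate on mood
                self.ranking.append(RankedObject(content_id, m))
                self.ranking.sort(key=lambda r: r.m, reverse=True)  # rank by motivation

    channel = Channel()
    channel.consider("movie-A", desire=0.9, fear=0.3)   # eligible, ranked
    channel.consider("movie-B", desire=0.2, fear=0.9)   # mood outside the acceptable range, not ranked
    print([r.content_id for r in channel.ranking])
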
  • a method for modeling of buyer/purchaser psychology comprising: A) receiving data associated with a viewing event; B) comparing metadata associated with a channel model to data associated with the viewing event; and C) modifying the channel model to account for the viewing event.
  • the method further comprises D) deriving at least one database query from the channel model.
  • the method comprises: Al) comparing metadata associated with a channel model to data associated with a viewer model.
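
A sketch of the behavior-modeler loop claimed above: a viewing event is compared against the channel model's metadata, the model is modified to account for the event, and a database query is derived from the updated model. The field names, the update rule and the SQL shape are hypothetical.

    from collections import defaultdict

    class ChannelModel:
        """Keeps per-keyword weights learned from viewing events (illustrative)."""

        def __init__(self, learning_rate: float = 0.1):
            self.keyword_weights = defaultdict(float)
            self.learning_rate = learning_rate

        def account_for_viewing_event(self, event: dict) -> None:
            # event metadata is compared to the model and the model is modified accordingly
            reward = 1.0 if event.get("watched_fully") else -0.5
            for keyword in event.get("keywords", []):
                self.keyword_weights[keyword] += self.learning_rate * reward

        def database_query(self, top_n: int = 3) -> str:
            # derive a query from the channel model (step D above); hypothetical SQL shape
            top = sorted(self.keyword_weights, key=self.keyword_weights.get, reverse=True)[:top_n]
            clause = " OR ".join(f"keywords LIKE '%{k}%'" for k in top) or "1=1"
            return f"SELECT content_id FROM content_objects WHERE {clause}"

    model = ChannelModel()
    model.account_for_viewing_event({"keywords": ["cooking", "italian"], "watched_fully": True})
    print(model.database_query())
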
  • a primary content stream is presented in a substantial portion of the user interface display area while a plurality of secondary content streams are presented in smaller sized display areas or thumbnail formats.
  • the multiple secondary content streams presented on the user interface each represent selectable content having a queued relationship to the currently selected (primary) stream which is selected and updated by the current user/viewer navigation commands. Such a queued relationship may exist between and among different content streams or between separately user selectable portions of a single stream or program content.
  • a data structure storable in memory and capable of being processed by a computer system comprises: data identifying a first content object associated with a subject; and data identifying a ranking of the first content object related to an emotional motivation of the subject to select the first content object.
  • the data structure further comprises data identifying one of the first plurality of other content objects having an emotional motivation value equal to, greater than or less than the first content object.
  • the data structure further comprises data identifying a chronological ranking value of the first content object among a second plurality of content objects having at least one common parameter value with the first content object, the second plurality of content objects having a ranking value greater or less than that of the first content object.
  • a method for enabling multidimensional surfing of content comprises: A) evaluating a first content object according to behavioral metadata associated with a subject to determine eligibility for ranking; B) assigning an emotional motivation value to the first content object, if eligible for ranking; and C) arranging for selection by the subject the first content object among a first plurality of content objects in order of increasing or decreasing emotional motivation values.
  • the method further comprises D) assigning a chronological ranking value to the first content object relative to a second plurality of content objects having at least one common parameter value with the first content object; and E) arranging for selection by the subject the first content object among the second plurality of content objects, in order of increasing or decreasing chronological ranking value.
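
The two selectable dimensions described above (emotional motivation and chronology) might be captured by a data structure and arrangement functions like the following; the field names are assumptions matching the data identified in the claims, not a prescribed schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RankedContentObject:
        content_id: str
        subject_id: str                            # the individual (or small group) the ranking is for
        emotional_motivation: float                # ranking value along the desire/fear dimension
        series_id: Optional[str] = None            # common parameter shared by related versions
        chronological_rank: Optional[int] = None   # position along the time dimension (e.g. episode order)

    def arrange_channel(objects):
        """First dimension: order by increasing emotional motivation for selection."""
        return sorted(objects, key=lambda o: o.emotional_motivation)

    def arrange_series(objects, series_id):
        """Second dimension: order related versions (e.g. episodes) chronologically."""
        episodes = [o for o in objects if o.series_id == series_id]
        return sorted(episodes, key=lambda o: o.chronological_rank or 0)
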
  • a video display system having navigation controls, such as a standard television remote control with directional cursor navigation controls (e.g. up, down, left, and right).
  • An application executing in conjunction with the video display interface intercepts and redefines the cursor navigation control commands from the remote to enable them to be utilized as the primary mechanism for surfing/selecting channel(s) and initiating viewing of previously aggregated and ranked content objects associated with the viewer's neuropsychological behavior as described herein.
  • the up and down cursor controls of a remote may be utilized to move through content objects, previously ranked within a channel, according to increasing or decreasing emotional motivation of the subject to select such content objects relative to a subject's behavioral data.
  • left and right cursor arrows of the remote may be utilized to select chronologically backward or forward other content objects, respectively, relative to a currently selected content object, for example, for past or future episodes of the same program series currently being viewed or recently viewed.
  • a method for use with a video display system having a video display and a plurality of cursor navigation controls for moving a user selectable sub-region of the video display area sequentially and/or incrementally in one or more directions comprises: A) receiving a first of the cursor navigation control commands; and B) redirecting the first cursor navigation control command to initiate presentation of a first content object from among a first plurality of content objects previously arranged according to a predefined criteria.
  • the first plurality of content objects are previously arranged in order of increasing or decreasing emotional motivation.
  • the first plurality of content objects are previously arranged in a chronological sequence relative to the same program series currently being viewed or recently viewed.
  • a video display system comprises: a video display; a plurality of directional navigation controls for sequentially moving a user selectable sub-area of the video display in one or more directions about the video display area; and control logic for receiving command signals associated with one of the navigation controls and for redirecting the command signal to initiate presentation of a first content object from among a first plurality of content objects previously arranged according to a predefined criteria.
  • the first plurality of content objects are arranged in order of increasing or decreasing emotional motivation for selection. Selection of a navigational control associated with a first direction initiates presentation of a first content object having an emotional motivation value equal to or greater than that of a current or previously presented content object.
  • the first plurality of content objects are previously arranged in a chronological sequence and selection of a navigational control associated with a first direction initiates presentation of a first content object having an earlier chronological value than the current or previously presented content object.
  • Selection of a navigational control associated with a second direction, opposite the first direction initiates presentation of a first content object having a later chronological value than a current or previously presented content object.
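
A sketch of the navigation redirection described above: up/down cursor commands move along the motivation-ranked channel, left/right move chronologically within a related series. The command names and data layout are illustrative; the claims only require that standard cursor commands be intercepted and redirected.

    class SurfController:
        """Redirects standard remote-control cursor commands to channel surfing (illustrative)."""

        def __init__(self, ranked_channel, series_order):
            self.ranked_channel = ranked_channel   # ordered by increasing emotional motivation
            self.series_order = series_order       # related versions ordered chronologically
            self.rank_pos = 0
            self.series_pos = 0

        def handle(self, command: str) -> str:
            if command == "UP":                    # towards higher emotional motivation
                self.rank_pos = min(self.rank_pos + 1, len(self.ranked_channel) - 1)
                return self.ranked_channel[self.rank_pos]
            if command == "DOWN":                  # towards lower emotional motivation
                self.rank_pos = max(self.rank_pos - 1, 0)
                return self.ranked_channel[self.rank_pos]
            if command == "RIGHT":                 # chronologically forward (e.g. next episode)
                self.series_pos = min(self.series_pos + 1, len(self.series_order) - 1)
                return self.series_order[self.series_pos]
            if command == "LEFT":                  # chronologically backward (e.g. previous episode)
                self.series_pos = max(self.series_pos - 1, 0)
                return self.series_order[self.series_pos]
            return self.ranked_channel[self.rank_pos]

    ctrl = SurfController(["doc-low", "movie-mid", "series-high"], ["ep1", "ep2", "ep3"])
    print(ctrl.handle("UP"), ctrl.handle("RIGHT"))
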
  • a user interface and associated controls that present a subject with viewable content and information across a set of interface devices and in a manner which most closely mimics human brain task delineation.
  • a first user interface presents visual content only, with minimal or no text, and may be implemented on a traditional television display.
  • Such first user interface predominantly uses and/or stimulates activity in the right hemisphere of the human brain.
  • a second user interface presents a content surfing interface and purchasing interface and may be implemented on a Personal Digital Assistant (PDA) or smart phone, tablet computer or even laptop computer.
  • Such second user interface predominantly uses and/or stimulates activity in the left hemisphere of the human brain, and also, to a certain extent, the right hemisphere of the human brain.
  • third and fourth user interfaces are capable of presenting mainly the textual based interfaces for content surfing and purchasing, as well as visual content and may be implemented with a traditional personal computer, including a desktop, tablet computer or laptop system, as well as other systems.
  • Such optional third and fourth user interfaces also predominantly use and/or stimulate activity in the left hemisphere of the human brain, and, optionally, to a limited extent, the right hemisphere of the human brain.
  • the two, three or more interfaces may be viewable simultaneously on separate devices, such as in a system that utilizes multiple platforms for the two brain hemispheres: a TV display (full Right, minimal Left), a smartphone/PDA (mainly Left, limited Right optionally), a personal computer (full Left, limited Right optionally), and a tablet computer (mainly Left, full Right optionally).
  • the different interfaces may be accessible sequentially from a single device such as a TV display or personal computer display.
  • a method for selecting and viewing program content comprises: A) providing a first user-interface, operably coupled to a compilation of selectable and viewable content objects, for presenting substantially visual, non-textual information of the content objects; and B) providing a second user-interface operably coupled to metadata associated with the content objects for presenting substantially textual information.
  • the method further comprises C) providing a third user-interface operably coupled to the compilation of selectable and viewable content objects and the metadata associated with the content objects for presenting one of visual content and textual information.
  • the disclosed collaborative cloud DVR system solves the above noted shortcomings in the current state-of-the-art by collaboratively sharing bandwidth and cloud storage capacity among a plurality of individual client Digital Video Recorder (DVR) devices.
  • each owner/user of a DVR device authorizes his or her individual DVR device to be utilized by both the cloud storage system server and any other owner/user of a DVR device in the respective service community and receives similar permission in return.
  • the collaborative cloud storage community which comprises the cloud storage system and all participating DVR devices acts collectively as a single entity authorized by the individual users/viewers to upload, remotely store and download licensed content for time shifted viewing, in a manner which rigorously protects legal rights of the content owners while overcoming the potential physical obstacles of limited bandwidth, power failures, incomplete uploads or downloads of content, limited cloud storage capacity, etc. More specifically, disclosed is a system and technique for collaborative upload of content to enable time shifted viewing thereof.
  • a user/viewer receives a streamed licensed copy of a content object from a source of content objects, typically an on-line content server or cable company, and transmits (uploads) either the entire content object, or a portion thereof, to the cloud storage server where a complete copy of the content is retained and made available for streamed transmission (download) back to the user/viewer of one of the DVR devices within the collaborative community upon request, including at times outside the viewable time window made available from the original source.
  • the process for collaborative recording and storage of content for playback occurs as follows.
  • An individual (or family) viewer/user schedules regular recording of a content object, such as a program series. The recorded content object is transmitted by their respective DVR device to a cloud storage server.
  • the viewer/user may schedule downloading and viewing of a program series in a time shifted manner, i.e. not in real time with recording thereof.
  • the percentage of content object transmitted to cloud storage server from the particular DVR client device may range from 0% to 100%.
  • the content uploaded under the viewer's license, using the collaborative cloud system, nevertheless remains 100%.
  • DVR software, in conjunction with the cloud storage server, verifies that the viewer has authority to upload/record content objects.
  • the viewer/user may download and view a content object, e.g. a program series, in a time shifted manner, if licensed, whether recorded from the viewer's own DVR client device, participating in the ccDVR or not.
  • the viewer/user may download from the cloud storage system, and view in a time shifted manner, content objects recorded by the viewer/user using the ccDVR system, including where the DVR device of another viewer community member physically executed the recording, if licensed to do so, and when the user/viewer device, operating stand-alone, would otherwise be unable to upload such content for various reasons, such as unavailable network bandwidth, power system failures, less than a complete copy of the content object being uploaded and/or retained by the cloud storage system, etc.
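
The collaborative upload and time-shifted download described above could be sketched as follows: each participating DVR uploads whatever portions it can (anywhere from 0% to 100%), the cloud store merges portions uploaded by different community devices into one complete retained copy per content object, and a licensed viewer can stream that copy back even if its own upload was incomplete. The portion indexing, the license check and all names are assumptions for illustration.

    class CollaborativeCloudDVR:
        """Minimal cloud-side model of the ccDVR community (illustrative)."""

        def __init__(self, total_portions: int):
            self.total_portions = total_portions
            self.portions = {}        # portion index -> data, merged across uploading devices
            self.licensed = set()     # viewer ids verified as licensed to record/view

        def authorize(self, viewer_id: str) -> None:
            # DVR software plus the cloud server verify the viewer's authority to upload/record
            self.licensed.add(viewer_id)

        def upload_portion(self, viewer_id: str, index: int, data: bytes) -> None:
            if viewer_id in self.licensed:
                self.portions.setdefault(index, data)   # portions may come from any community member

        def complete(self) -> bool:
            return len(self.portions) == self.total_portions

        def download(self, viewer_id: str):
            # time-shifted playback from the single retained copy, if licensed and complete
            if viewer_id in self.licensed and self.complete():
                return b"".join(self.portions[i] for i in range(self.total_portions))
            return None

    dvr = CollaborativeCloudDVR(total_portions=3)
    for viewer in ("viewer-a", "viewer-b"):
        dvr.authorize(viewer)
    dvr.upload_portion("viewer-a", 0, b"a")   # viewer-a's device lost power after one portion
    dvr.upload_portion("viewer-b", 1, b"b")   # viewer-b's device fills in the rest
    dvr.upload_portion("viewer-b", 2, b"c")
    print(dvr.download("viewer-a"))           # viewer-a still receives a complete copy
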
  • a method for delayed streaming of content comprises: A) providing a plurality of network accessible memory locations for storing data representing a content object; B) receiving into a first of the plurality of network accessible memory locations data representing portions of the content object from a first viewer device process having access to the content object data from a source; C) receiving into a second of the plurality of network accessible memory locations data representing portions of the content object from at least a second viewer device process having access to the content object data from the same or another source; D) upon receiving a request from the first viewer device process, transmitting the data representing the content object from the first network accessible memory location to the first viewer device process if the data stored in the first network accessible memory location represents a complete copy of the content object, else transmitting data representing the content object from the second network accessible memory location to the first viewer device process.
  • the data representing the content object is any of video, textual, graphic, photographic, audio, or haptic data.
  • the data representing the content object is accessible from another source during a first time period and wherein the data representing the content object is transmitted to one of the first and second viewer device processes during a second time period not identical to the first time period.
  • the method further comprises: E) maintaining in the plurality of network accessible memory locations a data structure received with the data representing the content object from one of the plurality of viewer device processes.
  • the method further comprises: E) receiving, from each of the first and second viewer device processes, authorization for the other of the first and second viewer device processes to transmit data representing a content object to one of the plurality of network accessible memory locations on its behalf.
  • a system for delayed streaming of content comprises: A) a cloud storage server comprising: A1) a network accessible memory for storing a plurality of copies of data representing a content object; A2) a network interface for receiving into the network accessible memory data representing at least portions of the content object from a plurality of viewer device processes having access to the content object from a content source; A3) a process for managing access to the network accessible memory by the plurality of viewer device processes; and A4) a streaming interface for transmitting one of the plurality of copies from the network accessible memory to one of the plurality of viewer device processes upon a request therefor.
  • the system further comprises B) a plurality of viewer devices operably coupled over a network to the content source and the cloud storage server, each of the viewer devices executing one of the plurality of viewer processes and further comprising: B1) program logic for determining which of a plurality of content objects are accessible from the content source and for requesting transmission of the data representing the content object to the viewer device at a first time; and B2) program logic for upload transmitting a copy of data representing the content object, received by the viewer device, to the cloud storage server along with authorization indicia identifying the viewer device.
  • each of the viewer devices further comprises: B3) program logic for requesting download transmission of the data representing a copy of the content object from the cloud storage server to the viewer device at a second time different from the first time, and B4) program logic for receiving a copy of the content object from the cloud storage server, wherein the copy of the content object received from the cloud storage server is the same as, or a different copy from, the copy of the content object uploaded to the cloud storage system by the viewer device.
  • the network accessible memory is further configured to store a data structure received with the data representing the content object from one of the plurality of viewer device processes, the data structure comprising any of: i) data identifying a portion of the content object; ii) data identifying an authorized viewer process; iii) data identifying one of temporal or sequential identifiers associated with the content object; or iv) data identifying the network address of the authorized viewer process.
  • a system for delayed streaming of content comprises: A) a cloud storage server comprising a network interface for receiving into an associated network accessible memory a plurality of copies of data representing a content object; and B) a plurality of viewer devices operably coupled over a network to the cloud storage server and the network accessible memory, wherein each of the plurality of viewer devices is authorized to receive, from the cloud storage system, data representing one of the copies of the content object transmitted to the cloud storage system by another of the plurality of viewer devices.
  • a method for delayed streaming of content comprises: A) providing a collaborative cloud storage system comprising: i) a cloud storage server comprising a network interface process for receiving into an associated network accessible memory a plurality of copies of data representing a content object, and ii) a plurality of viewer device processes operably coupled over a network to a content source and to the cloud storage server and network accessible memory; and B) receiving, from each of the plurality of viewer device processes, authorization for all other of the plurality of viewer device processes to transmit data representing a content object to the cloud storage system on its behalf.
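  • As a non-limiting illustration of the delayed streaming method summarized above, the following Python sketch models only the cloud-side decision of step D (serve the requesting viewer its own stored copy when complete, else fall back to a complete copy uploaded by an authorized peer); all names such as CloudStore, upload_portion and request_stream are hypothetical and introduced solely for this sketch:

    # Minimal sketch of the collaborative cloud DVR (ccDVR) delayed-streaming
    # decision of step D above. All names are hypothetical.

    class CloudStore:
        def __init__(self, total_portions):
            self.total_portions = total_portions   # portions making up one content object
            self.copies = {}                       # viewer_id -> {portion_index: bytes}
            self.authorizations = {}               # viewer_id -> peers allowed to upload on its behalf

        def authorize(self, viewer_id, peer_id):
            # A viewer authorizes another community member's device to record/upload for it.
            self.authorizations.setdefault(viewer_id, set()).add(peer_id)

        def upload_portion(self, viewer_id, index, data):
            # Steps B/C: receive a portion of the content object from a viewer device process.
            self.copies.setdefault(viewer_id, {})[index] = data

        def _is_complete(self, viewer_id):
            return len(self.copies.get(viewer_id, {})) == self.total_portions

        def request_stream(self, viewer_id):
            # Step D: prefer the requester's own complete copy, else an authorized peer's copy.
            if self._is_complete(viewer_id):
                return self.copies[viewer_id]
            for peer in self.copies:
                if peer != viewer_id and self._is_complete(peer) \
                        and peer in self.authorizations.get(viewer_id, set()):
                    return self.copies[peer]
            return None   # no complete, authorized copy is available yet

    # Usage: viewer "a" misses portions; viewer "b" recorded the whole object.
    store = CloudStore(total_portions=3)
    store.authorize("a", "b")
    for i in range(3):
        store.upload_portion("b", i, b"...")
    store.upload_portion("a", 0, b"...")
    print(store.request_stream("a") == store.copies["b"])   # True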
  • Figure 1A illustrates conceptually the Mood disk with brain activity varying in function relative to the Real and Imaginary axis in accordance with the disclosure
  • Figure 1B illustrates conceptually the Mood square in accordance with the disclosure
  • Figure 1C is a graph illustrating the decomposition of an Emotion in its independent and fully constituent components Fear and Desire in accordance with the disclosure
  • Figure 1D illustrates the transformation from the complex plane positive quadrant to the logarithmic complex mood space in accordance with the disclosure
  • Figure 1E illustrates the emotion and mood disk as a unity disk in accordance with the disclosure
  • Figure 1F illustrates the stereographic projection on a sphere and half sphere in accordance with the disclosure
  • Figure 1G illustrates the projections of the human eye and brain on visual stimuli in accordance with the disclosure
  • Figure 1H illustrates the mood square, as a representation of the mood unity disk in the Chebyshev metric
  • Figure 1I illustrates cortical activity on the mood unity square in accordance with the disclosure
  • Figure 1J illustrates the resulting stable moods on the mood unity square in accordance with the disclosure
  • Figure 1K illustrates the mood disk in accordance with the disclosure
  • Figure 1L illustrates the emotion and mood square in accordance with the disclosure
  • Figure 1M illustrates a range of mood variants on the mood disk in accordance with the disclosure
  • Figure 1N illustrates a prior art mental state model as proposed by Csikszentmihalyi
  • Figure 2 is a conceptual illustration of the natural representation of the state space of human psychology in accordance with the disclosure
  • Figure 3 is a graph illustrating morphing of the single quadrant phenomenon to the entire complex plane of the perception
  • Figure 4 is a graph illustrating morphing of the entire complex plane of the perception to the cortical experience, represented by a Riemann complex half sphere
  • Figure 5 illustrates conceptually effect of a desirable TV user interfacing, including exemplary values for the Fear coordinate f, the Desire coordinate d, the mood ⁇ and the motivational strength m, in the Mood disk in accordance with the disclosure;
  • Figure 6A illustrates conceptually the effect of an undesirable TV user interfacing, represented as a path in the Mood disk starting at relaxed mood and ending in an angry mood;
  • Figure 6B illustrates conceptually the sales paths of desire-based B2B sales, fear-based B2B sales and B2C sales on the mood disk in accordance with the disclosure
  • Figure 6C illustrates the sales paths of desire-based B2B sales, fear-based B2B sales and B2C sales on the mood disk, with their numbered stages in accordance with the disclosure
  • Figure 6D illustrates conceptually a mood disk with highlighted regions in the passionate, dominant, and relaxed sections thereof in accordance with the disclosure
  • Figure 7 illustrates conceptually a network environment in which the neuropsychological modeling engine disclosed herein may be implemented
  • Figure 8 illustrates conceptually a block diagram of a computer implemented neuropsychological modeling engine relative to a plurality of content objects in accordance with the disclosure
  • Figure 9A illustrates conceptually the relationship of the various components of the modeling system in accordance with the disclosure.
  • Figure 9B-C illustrate a flow diagram of the process utilized by the neuropsychological modeling engine to provide a ranking of content objects in accordance with the disclosure
  • FIG. 9D illustrates conceptually the relationship of the various components of the modeling system in accordance with the disclosure.
  • FIGS. 9E-F collectively and conceptually illustrate an algorithmic process performed by the neuropsychological modeling engine in accordance with the disclosure
  • Figures 10A, 10A1, 10B, 10B1, 10C, and 10C1 illustrate conceptually the data structures utilized by the modeling system and/or viewer system in accordance with the disclosure.
  • Figure 11A illustrates conceptually an interface system for a viewer in accordance with the disclosure;
  • Figure 11B illustrates conceptually the algorithmic process performed by redirection application.
  • Figure 11C illustrates conceptually the algorithmic process performed by the modeling system in accordance with the disclosure
  • Figure 11D illustrates conceptually another algorithmic process performed by the viewer system for navigation and display of content objects in accordance with the disclosure.
  • Figure 12A illustrates conceptually a channel which enables multidimensional surfing of content using traditional cursor navigation controls in accordance with the disclosure
  • Figure 12B illustrates conceptually the implementation of a channel associated with a specific subject/viewer in accordance with the disclosure
  • Figure 12C illustrates conceptually a sample data structure from which the groups within channels may be constructed in accordance with the disclosure
  • Figure 12D also illustrates conceptually a data structure of a channel model which enables multidimensional surfing of content using traditional cursor navigation controls in accordance with the disclosure
  • FIG. 13A illustrates conceptually a network environment in which the disclosed distributed upload technique may be implemented in accordance with the disclosure
  • Figure 13B illustrates conceptually a network environment in which the disclosed distributed upload technique may be implemented in accordance with the disclosure
  • FIG. 13C illustrates conceptually an algorithmic process to capture and upload content object fractions in accordance with the disclosure
  • Figure 13D illustrates conceptually an algorithmic process performed by a viewing system to request viewing of content in accordance with the disclosure
  • FIG. 13E illustrates conceptually an algorithmic process to upload content object metadata and fractional portions thereof in accordance with the disclosure
  • Figure 14 illustrates conceptually an interface system for a viewer in accordance with the disclosure
  • Figure 15 illustrates conceptually a data structure utilized in accordance with the disclosure
  • Figure 16 illustrates conceptually the relationship of components within display 80 including buffering of multiple content object data streams
  • Figure 17 illustrates conceptually a sample data structure which may be used with each displayed content object data stream
  • Figure 18 illustrates conceptually a user interface for presenting multiple content object data streams to a viewer
  • Figure 19 illustrates conceptually a user interface for presenting multiple content object data streams to a viewer
  • Figure 20 illustrates conceptually various graphic indicia associated with multiple content object data streams
  • Figure 21 illustrates conceptually a user interface for presenting multiple content object data streams that have been recommended to a viewer
  • Figure 22 illustrates conceptually a user interface for presenting multiple content object data streams that allow for surfing of nested dimensions
  • Figure 23 illustrates conceptually a network environment in which multiple virtual channel as disclosed herein may be implemented
  • Figure 24A illustrates conceptually a network environment in which a virtual recommendation channel as disclosed herein may be implemented
  • Figure 24B illustrates conceptually an algorithmic process that enables a virtual recommendation channel in accordance with the disclosure
  • Figure 25 illustrates conceptually a network environment in which a virtual program director channel as disclosed herein may be implemented
  • Figure 26A illustrates conceptually a network environment in which a virtual third party channel as disclosed herein may be implemented
  • Figure 26B illustrates conceptually an algorithmic process that enables a virtual third party channel in accordance with the disclosure
  • Figure 27 illustrates conceptually a network environment in which a virtual library channel as disclosed herein may be implemented
  • Figure 28A illustrates conceptually a network environment in which a virtual off-line channel as disclosed herein may be implemented
  • Figure 28B illustrates conceptually an algorithmic process that enables a virtual off-line channel in accordance with the disclosure
  • Figure 29A illustrates conceptually a network environment in which a virtual picture/user generated content channel as disclosed herein may be implemented
  • Figure 29B illustrates conceptually an algorithmic process that enables a virtual picture/user generated content channel in accordance with the disclosure
  • Figure 30A illustrates conceptually a network environment in which a virtual post channel as disclosed herein may be implemented
  • Figure 30B illustrates conceptually an algorithmic process that enables a virtual post channel in accordance with the disclosure
  • Figure 31A illustrates conceptually a network environment in which a virtual mail channel as disclosed herein may be implemented
  • Figure 31B illustrates conceptually an algorithmic process that enables a virtual mail channel in accordance with the disclosure
  • Figure 32 illustrates conceptually a remote control having designated controls for providing explicit viewer feedback in accordance with the disclosure
  • Figure 33 illustrates conceptually an algorithmic process that enables explicit feedback from the viewer system in accordance with the disclosure
  • Figure 34 illustrates conceptually the buying cycle of desire-based B2B sales in accordance with the disclosure
  • Figure 35 illustrates conceptually the buying cycle of fear-based B2B sales in accordance with the disclosure
  • Figure 36 illustrates conceptually the buying cycle of B2C sales in accordance with the disclosure
  • Figure 37 illustrates conceptually the relationship of the various components of the modeling system in accordance with the disclosure
  • Figure 38A illustrates conceptually a collaborative cloud DVR system in accordance with the disclosure
  • Figure 38B illustrates conceptually a DVR device in accordance with the disclosure
  • Figure 38C illustrates conceptually an interface system for a viewer in accordance with the disclosure.
  • Figure 39 illustrates conceptually a network environment in which the disclosed collaborative upload technique may be implemented in accordance with the disclosure.
  • Proposed herein are specific characteristics of the parallel human thinking in the left and right cortex, including a proposal for explaining the underlying neurotransmitter mechanism.
  • Positive and negative human emotions are defined and the bivalence of emotions under this definition proposed.
  • the proposed mathematical independence of positive and negative emotions is supported with their largely independent physiological constitution. This forms the basis for the mathematical classification of emotions and moods in a two dimensional emotion space. Separate forms of consciousness are defined and an explanation of how mood emerges from consciousness is provided.
  • the moods are well described in a logarithmic complex emotion plane, formed by two perpendicular dimensions, expressing the natural Fear and Desire components.
  • the mathematical transformation from right cortex to left cortex representations, and its inverse transformation, is derived as the complex 1/z function.
  • the language disorder Aphasia typically results from lesions in the language-relevant areas of the frontal, temporal and parietal lobes of the brain, such as Broca's area, Wernicke's area, and the neural pathways between them. These are all areas that are typically located in the left hemisphere in right-handed people. When we further refer to the left or right brain, we implicitly refer to what is typical in right-handed people using a Western language.
  • the left cortex adopts an analytical approach to perception and cognition, while the right cortex grasps information holistically or synthetically.
  • Hacaen et al. observed that patients with left brain damage may make errors of detail in copying and remembering complex figures, but the intact right hemisphere was adept at grasping the general configuration of the figure.
  • patients with right hemisphere damage would attempt a piecemeal strategy of copying and remembering, in which the left hemisphere was unable to integrate details within the meaningful whole.
  • Bogen and Bogen showed that the isolated left brain is impaired in perceiving whole configurations of geometric designs and attempts to analyze the patterns into discrete parts. Over the next two decades a number of studies showed that these differential hemispheric skills in holistic and analytic perception extend to the normal population (Allen 1983; Kinsbourne 1978).
  • the left cortex is specialized in convergent thinking, the right in divergent thinking.
  • Analytical thinking is convergent, whereas holistic or synthetic thinking is divergent. Indeed, both language and logic result from convergent thinking : language converges a multitude of visual and/or auditive impressions to linguistic objects. Logic converges phenomena and their interactions to deterministic relationships, leaving no place for contradictions or paradoxes.
  • the difference between convergent and divergent thinking also relates to the difference between serial and parallel processing respectively.
  • Reading a text for example requires the serial processing of words, one after the other.
  • Spatial awareness on the other hand requires the parallel processing of visual stimuli, which are synthetically combined into one holistic whole image.
  • the brain areas at the frontal left are specialized in directing and organizing the convergent thinking of logic, those at the right create divergent thinking.
  • the lateralization of serial and parallel processing is for instance supported by the fact that the left cortex is specialized for unimodal sensory and motor areas, whereas the right brain is specialized for cross-modal association areas (Goldberg and Costa 1981).
  • the lateralized neurotransmitter pairs dopamine-acetylcholine and norepinephrine-serotonin explain lateralized thinking. Whereas norepinephrine and serotonin are right lateralized in the brain, dopamine and acetylcholine are left lateralized (Tucker and Williamson 1984; Arato et al. 1991; Wittling 1995). Serotonin is thought generally to act as an inhibitory neurotransmitter reducing arousal and the activity of cerebral neurons, especially of the noradrenergic (i.e. norepinephrine-containing), right-hemisphere-dominant arousal system (Tucker and Williamson 1984). A similar process takes place in the left hemisphere, where dopamine inhibits stimulus-evoked acetylcholine release from cholinergic interneurons (Stoof et al. 1992).
  • the main feedforward messenger is dopamine, a neurotransmitter that is known to help us to control our movements and to focus. Both control and focus require inhibition. In order to control one's movements, other non-deliberate movements need to be suppressed. And in order to focus, the remainder should not get attention.
  • the same feedforward inhibition of dopamine, combined with the excitatory feedback of acetylcholine is proposed as the underlying mechanism of convergent thinking.
  • the left frontal cortex reduces overall attention to give attention to the analytically reduced essence of an experience instead of to the overview.
  • Norepinephrine works in this mechanism as a positive feedforward messenger that increases arousal, activating wider parts of the brain, thus allowing us to see the whole or the big picture.
  • Serotonin again reduces the arousal as negative feedback to control the level of arousal or frequency of neuronal activation.
  • In 1997 neurologist Gazzaniga described an experiment involving paintings of faces made out of fruit. They were painted in such a way that one could easily recognize a human face in the overall image.
  • the individual fruit items were easily recognizable as well.
  • when the image was presented to the left visual field of a split brain patient, and thus was processed by his right brain hemisphere, the patient recognized the face of a person.
  • when the image was presented to the right visual field, connected with the left cortex, the patient recognized and named the individual fruit items.
  • Positive emotions are mental dispositions that attract or subject a person to another status.
  • Negative emotions are mental dispositions that reluct or object a change of personal status.
  • emotions can be modeled in two perpendicular dimensions, rather than in one dimension where positive and negative emotion would be correlated negatively.
  • positive and negative emotions can be represented as two independent or perpendicular basis vectors in mathematical emotion space, allowing for decomposition of any emotion in its positive and negative emotion components, represented in a 2-dimensional domain, where emotions can be represented as vectors, coordinates or real and imaginary parts of complex numbers, such as represented in Figure 1C.
  • the desire part is the real part of the complex emotion z
  • fear is its imaginary part.
  • d and f are the Desire and Fear coordinates, represented as (d,f), of a specific emotion. They result from projecting the emotion E on the orthogonal basis of the Desire and Fear dimensions.
  • Emotional states are most naturally represented on a logarithmic scale. This is in line with how, e.g., the human perception of auditive and visual stimuli is characterized, i.e. by a logarithmic transmission from physical phenomenon to brain representation, as expressed by the Weber-Fechner law. This law, which applies to both experience and cognition, states that the relationship between the physical magnitude of stimuli and their perceived intensity is logarithmic.
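  • As an illustrative rendering of the Weber-Fechner law just cited (the symbols S for stimulus magnitude, S_0 for the detection threshold and k for a modality-dependent constant are introduced here only for this sketch), the perceived intensity p, and by analogy the logarithmic emotion coordinates, may be written

    p = k \,\ln\!\left(\frac{S}{S_{0}}\right), \qquad (d_{\log},\, f_{\log}) = (\ln d,\ \ln f),

  so that, in one consistent reading of the construction that follows, the individual's average level (d, f) = (1, 1) maps to the origin of the logarithmic mood plane.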
  • Emotions further instruct the frontal cortex to think or reflect upon the emotionally associated cortical representations.
  • This emotional feedforward starts at the orbitofrontal cortex, which is therefore described as the neocortical representation of the limbic system (Nauta 1971).
  • Other parts of the prefrontal cortex then further direct the thinking about the emotionally associated representations (Tucker et al. 1995; Davidson et al. 2000), creating attention.
  • the cortex reflects upon the emotionally associated information and literally reflects it, i.e. sends processed information back to the limbic system, where emotional valence can subsequently be altered.
  • This mechanism forms an emotional cortical limbic feedforward-feedback loop. Accordingly emotions are iteratively reflected and updated.
  • This iterative process may converge to a certain mood, where a mood is defined as a more consciously perceived and more stable emotion, spanning a certain period of time.
  • the first type of consciousness is physiologically linked with the activity of the brain stem.
  • Phenomenal consciousness is the experience of phenomena, as being aware of an emotion or a representation, without it being consciously accessed by cognitive attention (Block 1996). In phenomenal consciousness one can be conscious of subconscious representations and their associated emotions, as expressed in the statement "A subconscious feeling withheld me”.
  • the third type of consciousness includes cognitive awareness.
  • the statement "Sure I knew it, I just didn't think of it" expresses the existence of subconscious knowledge made conscious by thinking of it. Thinking of it means paying cognitive attention to it. Since cognition is a function of the frontal cortex (Bianchi 1922; Kraepelin 1950; Luria 1969), the involvement of the frontal cortex is a prerequisite for this type of consciousness.
  • the disclosed physiological mood model is best illustrated with the concrete example of visual perception.
  • feedforward is launched to the primary visual cortex in the occipital lobes. Abstraction from stimulus to neural patterns is done in the occipital, parietal and temporal cortex. In the left temporal lobe objective categories are recognized, while the right temporal lobe recognizes subjects.
  • the experienced phenomenon emerges from the subcortical limbic association of emotional valence with the temporal cortical representations of the stimuli derived from the occipital cortex.
  • the emotion is associated with the cortical representation of the phenomenon by the amygdala and hippocampus, causing the association to remain, even after the stimulus has disappeared.
  • the emotion associated with the phenomenon is fed forward through the orbitofrontal cortex of the limbic brain, entering the pre-frontal cortex.
  • the attention and working memory of the pre-frontal cortex direct the cognition process based on the emotional input received from the limbic brain.
  • the left pre-frontal cortex directs objective, converging, language based cognitive consciousness, while the right pre-frontal cortex brings the subjective, diverging, holistic imaginary consciousness. Both the left and the right cognitive consciousness solicit the other areas of the brain through positive and negative feedforward and feedback as described earlier.
  • the cortical representation and its associated emotions are updated, each time new stimuli are experienced. These new stimuli may result from a changed physical phenomenon caused by the actions taken. However, the changing physical phenomenon may also be independent of actions taken, because emotions are not only updated when new stimuli are presented, but also when simply thinking about emotional representations, as further discussed.
  • Emotional consciousness is part of cognitive consciousness when under attention of the pre-frontal cortex but emotional consciousness is also part of phenomenal consciousness, when sensory input associated with the phenomenon is active or when the phenomenon is remembered and attended.
  • the divergent subjective thinking attention of the right pre-frontal cortex leads to an increase in emotion intensity, the arousal associated with that emotion.
  • the objective convergent thinking attention of the left prefrontal cortex leads to decrease of emotion intensity.
  • Cognitive attention of the frontal cortex is focused. The amount of phenomena that get attention is limited. Multiple emotions however can exist in parallel, explaining why we can have mixed feelings.
  • Emotions and moods are well represented in the logarithmic complex emotion plane.
  • the emotion and mood space can be represented as the positive quadrant of the complex plane where (1, 1) represents the individual's average level of Fear and Desire.
  • the complex plane representing the emotion domain can be mapped onto a mood disk.
  • In order to represent this logarithmic complex mood plane in a more compact way, without using the notion of infinity, we represent moods on a unity disk, called the mood disk, as shown in Figure 1E.
  • the logarithmic complex mood plane is first mapped onto a Riemann sphere using the inverse stereographic projection, indicated in Figure 1F.
  • the points A and B are projected through the stereographic projection onto the Riemann sphere as S(A) and S(B).
  • this complex plane is projected as well onto a half Riemann sphere with center S(∞), as shown in Figure 1F.
  • A and S(A), as well as B and S(B), are projected onto HS(A) and HS(B) respectively.
  • this half Riemann sphere is projected onto the unit disk at the bottom of the half Riemann sphere by projecting from 0, projecting HS(A) onto A_MD and HS(B) onto B_MD.
  • the resulting unit disk projection allows for the natural representation of moods on the mood disk, without representing the intuitively less accessible notion of infinity.
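  • The first step of this chain of projections can be sketched numerically. The Python sketch below implements only the standard inverse stereographic projection of the (logarithmic) complex mood plane onto the unit Riemann sphere; the subsequent half-sphere and mood-disk projections described above would be composed with this map and are omitted here. The function name is hypothetical.

    # Standard inverse stereographic projection of a point z of the plane onto
    # the unit Riemann sphere, projecting from the north pole S(inf) = (0, 0, 1).

    def inverse_stereographic(z):
        x, y = z.real, z.imag
        r2 = x * x + y * y
        return (2 * x / (1 + r2), 2 * y / (1 + r2), (r2 - 1) / (1 + r2))

    print(inverse_stereographic(1 + 1j))   # S(A) for A = 1 + 1j, a point on the unit sphere
    print(inverse_stereographic(0j))       # S(0) = (0.0, 0.0, -1.0), the south pole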
  • the projections from the positive quadrant of the complex plane onto the complex plane and further onto the half Riemann sphere and the mood disk are based on the projections our eyes and brain perform on physical visual stimuli. From these transformations the cortical right-left and left-right transformation is derived.
  • let S(∞) be the eye's pupil and the Riemann sphere the retina of the eye.
  • S(∞) on the surface of the retina is indeed the projection point of the physical points lying at ∞ when physical reality is seen in the eye's focal plane, translated parallel to itself, as the complex plane of Figure 1F.
  • S(0) is the projection point of the center of the focal plane, represented as the complex plane with center 0, which, as a static image of the focal plane of one eye, is represented as the half Riemann sphere.
  • this mapping of the mathematical points 0 and ∞ to physiological points of the human eye is shown in Figure 1G.
  • the mapping of 0 onto ∞ and of ∞ onto 0 is done by the complex function 1/z, projecting the complex plane onto itself.
  • the stereographic projection is the physical projection of light at certain angles of incidence alpha and beta on the retina.
  • the overview image of spatial representation is created under the direction of the right frontal cortex.
  • the light projection through the pupil is imaginarily reversed, i.e. the physical projection of the human eye is inverted, mathematically resulting in the half Riemann sphere.
  • This inverted right cortical whole static image of visual stimuli of one eye has therefore two dimensions, the two dimensions of the surface of the retina.
  • Three dimensional sight occurs when static images, seen from different angles, are combined. Its characteristic transformation is based on simple trigonometry but is not relevant here.
  • Logic causes the left cortex reasoning to be linear. Looking at one aspect of a phenomenon, a language-like category is projected onto the whole image, resulting in a dimension. The entire space of the spatial right cortex representation is projected onto one dimension. Only in this linear reduction of the whole image does negation become possible, by enforcing the law of non-contradiction.
  • the left frontal cortex logically thinks in one dimension and by repeating its characteristic reducing language projection on the space that remained after projecting the first dimension, more linearly independent dimensions are projected, resulting in multidimensional left cortex thinking such as lines, planes and cubes.
  • the complex plane is a left brain projection and the half Riemann sphere is a right brain projection. More specifically, the complex plane is the left pre-frontal brain representation of the spatial representation created under the direction of the right pre-frontal cortex.
  • 1/z is the transformation between left and right cortex representation.
  • the 1/z projection allows for the transformation of a left brain thinking analysis into an intuitively more accessible right brain image. Therefore this 1/z stereoscopic projection has been applied to represent the result of the mathematical decomposition of an emotion vector in the more intuitively accessible domain of the emotion disk.
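  • Written out, the left/right cortical correspondence invoked above is the complex inversion, which exchanges 0 and ∞ and is its own inverse, so that applying it twice returns the identity (consistent with the later statement that humans imagine the world as it is):

    w = \frac{1}{z}, \qquad 0 \mapsto \infty, \quad \infty \mapsto 0, \qquad \frac{1}{1/z} = z.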
  • This emotion and mood unit square is easily accessible to both the right and the left brain consciousness and is therefore the preferred domain to represent emotions and, in fact, any two dimensional phenomenon.
  • the right hemisphere on the contrary is specialized for stress responsiveness and mastering acute demands of the external environment (Wittling 2001).
  • the right hemisphere is e.g. typically active during stress anticipation (Davidson 2000).
  • the right hemisphere out of desire for a solution, searches a route to escape the negative emotion.
  • the typical subsequent focusing happens under the direction of the left hemisphere.
  • the right cortex subjects to emotions as well as projecting subjects, as persons, onto emotions.
  • Successful Desire nurturing happens when subjective, holistic, divergent thinking under the direction of the right pre-frontal cortex, pays attention to the phenomenal representation associated with the Desire.
  • Desire may not necessarily only be felt for persons, but also for objects. However, one would probably also agree with us that Desire for material objects is most often Fear for loss of these objects or Desire to be like another person.
  • the cortex is unsuccessfully used and therefore misused when the right cortex is used to subjectively diverge negative emotion or when the left cortex is used to objectively converge positive emotion.
  • Unsuccessful stress coping strategies typically increase negative emotion by subjecting to the negative emotion, typically using subjective holistic right cortex thinking.
  • Anger is such an unsuccessful surrendering to negative emotions, which are projected onto a subject, the person who is characterized as being bad or evil. E.g. lynching a person after the occurrence of an accident does not hedge fear and does not prevent further accidents. Outing of negative emotion in anger does not reduce the negative emotion and does not lead to objective measures to object the cause of the Fear. Moreover, it's not healthy. It has been shown that people who lose their temper are 19 per cent more likely to die of a heart attack than those who keep their emotions under control (Chida and Steptoe 2009).
  • the cortical transformation of the dominant emotion pair can be modeled by a 2x2 matrix. Changes in emotion and thus moods are either a result of new sensory input from the own body or environment or from interaction with other brain regions that change emotion. In the case where there is no new emotional input, the emotion is mainly changed by cortical reflection under direction of the pre-frontal cortices.
  • the cortical transformations can be modeled by a 2x2 matrix describing the transformation of the dominant positive and negative emotional components (d, f) through reflection on the left and right cortex, where (d, f) are the emotional coordinates in the positive quadrant of the complex emotion plane at the left side of Figure 4, prior to the transformations to the emotion disk or square.
  • d_{t+1} and f_{t+1} are the dominant basic emotions at the time just after reflection
  • d_t and f_t are the same dominant basic emotions at the time just before reflection.
  • RC^f and LC^f are respectively the amplification factor of the right cortex and the reduction factor of the left cortex on the Fear component, during the time of reflection.
  • RC^d, LC^d, RC^f and LC^f depend on the starting conditions, as discussed further, and on how effective and efficient (or quick) one iteration is (one possible explicit matrix form is sketched after this item).
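  • One possible explicit form of this 2x2 reflection model, under the simplifying assumption (introduced here, not stated in the text) that the right-cortex and left-cortex factors act multiplicatively and independently on their respective Desire and Fear components, is

    \begin{pmatrix} d_{t+1} \\ f_{t+1} \end{pmatrix} =
    \begin{pmatrix} RC^{d}\, LC^{d} & 0 \\ 0 & RC^{f}\, LC^{f} \end{pmatrix}
    \begin{pmatrix} d_{t} \\ f_{t} \end{pmatrix},

  so that products RC^d LC^d greater than one grow Desire across successive reflections, while products RC^f LC^f smaller than one hedge Fear.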
  • Subject representations are not effective in reducing Fear, as object representations are not effective in increasing Desire. And not all object and subject representations are equally efficient in increasing Desire or in decreasing Fear.
  • a relatively high d component typically coincides with a high RC^d.
  • to nurture the Desire, the right cortex executes an effective and efficient RC^d, causing the blood flow in the right cortex to increase.
  • the mood square X/Y axis of Figure 1H can also be related to or replaced by the level of lateralized cortical activity as shown in Figure II. This does not however apply, in case the stable mood resulted from unsuccessful transient behavior, as will be discussed in next section.
  • Unsuccessful transient behavior typically occurs when an active right hemisphere is confronted with Fear, or an active left hemisphere with Desire.
  • RC^f or LC^d can be different from zero and equal to the RC^f or LC^d of the previous dominant mood.
  • the left cortex does not know or remember a pattern to decrease Fear while the right cortex surrenders to and increases Fear, ultimately causing panic as the highest level of Fear and the lowest level of Desire.
  • the angry or panic emotion was caused by a transient behavior with a high level of right cortex activity and a low level of left cortex activity.
  • Emotional changes do not require sensory input. They can also occur when existing, but unattended cortical representations become attended. As discussed earlier, new emotionally associated sensory input can conquer dominance and ultimately change our mood. However, we do not necessarily need new input for our emotions and mood to change. The attention of our working memory can shift from a certain cortical representation to an associated, already existing but unattended other cortical representation. When the emotional coordinates (d,f) associated with the latter cortical representation gain dominance in our limbic system, they will change our emotions and ultimately our mood.
  • the naming of the mood domains shown in Figure 1J is neither exclusive nor exhaustive. More mood nuances or alternatives can be given.
  • the (d,f) coordinates corresponding to the mood Anger can also result in feelings of guilt or self-hatred when the subject onto whom the negative emotion is projected is the self and not the other.
  • any human mood besides the eight basic mood names used in Figures 1J and 1K can be mapped on the mood disk.
  • Figure 1M shows a non-exhaustive list of moods, with their corresponding position on the mood disk.
  • the pleasant feeling of being in control is a less intense form of the mood Dominant.
  • the position of the control mood, as well as the position of any of the other moods in the list, is based on personal introspection and empathic understanding.
  • one or more of the eight basic moods may be refined in one of its variants.
  • Figure 1N illustrates a prior art mental state square published by Csikszentmihalyi in his theory of motivation at work.
  • the two dimensions of Csikszentmihalyi's model are challenge level and skill level.
  • Csikszentmihalyi's square is a special case for motivation at work where, when challenge is high, uncertainty about social rejection as a form of Fear is high and the left brain cortex needs to be active to hedge such Fear.
  • when skill level is high, the desire for self-realization by socially contributing, as a form of Desire, is high, resulting from a higher level of right brain cortical activity.
  • business-to-consumer sales is best done in the -π/8 to -3π/8 area, whereas business-to-business sales can be best positioned in the +π/8 to +3π/8 area.
  • sales and/or marketing will be most successful if they bring consumers in a relaxed mood and businesses in a passionate mood. Indeed, fear is usually greater when purchasing on behalf of a business than when purchasing as a consumer.
  • the reason is twofold: first, the purchasing sum in B2B sales is usually (much) higher, so the risk of loss is bigger. Second, social pressure is usually higher in a B2B purchase. If a consumer purchases a product, e.g.
  • a value should be located at the -3π/4 mood in the mood disk, meaning it is usually to the advantage of a business/consumer to buy from an individual who feels apathy towards the product one wants to buy from him. In all other cases, the selling price will usually end up higher. For example, when the consumer/seller is concerned he may still need the product or regret the sale, or when he absolutely loves the product (passionate mood), he will probably be less willing to sell it.
  • a value should be located at the +3π/4 mood in the mood disk. For example, it is usually to the advantage of a purchaser to buy from a B2B sales person who is anxious (e.g. about not hitting his target). Similarly, it is usually to the advantage of a consumer to buy from a brand that lowered its prices because it is anxious about competition.
  • TV viewing is visual and therefore a specialization or virtuous habit of the right brain hemisphere. This has been confirmed by brain research: in 1979 Herbert E. Krugman showed TV is a relatively right-brain medium, with the right brain in general being about twice as active as the left brain during TV watching. Thus TV viewing should be positioned in the bottom right quadrant of the mood disk. Indeed, the main reason people watch TV is to relax (Barbara & Robert Lee, 1995). An important consequence of the above is that TV user interfacing should be right brain interfacing, i.e. it should excite the right brain and not the left.
  • Figure 6A illustrates conceptually the effect of an undesirable TV user interface experience, represented as a path in the mood disk starting at relaxed mood and ending in an angry mood. If the viewer's user interface contains too much textual content or requires the user to navigate sequentially through pulldown menus, wizards or other typical personal computer operating system based user interfaces, the left brain will have to be activated. Therefore the position in the mood disk moves up from the right bottom to the right top quadrant. When the left brain has been used in work all day, this causes frustration which is a negative emotion categorized under Fear. In 1980 Herbert E. Krugman showed that, indeed, interruption during TV watching causes frustration, which appears to be related to the left brain being 'turned on' again, thereby interrupting right brain relaxation.
  • the viewer will stop being relaxed, reducing his or her motivational strength.
  • the left brain typically controls and therefore suppresses the right brain. Therefore the desire coordinate will be reduced.
  • the viewer's fear coordinate dramatically rises, bringing the viewer finally over the path indicated in Figure 6A to a position in the Angry area.
  • the viewer will get angry at the provider of the TV services or content who is forcing him through a user interface that is perceived as hostile. Successfully soliciting a purchase from an angry person is not entirely impossible, although very difficult.
  • a left brain interface inhibits Video on Demand (VOD) sales and other sales over TV from growing Desire, actively frustrates existing Desire and creates Fear.
  • the management of one's television system including such things as choosing the content of virtual channels, choosing which Facebook users can post recommendations on one's Facebook channel, setting a channels' order number, managing one's recommendation list, etc. is typically done in an excited mood. Therefore, systems for television management are located in the top right quadrant. As a result, they should excite both the right and left brain, by balancing visual with textual or analytical data.
  • Reading and researching on an e-reader device is preferably done in a relaxed mood.
  • E-reader users who want or need to buy material don't want to be interrupted by the typical operations required for online order placement and payment, such as selecting a supplier, entering credentials, etc. These latter kinds of actions are typically done in another mood, which is characterized by a higher level of fear and, thus, is located in the Dominant or Passionate mood area of the mood disk.
  • Banking systems should not create value, but secure it.
  • Good banking is an objective left brain activity that does not subjectively speculate (which is a right brain activity) and therefore does not desire profit or value; it only hedges the fear of money not being trustworthy.
  • Good banking is not entrepreneurial, but is a collaborative effort of objectively securing value in money. Therefore, banking is located in the top left quadrant of the mood disk.
  • the mood optimal for the type of sale at hand differs from the mood typically associated with the platform at hand.
  • the type of sale has a stronger influence on the design of a purchasing platform/process, than the type of platform.
  • Modeling a user's Fear and/or Desire component towards a specific product or content can be done through collecting conscious user feedback and/or through unconscious measuring of e.g. viewing and surfing behavior during TV watching or website browsing.
  • Reducing the Fear component f is preferably done in a Left Brain Activity environment, such as with a text based work environment, and not e.g. during TV viewing. Accordingly, viewing and surfing behavior is well suited to model the d coordinate, while active text based input is suited to model the f coordinate.
  • For the example of television, watching a show entirely will increase the desire component associated with that show, whereas zapping away from the show decreases that same desire component.
  • ordering one's list of preferred TV programs on an internet site on a computer allows modeling of the f coordinate: moving a program up in the list, decreases its fear coordinate, moving it down increases its fear coordinate.
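  • A minimal sketch of this implicit/explicit modelling, with hypothetical class and method names and arbitrary step sizes chosen only for illustration, might update the (d, f) coordinates attached to a content object as follows:

    # Hypothetical behaviour modeller: viewing behaviour drives the Desire
    # coordinate d, explicit list re-ordering drives the Fear coordinate f.
    # The multiplicative step size of 10% is arbitrary.

    class EmotionModel:
        def __init__(self, d=1.0, f=1.0):          # (1, 1) is the viewer's average level
            self.d, self.f = d, f

        def watched_entirely(self, step=0.1):
            self.d *= 1 + step                      # completing a show increases Desire

        def zapped_away(self, step=0.1):
            self.d *= 1 - step                      # zapping away decreases Desire

        def moved_up_in_list(self, step=0.1):
            self.f *= 1 - step                      # moving a program up lowers its Fear

        def moved_down_in_list(self, step=0.1):
            self.f *= 1 + step                      # moving it down raises its Fear

    model = EmotionModel()
    model.watched_entirely()
    model.moved_up_in_list()
    print(round(model.d, 2), round(model.f, 2))     # 1.1 0.9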
  • the right prefrontal cortex of the human brain has evolutionarily been developing to deal with visual data: not the actual reception and ordering of this data, which is done in the left and right basal cortex for the right and left eye, but the imagining of a three-dimensional space outside the brain.
  • the right brain prefrontal cortex imagines: projects an image outside us.
  • the transformation of imagination projecting from basal to frontal cortex is a 1/X transformation.
  • Light which is passing through the diaphragm formed by the pupil in the centre of the iris of the eye and projected on the retina follows exactly the same path as projecting the complex Riemann sphere onto another Riemann sphere, where zero is projected on infinite and vice versa.
  • the right brain had no other option than to develop the 1/X function physically in the projection of the basal cortex onto the prefrontal cortex.
  • This projection simulates the inverse projection of light through the pupil onto the retina, which is a 1/X transformation by itself; one 1/X transformation executed after another 1/X gives the identity transformation, meaning humans imagine the world as it is.
  • This 1/X transformation is known to be divergent, creating the notion of infinity and zero in the brain.
  • the left brain pre-frontal cortex has been developing to cope with sound (not music; music is a combined effort of the left and right brain, as is mathematics). Instead of, at each period of the brain wave, inverting by projecting a two-dimensional map of the whole picture onto the prefrontal cortex, the left brain has been specialized to find patterns in details when analyzing two-dimensional maps formed by writing a column of amplitudes, Fourier transformed by the cochlea, as a function of frequency. Therefore the left brain is specialized in detail and control.
  • the best way to deal with Fear is to control it by a detailed analysis and action. Fear starts bigger and through hard labor of the left brain can be controlled or hedged. Therefore the dynamic of Fear is virtuously convergent and viciously divergent.
  • Fear and Desire are independent and together constitute the entire human (and probably also animal) state space of emotions, called the psychology space, which can be mathematically translated as Fear and Desire are the eigenvectors of the psychology space.
  • Any psychological transformation (such a transformation may occur, for example, as a result of one's interaction with a potential customer when trying to sell something to him or her) can be decomposed into two components, one in the Fear and one in the Desire dimension, that are independent of each other and together constitute the entire psychological transformation or process.
  • the psychology space therefore can be represented by a two dimensional surface, more specifically, a function range of the two dimensional surface of the brain cortex. Therefore, the dimension of the psychology eigen space is two.
  • the right brain hemisphere is specialized in dealing with Desire and therefore is most virtuously used to deal with Desire, but can also deal with Fear, but then typically viciously.
  • the left brain hemisphere is specialized in dealing with Fear and therefore is most virtuously used to deal with Fear, but can also deal with Desire, but then typically viciously.
  • Every Emotion E is a vector with a magnitude |E| and a direction θ, which is graphically represented in Figure 1C.
  • FIG. 1C illustrates graphically the decomposition of an Emotion E = E_d + E_f into its independent and fully constituent components of Fear and Desire, which may be mathematically expressed as follows:
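  • In the notation of the preceding items (Desire as the real part and Fear as the imaginary part of the complex emotion), the decomposition may be written as

    E = E_{d} + E_{f} = d + i f, \qquad d = |E|\cos\theta, \quad f = |E|\sin\theta, \qquad m = |E| = \sqrt{d^{2} + f^{2}}, \qquad \theta = \arctan\frac{f}{d},

  where m is the motivational strength and θ is the direction of the emotion vector in the positive quadrant of Figure 1C.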
  • Figure 1C is a conceptual illustration of the natural representation of the state space of human psychology.
  • the starting space spanned by Fear and Desire is a single quadrant of the complex plane, since no negative attraction or reluctance exists and Fear and Desire are independent and not each other's opposite or inverse.
  • the human ear and eye, which, together with the internal states of the cortex, are the source of a certain phenomenon yielding a specific perception, are both characterized by a logarithmic transmission from senses to brain. Therefore, internal emotion states can more naturally be represented on a logarithmic scale.
  • Representing the X and Y axes of Figure 1C logarithmically is a conformal, holomorphic transformation, subtracting the (0, ∞) mappings in both directions, meaning this transformation can be executed without losing validity of the final psychological space in the real-life emotional world.
  • the representation of such transformation is given in Figure 3.
  • Figure 3 is a graph illustrating morphing of the single quadrant phenomenon to the entire complex plane of the perception.
  • Figure 4 is a graph illustrating morphing of the entire complex plane of the perception to the cortical experience, represented by a Riemann complex half sphere.
  • the half complex Riemann sphere can be transformed holomorphically and conformally to the complex unit circle, being the equatorial circle of the Riemann sphere, using a projective Poincare model, yielding the end result of Figure 2.
  • the mathematical characteristics of these transformations suffice to secure mathematical validity of this new representation of Emotion and its eigenvector decomposition through the Fear coordinate f, the Desire coordinate d, the mood θ and the motivational strength m, all being scalars.
  • Orthogonal projections through circles, rather than straight lines are necessary to correctly determine the d and f coordinates.
  • Psychology, emotions, subjective buying and selling behavior and also the viewer-consumer psychology can therefore validly be described in terms of mood θ and motivational strength m.
Mapping Emotions to Right and Left Brain
  • FIG. 2 illustrates conceptually the Emotion disc with brain activity varying in function relative to the Real and Imaginary axis, therefore alternating per quadrant.
Transformations Yielding a State Space Path
  • TV viewing is visual and therefore a specialization or virtuous habit of the right brain hemisphere.
  • the right brain is specialized to deal virtuously with Desire, and, when it deals with Fear, it does it typically viciously, meaning TV viewing should be positioned in the bottom right quadrant of the Emotion disc, where the right brain is active and the left brain is passive.
  • An important consequence, therefore, is that TV user interfacing should be right brain interfacing .
  • Figure 6A illustrates conceptually the effect of an undesirable TV user interface experience, represented as a path in the Emotion disc starting at relaxed mood and ending in an angry mood.
  • if the viewer's user interface contains too much textual content or requires the user to navigate sequentially through pull-down menus, wizards or other typical personal computer operating system based user interfaces,
  • the left brain will have to be activated. Therefore the position in the Emotion disc moves up from the right bottom to the right top quadrant.
  • this causes frustration which is a negative emotion categorized under Fear.
  • the viewer will stop being relaxed, reducing his or her motivational strength.
  • the left brain typically controls and therefore suppresses the right brain. Therefore the desire coordinate will be reduced.
  • the viewer will get angry at the provider of the TV services or content who is forcing him through a user interface that is perceived as hostile. Successfully soliciting a purchase from an angry person is not entirely impossible, although very difficult.
  • a left brain interface inhibits Video on Demand (VOD) sales and other sales over TV from growing Desire, actively frustrates existing Desire and creates Fear.
  • a certain level of Desire is represented by the Desire coordinate d, Fear is represented by the Fear coordinate f, and the motivational strength is represented by m.
  • For business-to-business sales Desire should be seeded, and, when Desire starts growing, Fear should be actively hedged and sometimes created to close a business sale. In consumer sales of e.g. distributed non-proprietary products, Desire can simply be harvested but Fear should still be hedged. Aggregation of content across all channels, including the Internet and other media sources, to screen the entire content market is first performed, followed by ranking based on the viewer's Desire, that is, the desire coordinate attached to the reference to the content.
  • Reducing the Fear component f is preferably done in a Left Brain Activity environment, such as with a text based work environment, and not during viewing. Accordingly, viewing and surfing behavior is well suited to model the d coordinate, while active text based input is suited to model the f coordinate.
  • the foregoing concepts for modeling of Desire and Fear vectors relative to their mapping on the Emotion disc can be performed with a unique neuropsychological modeling engine as described herein.
  • Such a modeling engine serves as a mechanism by which content objects may be ranked given unconscious measurement of a subject's (viewer's) viewing and surfing behavior and/or conscious user feedback.
  • the specialized set of user interfaces described herein may be utilized to enable multidimensional surfing of the previously ranked content objects.
  • a channel in accordance with the disclosure comprises one or more groups of content objects which have been specifically selected according to a viewer's subjective preferences and mood and arranged in order from lowest to highest emotional motivation for the viewer to select and view such content.
  • a viewer or group of viewers, such as a family, may have multiple personalized channels that comprise content programs which have been autonomously aggregated and screened according to their personal interests using the modeling system 35 disclosed herein and which are viewable using the user interface application controls associated with the viewer system 32 described herein, or which are created by mixing individual channels.
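  • By way of a hedged sketch (the names and the ordering key are chosen here for illustration and are not prescribed by the disclosure), the per-channel arrangement from lowest to highest emotional motivation could be realized by sorting a channel's content objects on their modelled motivational strength:

    import math

    def motivational_strength(d, f):
        # m = sqrt(d^2 + f^2), the modelled motivational strength (see Figure 1C).
        return math.hypot(d, f)

    def arrange_channel(content_objects):
        # Arrange a channel's content objects from lowest to highest emotional
        # motivation; each object carries its modelled (d, f) coordinates.
        return sorted(content_objects,
                      key=lambda c: motivational_strength(c["d"], c["f"]))

    channel = arrange_channel([
        {"title": "documentary",  "d": 1.4, "f": 0.6},
        {"title": "series ep. 3", "d": 2.0, "f": 0.4},
        {"title": "news",         "d": 0.9, "f": 1.1},
    ])
    print([c["title"] for c in channel])   # ['news', 'documentary', 'series ep. 3']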
  • FIG 7 illustrates conceptually a network environment 38 in which the neuropsychological modeling engine disclosed herein may be implemented.
  • Network environment 38 comprises one or more private networks 31 and a public wide area network (WAN) 30, such as the Internet.
  • Private networks 31 may be implemented with any known networking technology such as a cable packet network from a cable service provider or a packet-switched local area network (LAN), or wireless network.
• Public network 30 may comprise a collection of other networks utilizing any currently known networking technology, including wireless, optical, etc.
• Operably coupled to each of networks 31 and 30 are a content provider 34, a viewer system 32 and a modeling system 35 which contains the neuropsychological modeling engine disclosed herein.
  • additional content providers 36 and 37 are also connected to network 30 as well as an additional viewer system 33.
  • the viewer systems 32 and 33 may be implemented as described with reference to Figure 11.
  • Figure 8 illustrates conceptually a block diagram of modeling system 35 which contains neuropsychological modeling engine 41.
  • system 35 outlined in phantom, comprises a pair of gateways 44 and 45 connecting system 35 to networks 30 and 31, respectively.
  • system 35 further comprises a server platform 40 and one or more databases 46-48.
• Server 40, which may be implemented with a single server or multiple servers, executes neuropsychological modeling engine 41, ranking application 42 and behavior modeler 49, all of which communicate with each other as well as with databases 46-48 and other entities through network interface 43, which couples server 40 to databases 46-48, as well as networks 30 and 31.
  • Database 46 may be utilized to store records or other data structures representing the neuropsychological model of one or more viewers associated with the viewer system 32, as well as other viewer systems.
  • Database 47 may be utilized to store the content objects, e.g. the files of various multimedia content, available for viewing by the viewer systems 32. Database 47 may also store metadata associated with the respective content files.
• Figure 10C illustrates conceptually an exemplary content object metadata file 75.
  • Database 48 may be utilized to store one or more channels 90A-C which hold the rankings or orders of multiple content objects associated with channel model(s) 72 and viewer model 70 .
• although each of databases 46-48 is illustrated as a single database, it is contemplated herein that any of them may be implemented with a number of databases in different configurations, including distributed, redundant and peer-to-peer continuously migrating configurations.
  • the data from one or more of databases 46-48 may be combined into a single database. For example, the ranking of content associated with a specific viewer channel model may be stored along with the data defining the viewer model.
  • each of databases 46-48 may include their own respective database server for interfacing with server 40 or may share a database server.
  • Figure 9D illustrates conceptually the elements of an embodiment of modeling system 35 necessary for the derivation of the relationship between metadata associated with a content object and an individual viewer model relative to the ranking of the content object associated with the particular channel model.
  • each content object stored in database 47 has associated therewith a metadata file 75 which contains various data parameters describing the content of the file, such as the format, duration, title, genre, actor, producer, year of initial release, etc. Any number of different data structure formats may be utilized for this particular structure.
  • Such content file metadata files may also be stored in database 47.
• each individual viewer (or group of viewers, e.g. a family) associated with viewer system 32 has associated therewith a viewer model 70 which contains data describing the behavior model, comprising viewer metadata such as gender, age, occupation, product/description service level, etc., and idealized preferences for the viewer (or group of viewers) in terms of genre, actors, specific series, areas of interest, past selection history, viewing duration or other parameters.
  • Figure 10A illustrates a sample data structure which may be used to implement the behavior model 70 for a specific viewer (or groups of viewers).
  • viewer metadata files may be stored in database 46.
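• Purely as an illustration, the content object metadata file 75 and viewer model 70 described above might be represented with simple record structures such as the following Python sketch; the field names and types are illustrative assumptions rather than part of the disclosure.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ContentMetadata:
        # Sketch of a content object metadata file 75: descriptive parameters
        # such as format, duration, title, genre, actors, producer, year, etc.
        content_id: str
        title: str
        genre: str
        duration_minutes: int
        actors: List[str] = field(default_factory=list)
        producer: str = ""
        release_year: int = 0

    @dataclass
    class ViewerModel:
        # Sketch of a viewer model 70: viewer metadata plus weighted
        # preferences (e.g. on a 0-100 scale) and a viewing history.
        viewer_id: str
        gender: str = ""
        age: int = 0
        occupation: str = ""
        # preference name -> weighted preference value
        preferences: Dict[str, int] = field(default_factory=dict)
        # reverse-chronological list of (event, date, elapsed_seconds)
        history: List[tuple] = field(default_factory=list)

    # Example usage:
    movie = ContentMetadata("c42", "Example Film", "drama", 110, ["Actor A"])
    viewer = ViewerModel("v1", preferences={"genre:drama": 80, "actor:Actor A": 65})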
  • search engines such as Google, Bing, Yahoo, etc. create ontologies of reality.
  • Ontologies are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it.
  • search engines create an objective index of content representing reality, such indexed content may be stored in one or more databases as represented in Figure 9A by database 60.
• database 60 and the indexed content may or may not be part of modeling system 35 but may be accessible thereby through a public or private network.
  • Figures 9E-F illustrate the process flow between components of modeling system 35 to update a viewer's model and channel model, retrieve new content and determine if such content is suitable for ranking according to the system model of the viewer's emotional motivation.
  • viewer behavior including events such as requesting a specific program, completion of the viewing of a content object, storing, or purchasing of content, management of a channel, causes viewer system 32 to send event data packet(s) to behavior modeler 49 of modeling system 35 as illustrated by arrow A of Figure 9D and decisional blocks 61 of Figure 9E.
• behavior modeler 49 modifies the viewer model 70 associated with the specific viewer, as illustrated by process block 62A, and, if necessary, channel model(s) 72, as illustrated by process block 62B, both of Figure 9E.
  • the event data received by behavior modeler 49 may include an identifier of the content object which was the subject of the event, the elapsed viewing time of the content object, a descriptor of an action such as storing, purchasing, changing the order of, specifying a like/dislike of, or deleting such content object, and identifiers of the channel by which the content object was manipulated, and an identifier of the subject viewer or viewers.
• the event data received by behavior modeler 49 may include the channel by which the content object was manipulated (since a content object may belong to multiple channels). Also, if the event is an implicit event, the event data received by behavior modeler 49 may include the timestamp of the action (elapsed time may be calculated at the source of the content object data stream, since actions such as fast-forward and/or rewind are mapped to start/stop in order to calculate the cumulative viewing time) and the position in the content object, e.g. after x seconds. If the event is an explicit event regarding channel management, the event data may contain an identifier of the channel that is being added, removed or changed and/or the search term/keyword associated with the change. If the event is an explicit event regarding one of the dedicated feedback (colored button) commands described herein, the event data may contain identifiers of any of the command/button, content object and channel.
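• As an illustrative sketch only, an event data packet of the kind described above might be represented as follows; the field names are assumptions chosen for readability and do not imply a required wire format.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ViewerEvent:
        # Sketch of an event data packet sent from viewer system 32 to
        # behavior modeler 49; all field names are illustrative assumptions.
        viewer_id: str
        content_id: Optional[str]      # object that was the subject of the event
        channel_id: Optional[str]      # channel by which the object was manipulated
        action: str                    # e.g. "view", "store", "purchase", "like",
                                       # "dislike", "delete", "channel_update"
        elapsed_seconds: Optional[int] = None   # cumulative viewing time (implicit)
        timestamp: Optional[float] = None       # when the action occurred (implicit)
        position_seconds: Optional[int] = None  # position within the content object
        search_term: Optional[str] = None       # for channel-management events

    # Example: the viewer purchased content object "c42" from channel "ch7".
    event = ViewerEvent("v1", "c42", "ch7", "purchase", elapsed_seconds=540)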
  • Figure 10B1 illustrates conceptually a data structure defining an exemplary channel model 72C.
  • Behavior modeler 49 then retrieves from database 46 the model associated with the specific viewer and the metadata file 72C defining the channel.
  • behavior modeler 49 also retrieves from database 47, the metadata file describing the content object.
• behavior modeler 49 compares the received event data with metadata file 75 of the content object and the current viewer model 70 and modifies the channel model(s) 72C appropriately (indicated by the circular arrow within behavior modeler 49), as illustrated by process blocks 62A and 62B of Figure 9E.
• the viewer model 70 is modified and, optionally, the channel model may also be modified, as would be the case for channel management and search term changes.
  • modifying the viewer model 70 may be performed with the following algorithm.
• Each event is mapped onto the mood disc 20 according to a prescribed rule, e.g. purchase of a content object results in a predefined θ and m value (or equivalent Fear coordinate f and Desire coordinate d).
• in a first step, the location on the mood disc 20 of the content object is determined for a particular user.
  • the position of that content object on the personal mood disk of that viewer may differ from its default, general starting position.
  • the default starting positions themselves may also shift, based on collaborative data as outlined in the following examples.
• individual refinements are based on implicit and explicit data. Imagine a viewer who mainly watches content objects which are typically considered relaxing, and in between also regularly watches the daily news. If he/she displays similar viewing patterns for both the series and the news, a presumption can be made that the daily news is also relaxing for him/her, and the daily news can be (gradually) moved from the passionate area to the relaxed area of the mood disk of that particular viewer.
  • a linear combination of the metadata of the content object results in the defined Fear coordinate f and the Desire coordinate d.
  • matrix O may look like:
• this system of equations must be solved. Based on the sizes of m and n and/or the rank of matrix O, an algorithmic routine is applied (either a direct or iterative solver from numerical linear algebra, e.g. a least squares solution) to determine each coefficient x_i.
• An analogous system of equations may be used to calculate the desire coefficients. Because the coefficients of the viewer model are updated based on new events, a change in the fear and/or desire of the viewer can be accommodated by giving a lower weight to the oldest equations or discarding them from the system to be solved.
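• By way of illustration only, the coefficient-update step described above might be sketched as an ordinary (optionally weighted) least-squares solve, under the assumption that matrix O has one row per observed event, one column per ontology component, and that the right-hand side holds the Fear (or, analogously, Desire) coordinate assigned to each event; the function and variable names are illustrative.

    import numpy as np

    def update_coefficients(O, targets, weights=None):
        """Solve O @ x ~= targets in the least-squares sense.

        O       : (m, n) matrix, one row per event, one column per ontology
                  component (values taken from the content object metadata).
        targets : length-m vector of Fear (or Desire) coordinates assigned
                  to each event by the prescribed mapping rule.
        weights : optional length-m vector; lower weights de-emphasise
                  older events, as described above.
        Returns the length-n coefficient vector of the viewer model.
        """
        O = np.asarray(O, dtype=float)
        targets = np.asarray(targets, dtype=float)
        if weights is not None:
            w = np.sqrt(np.asarray(weights, dtype=float))
            O = O * w[:, None]
            targets = targets * w
        x, *_ = np.linalg.lstsq(O, targets, rcond=None)
        return x

    # Illustrative use with 3 events and 2 ontology components ("news", "drama"):
    O = [[1.0, 0.0],   # event 1 involved a news item
         [0.0, 1.0],   # event 2 involved a drama item
         [1.0, 1.0]]   # event 3 involved an item tagged with both
    fear_targets = [0.2, 0.6, 0.4]
    fear_coeffs = update_coefficients(O, fear_targets, weights=[0.5, 1.0, 1.0])

    # Given the coefficients, the Fear coordinate predicted for a new content
    # object is the corresponding linear combination of its ontology values:
    new_object = np.array([1.0, 0.0])
    predicted_f = float(new_object @ fear_coeffs)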
• modifying the channel model 72 can be performed upon explicit events, such as a viewer-initiated modification of the channel model with the left brain user interface described herein.
  • a viewer initiated event to create/update/delete a channel results in creating/updating or deleting the channel record.
  • a viewer initiated event to modify the search terms/ keywords associated with the channel results in updating the filter values associated with that channel.
  • a viewer initiated event to explicitly modify the "mood" associated with the channel results in updating the Fear and Desire coordinate value associated with that channel (a default value assumption is that the viewer watches the channel in "relaxed" mood).
• Modifying the channel model 72 can be performed upon implicit events as well. For example, if it is determined that the content objects that a user watches in a certain channel tend to be located in another region of the mood disk than the region associated with the channel's mood vector, the mood vector may be changed, e.g. from the "relaxed" mood to the "passionate" area. If the modified viewer model has strong coefficient values for a number of ontology components that are not yet part of a channel's filter criteria, a new channel is created (for suggestion to the viewer) that has these components as the filter values.
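• As a minimal sketch only, the implicit channel-model updates described above might look as follows; the drift rate, distance threshold and coefficient cut-off are assumptions, not prescribed values.

    import math

    def drift_channel_mood(channel_mood, watched_moods, rate=0.1, threshold=0.3):
        """Nudge a channel's (f, d) mood vector toward the average mood of the
        content objects actually watched in that channel, if it has drifted."""
        if not watched_moods:
            return channel_mood
        avg_f = sum(m[0] for m in watched_moods) / len(watched_moods)
        avg_d = sum(m[1] for m in watched_moods) / len(watched_moods)
        if math.hypot(avg_f - channel_mood[0], avg_d - channel_mood[1]) > threshold:
            return (channel_mood[0] + rate * (avg_f - channel_mood[0]),
                    channel_mood[1] + rate * (avg_d - channel_mood[1]))
        return channel_mood

    def suggest_new_channel(viewer_coefficients, existing_filters, min_coeff=0.5):
        """Propose a new channel whose filter values are the strong ontology
        components of the viewer model not yet covered by any channel filter."""
        strong = {c for c, v in viewer_coefficients.items() if v >= min_coeff}
        covered = set()
        for filters in existing_filters:
            covered |= set(filters)
        remaining = strong - covered
        return sorted(remaining) or None

    # Example: a viewer with a strong "daily news" component not yet filtered.
    print(suggest_new_channel({"daily news": 0.8, "drama": 0.3},
                              existing_filters=[{"drama"}]))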
• Upon certain events, e.g. periodically (a timer event), a viewer event, or a content event (e.g. new VOD content becoming available), the modeling engine 41 is run. As a first step, modeling engine 41 performs content based filtering based on the viewer and channel models. The modeling engine 41 requests from database 60 any indexed content material that may be relevant, as illustrated by arrow D of Figure 9D and process blocks 63 of Figure 9F. In an exemplary embodiment, modeling engine 41 formulates and formats the database queries provided to database 60. Referring to Figure 10A1, queries can be based on any combination of ontology components (having strong coefficient values in the viewer model) and filter criteria from the channel model (ranging from simple criteria like "broadcasted by X" to criteria linked to viewing context stored in the viewer model, e.g. "similar to items I like to watch on Friday evening").
  • modeling engine 41 may be programmed to interact with the querying format of any number of different indexed content sources or content libraries, such as YouTube and various popular Web search engines, in addition to more traditional content providers such as cable service providers, VOD providers, etc....
  • Database 60 or other content source returns the metafiles for one or more content object satisfying the query to modeling engine 41, as illustrated by arrow E of Figure 9D.
• Neuropsychological modeling engine 41 examines the metadata file for each content object retrieved, and, in conjunction with the viewer's metadata file and/or channel model, calculates where on the mathematical model of human emotion, i.e. the mood disc 20, described previously with reference to Figures 1A-6D, the viewer's mood and motivational strength are relative to that specific content object. Specifically, modeling engine 41 examines the various values of the parameters within the metadata file for the content object, such as the genre of the program, actor, title, director, etc. and maps these onto the corresponding components of the ontology used. Based on the coefficient corresponding to each selected component available in the viewer model (as calculated by process block 62A), the mood disk Fear coordinate f and Desire coordinate d for this content object are computed.
• for each channel model associated with the viewer model, ranking application 42 assesses whether the content object satisfies the filter criteria for the channel.
• the similarity of each selected content object's mood vector to the mood vector associated with this channel is calculated using the "cosine similarity measure". This measure allows application 42 to rank the content objects selected for this channel relative to one another according to their similarity with the channel's mood vector.
  • a "collaborative filtering" post-processing step to update the rank of content objects in the selection of engine 41 for this channel and viewer - similar to traditional hybrid recommendation algorithms (collaborative and content based filtering) algorithms. Specifically, ranks of objects (a certain selection of e.g. low rank objects) based on the viewing behavior (e.g.
  • the "similarity" of the viewers is calculated not only based on preferred content objects and a correspondence in preferred content object metadata but also the correspondence in the mood disk stored in the viewer's model.
• viewer similarity is calculated using the "cosine similarity" applied to vectors comprising the fear and desire coefficients of the respective viewers.
  • the "content based” and “collaborative filtering” mechanisms may be combined in different ways e.g . a different sequence of steps or parallel.
  • the next step is a cut-off of the lower ranked content objects according to certain cut-off criteria.
• these criteria can be "after a certain similarity measure value all content is omitted" or "after a certain number of content objects"; such a value can also be dynamically calculated by the system.
• a sorting operation can be done on the remaining content objects for this channel, given certain sorting criteria (e.g. time of broadcasting, oldest first or last). Note that the channel content may also be enriched with content added by the program director.
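• By way of illustration only, the cosine-similarity ranking, cut-off and sorting steps described above (and the cosine measure of viewer similarity used for collaborative filtering) might be sketched as follows; the cut-off value and sorting key shown are assumptions.

    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two equal-length vectors (e.g. mood
        vectors, or concatenated fear/desire coefficient vectors of viewers)."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def rank_channel(content_moods, channel_mood, cutoff=0.5, sort_key=None):
        """Sketch of the per-channel ranking step.

        content_moods : dict content_id -> (f, d) mood vector computed for the viewer.
        channel_mood  : (f, d) mood vector of the channel.
        cutoff        : similarity below which content is omitted (could also
                        be expressed as a fixed number of objects).
        sort_key      : optional final sorting criterion, e.g. broadcast time.
        """
        ranked = sorted(
            ((cid, cosine_similarity(mood, channel_mood))
             for cid, mood in content_moods.items()),
            key=lambda pair: pair[1], reverse=True)
        kept = [(cid, sim) for cid, sim in ranked if sim >= cutoff]
        if sort_key is not None:
            kept.sort(key=lambda pair: sort_key(pair[0]))
        return kept

    # Illustrative use: object "c2" falls below the cut-off and is omitted.
    moods = {"c1": (0.1, 0.9), "c2": (0.8, 0.2), "c3": (0.2, 0.7)}
    print(rank_channel(moods, channel_mood=(0.15, 0.8)))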
  • Channel 90 may be implemented using the data structure 95 illustrated in Figure 12C in conjunction with any number of other data structures, including bidirectional stacks, doubly linked lists, relational database records, etc. and contains a plurality of entries for holding any of an address, identifier or link to the actual file containing the multimedia content in database 47. Note any number of different channels may be associated with the same viewer.
  • the process performed by modeling engine 41 is performed for each content object and for each channel associated with a specific subject viewer.
  • the rankings of content objects for a specific viewer can be updated periodically, for example, daily, every 8 hours, etc.
• rather than computing values for the Fear coordinate f and Desire coordinate d for every content object, neuropsychological modeling engine 41 may utilize a look-up table which, given the weighted input values of the dominant preferences from a channel model 72 and viewer model 70, generates appropriate values for the Fear coordinate f and Desire coordinate d.
• Figures 10A, 10A1, 10B, 10B1, 10C, and 10C1 illustrate conceptually the data structures utilized by neuropsychological modeling engine 41, ranking application 42 and behavior modeler 49 to create ranking of content objects.
  • Figures 10A and 10A1 collectively illustrate a conceptual viewer metadata file 70.
• the viewer metadata file 70 also contains information useful to behavior modeler 49 and neuropsychological modeling engine 41, such as a list of preferences for any of actors, genres, producers, specific topics of interest, and specific topics of disinterest, any of which has associated therewith a type identifier and a weighted preference value, usually an integer value selected from a range of possible values, e.g. on a scale of 0 to 100.
• viewer metadata file 70 may further comprise a list of specific system events, typically arranged in reverse chronological order, with each entry defining the nature of the event, the date the action was taken and, optionally, an elapsed time value.
  • Figures 9B-C illustrate the process flow between components of modeling system 35 to update a viewer's model and channel model, retrieve new content and determine if such content is suitable for ranking according to the system model of the viewer's emotional motivation according to another embodiment of the disclosure.
  • viewer behavior including events such as requesting a specific program, completion of the viewing of a content object, storing, or purchasing of content causes viewer system 32 to send event data packet(s) to behavior modeler 49 of modeling system 35 as illustrated by arrow A of Figure 9A and decisional blocks 61 of Figure 9B.
  • behavior modeler 49 modifies the channel model(s) 72, and, if necessary, viewer model 70 associated with the specific viewer, as illustrated by process blocks 62.
  • the event data received by behavior modeler 49 may include an identifier of the content object which was the subject of the event, the elapsed viewing time of the content object, a descriptor of an action such as storing, purchasing, changing the order of, specifying a like/dislike of, or deleting such content object, and identifiers of the channel to which the content object belongs, along with its ranking, and an identifier of the subject viewer or viewers.
  • Behavior modeler 49 then retrieves from database 46 the model associated with the specific viewer and the metadata file 72A defining the channel.
  • behavior modeler 49 also retrieves from database 47, the metadata file describing the content object.
• behavior modeler 49 compares the received event data with metadata file 75 of the content object and the current viewer model 70 and modifies the channel model(s) 72 appropriately (indicated by the circular arrow within behavior modeler 49), as illustrated by process block 62 of Figure 9B.
  • neuropsychological modeling engine 41 periodically requests the metadata file describing the current channel model associated with the viewer, as illustrated by arrows B and C of Figure 9A.
  • neuropsychological modeling engine 41 uses the metadata file describing the current channel to request from database 60 any indexed content material that may be relevant, as illustrated by arrow D of Figure 9A and process blocks 63 of Figure 9B.
  • neuropsychological modeling engine 41 examines the metadata file describing the current channel model and formulates and formats the database queries provided to database 60.
  • modeling engine 41 may be programmed to interact with the querying format of any number of different indexed content sources or content libraries, such as YouTube and various popular Web search engines, in addition to more traditional content providers such as cable service providers.
  • Database 60 or other content source returns the metafiles for one or more content object satisfying the query to modeling engine 41, as illustrated by arrow E of Figure 9A.
• Neuropsychological modeling engine 41 examines the metadata file for the content object, and, in conjunction with the viewer's metadata file and/or channel model, calculates where on the mathematical model of human emotion, i.e. the emotion disc, described previously with reference to Figures 1-6, the viewer's mood and motivational strength are relative to that specific content object. Specifically, modeling engine 41 examines the various values of the parameters within the metadata file for the content object, such as the genre of the program, actor, title, series, etc. and, in light of the metadata file associated with the viewer, specifically any preferences, and the channel model, having been updated in light of any preceding behavioral events, computes where on the emotion disc the Fear coordinate f and the Desire coordinate d reside.
• the angular position representing the viewer's mood θ and the effect of the object on the viewer's mood and the motivational strength m are determined using the mathematical relationships disclosed herein, as illustrated by process block 64 of Figure 9B. If the resulting mood value θ is located in a desirable angular position on the emotion disc, based on the desired result, i.e. selection of the program or purchasing of the content, the content object qualifies for the channel in question and neuropsychological modeling engine 41 provides the motivational strength value m and the content object metafile to ranking application 42, as illustrated by arrow F of Figure 9A and process block 68 and the "Y" branch of decisional block 65 of Figure 9B.
• modeling engine 41 recomputes the mood value θ for any other channel model associated with the same viewer model using the previously described process until there are no more channel models associated with the viewer, as illustrated by process block 67 and the "Y" branch of decisional block 66 and the "N" branch of decisional block 65 of Figure 9B.
• neuropsychological modeling engine 41 compares the next content object within the query results from database 60 to each of the channel models 72, as indicated by the "Y" branch of decisional block 71 and process block 73 of Figure 9C. Once all content objects have been compared to all channel models 72 associated with a particular viewer, modeling engine 41 then utilizes the model of the next channel associated with the viewer model to generate another set of queries to database 60, in the manner as previously described. Thereafter, the process from process blocks 63 onward repeats, as described previously, relative to the next channel model associated with the same viewer model.
  • Ranking application 42 examines the m value provided by neuropsychological modeling engine 41 and generates a value representing the relative ranking of the content object relative to other content objects in the data structure associated with the specific viewer channel 90.
  • Channel 90 may be implemented using the data structure 95 illustrated in Figure 12C in conjunction with any number of other data structures, including bidirectional stacks, doubly linked lists, etc. and contains a plurality of entries for holding any of an address, identifier or link to the actual file containing the multimedia content in database 47. Note any number of different channels may be associated with the same viewer.
  • the process performed by modeling engine 41 is performed for each content object and for each channel associated with a specific subject viewer.
  • the rankings of content objects for a specific viewer can be updated periodically, for example, daily, every 8 hours, etc.
• rather than computing values for the Fear coordinate f and Desire coordinate d for every content object, neuropsychological modeling engine 41 may utilize a look-up table which, given the weighted input values of the dominant preferences from a channel model 72 and viewer model 70, generates appropriate values for the Fear coordinate f and Desire coordinate d.
• Figures 10A-C illustrate conceptually the data structures utilized by neuropsychological modeling engine 41, ranking application 42 and behavior modeler 49 to create ranking of content objects.
  • Figure 10A illustrates conceptually a viewer metadata file 70.
  • the viewer metadata file 70 also contains information useful to behavior modeler 49 and neuropsychological modeling engine 41, such data as a list of preferences to any of actors, genres, producers, specific topics of interest, specific topics of disinterest, any of which has associated therewith a type identifier and a weighted preference value, usually an integer value selected from a range of possible values, e.g. on a scale of 0 to 100.
• viewer metadata file 70 may further comprise a list of specific system events, typically arranged in reverse chronological order, with each entry defining the nature of the event, the date the action was taken and, optionally, an elapsed time value.
• Figure 10B illustrates conceptually an exemplary channel model 72 comprising metadata file portion 72A and accompanying bucket buffer area 72B for data relevant to a particular viewer's viewing history, but delineated on a preference-by-preference basis.
  • the metadata file portion 72A of channel model 72 contains a list of dominant preferences and accompanying values, usually an integer value selected from a range of possible values, e.g. on a scale of 0 to 100, as well as sub-dominant preferences and respective accompanying values.
  • Bucket portion 72B of channel model 72 in an exemplary embodiment, contains multiple sub-bucket areas each containing its own preference identifier and storage area for event data.
  • Such data may be contained within the bucket in an unsorted or chronological order, but in a format which is recognizable by behavior modeler 49 and neuropsychological modeling engine 41.
• specific parameters such as favorite actor, favorite genre, specifically requested topics, content most purchased or stored, etc., may have historical data factored into a respective preference value, and a determination of which parameters will be weighted most heavily within a specific viewer's channel is made by behavior modeler 49 accordingly.
• behavior modeler 49 will determine the nature of each event from viewer system 32 and consider the metadata associated with the content object, the viewer model, and the dominant preferences of channel model metadata file 72A, the relationships between which may have been previously derived and embodied into a predetermined formula to achieve the most accurate representation of a viewer's emotional motivation for a particular content object. Behavior modeler 49 then manipulates the respective weights of one or more dominant and subdominant preferences within channel model metadata file 72A. For example, the repeated viewing of a movie with a particular actor will cause an increase in the weighted value of the dominant preference for that actor relative to other dominant and/or subdominant preferences, such as producer, specific genre, or category of interest within the channel model metadata.
  • the combined weight of dominant and subdominant preferences within channel model metafile 72A remains substantially constant while the respective weights of the individual constituent preferences may vary dynamically per viewing events.
  • the metadata parameters of a channel model 72 are being continually updated and compared with each other to determine which preferences are currently more heavily weighted given the immediate past viewing history of the viewer.
  • behavior modeler 49 will update the appropriate preference bucket areas within section 72B or instantiate a new bucket region within the model, the model being dynamically expandable. Behavior modeler 49 then determines based on the event whether any of the preference values associated with either the dominant or sub dominant preferences need to be modified, and makes any changes to the preference values in section 72A, if appropriate.
• behavior modeler 49 transmits the metadata portion 72A from which modeling engine 41 generates request queries using the dominant and sub-dominant preferences, after reviewing their respective accompanying values.
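• As a minimal sketch only, the weight manipulation described above, in which individual preference weights vary while their combined weight remains substantially constant, might look as follows; the boost value and the 0-100 scale are assumptions.

    def reinforce_preference(weights, preference, boost=5.0):
        """Increase the weight of one dominant/subdominant preference and
        renormalise so that the combined weight remains (substantially)
        constant.

        weights    : dict preference name -> weight (e.g. 0-100 scale).
        preference : the preference reinforced by the event (e.g. an actor
                     whose movie was watched again).
        """
        total = sum(weights.values())
        weights = dict(weights)
        weights[preference] = weights.get(preference, 0.0) + boost
        scale = total / sum(weights.values()) if weights else 1.0
        return {name: w * scale for name, w in weights.items()}

    # Example: repeated viewing of a movie with "Actor A" shifts weight toward
    # that actor at the expense of the other preferences.
    prefs = {"actor:Actor A": 40.0, "genre:comedy": 35.0, "producer:X": 25.0}
    prefs = reinforce_preference(prefs, "actor:Actor A")
    assert abs(sum(prefs.values()) - 100.0) < 1e-6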
  • the recommendation and modeling system 35 disclosed herein may utilize both content based filtering and collaborative filtering techniques for identifying potential content objects of interest.
• the synergistic combination of the respective functionalities forms a discovery-engine-like functionality capable of identifying content objects which have a higher probability of selection by a viewer utilizing the systems described herein.
  • FIG 11A illustrates conceptually a viewer interface system 32 relative to public network 30, content provider sources 34 and 36 and modeling system 35 in accordance with the disclosure. Also illustrated in Figure 11A is the remote control 88 associated with display 80.
  • the viewer system 32 comprises a first or right brain user interface display 80, used predominantly for viewing of video content which, in the illustrative embodiment, may be implemented with television display 80 and an accompanying remote control 88.
  • Display 80 may be implemented with a "connected TV" or other devices that connect the TV to the networks 30 or 31 such as a connected Blu-ray player or a connected game console, e.g. a device capable of connecting directly to the Internet, e.g. network 30, as well as a cable packet network or satellite network, e.g. network 31.
  • Viewer system 32 further comprises a second or left brain user interface 84 which presents a content surfing interface and purchasing interface and may be implemented on a Personal Digital Assistant (PDA) or smart phone, tablet computer or even laptop computer.
  • Such second user interface predominantly uses and/or stimulates activity in the left hemisphere of the human brain, and also, to a limited extent, the right hemisphere of the human brain.
  • a viewer will typically utilize the second user interface 84 to perform activities such as storing, purchasing, changing the order of, specifying a like/dislike for a particular content object within the rankings of a channel 90.
  • Viewer system 32 further comprises optional, third and fourth user interface 86 and 87, respectively, capable of presenting both the textual based interfaces for content surfing and purchasing, as well as visual content and may be implemented with a traditional personal computer, including a desktop or laptop system, as well as other systems.
  • display 80 presents visual, non-textual information while one, two or all three of phone/PDA 84, personal computer 86, and/or tablet computer 87 display textual information, such as a representation of the content contained with channels 90A-C of Figure 12B, or other textual based data.
  • personal computer 86 and tablet 87 may also be used to display visual information.
  • the predominance of brain activity for the various user interfaces in viewer system 32 is indicated in the table below:
• Tablet 87: mainly Left, limited Left, full Right optionally
• Smartphone/PDA 84: mainly Left, limited Left, limited Right optionally
• Personal Computer 86: full Left, limited Right optionally
  • the elements of viewer system 32 may be implemented with existing commercially available technology.
• display 84 may be implemented with any number of smartphones or personal digital assistant devices including, but not limited to, the Apple iPhone and Android operating system based smartphones commercially available from any number of manufacturers including Samsung, HTC, Alcatel, Acer, Sony Ericsson, LG, Google Nexus, ZTE, Motorola, etc.
• Display 87 may be implemented with a tablet computer including, but not limited to, the Apple iPad and Android operating system based tablets, commercially available from any number of manufacturers including Acer, Archos, Dell, Motorola, Samsung, Sony, Toshiba, ZTE, etc.
• display 80 may be implemented with a connected TV, as well as traditional television display devices which rely on supplemental equipment, such as set top box 82, for connection to a source of content, including, but not limited to, those commercially available from any number of manufacturers including LG, JVC, Panasonic, Philips, Samsung, Sharp, Sony, etc.
  • Display 86 may be implemented with any number of computer systems including, but not limited to the Apple iMac and IBM PC compatible personal computers, commercially available from any number of manufacturers including Acer, Hewlett-Packard, Asus, Samsung, Sony, Dell, Toshiba, etc.
  • Set top box 82 may be implemented with any number of commercially available set-top box devices or gaming platforms of either an open architecture or proprietary architecture, depending on the source of the content accessed thereby, including those commercially available from any number of manufacturers including Sony Playstation, Apple Mac Mini, Nintendo Wii, Microsoft Xbox, etc.
• Remote 88 may be implemented with any number of standard design remote controls from TV manufacturers, or, alternatively, may be implemented with an aftermarket remote such as those manufactured by Logitech, Inc.
  • the traditional cursor navigation controls of remote 88 are utilized as the primary mechanism for surfing the channel(s) of previously aggregated and ranked content associated with the viewer's neuropsychological profile, as described previously.
  • the traditional functions of the cursor navigation control commands generated by remote control 88 may be overridden and/or redirected utilizing a redirection application 85 selectable with the remote or directly from the front panel of display 80.
  • Such programs may execute either directly on the processor and operating system of display 80 in case of a connected TV or other connected devices, or, alternatively, on the set top box 82 associated with display 80, or remotely on server 40 of modeling system 35 remotely connected to viewing system 32 through public network 30.
• each of the cursor navigation controls is redirected to initiate retrieval and review of a content object which has been previously ranked within a channel, as described herein.
  • Figure 11B illustrates the algorithmic processes performed by redirection application 85.
• application 85 waits for command signals sent remotely from remote control 88.
  • signals may be transmitted through either tangible electrical conductors or wirelessly through any number of technologies, including optical, microwave, etc.
  • Application 85 examines the data of a received signal, typically the field within a header file or data stream which identifies a command, to determine if the received signal associated with a received command identifies one of the signals to be redirected, such as the Up, Down, Left and Right cursor navigation signals of remote 88. If so, depending on which cursor navigation command is received, the redirection application 85 transmits to modeling system 35 the data necessary to identify the new content object to be viewed.
  • Such data may be implemented in any number of different techniques, such as with a memory off-set to a currently or recently viewed content object, with a sequence number identifying the next content object within the channel data structure 95, or with a resolvable link retrieved from the metadata file contents associated with the currently displayed object, as stored locally within viewer system 32 or remotely within modeling system 35.
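• By way of illustration only, the redirection behavior of application 85 might be sketched as follows, under the assumption that remote-control signals arrive as simple command strings; the helper functions passed in (send_to_modeling_system, forward_to_tv) are hypothetical stand-ins, not part of the disclosure.

    # Minimal sketch of redirection application 85.
    REDIRECTED = {"UP": +1, "DOWN": -1, "LEFT": "older", "RIGHT": "newer"}

    def handle_remote_signal(command, current_ref, send_to_modeling_system,
                             forward_to_tv):
        """Redirect cursor-navigation commands; pass everything else through."""
        if command not in REDIRECTED:
            forward_to_tv(command)          # normal TV behaviour is preserved
            return
        move = REDIRECTED[command]
        # The reference may be a memory offset, a sequence number within the
        # channel data structure, or a resolvable link from the metadata file.
        send_to_modeling_system({"current": current_ref, "move": move})

    # Example wiring with trivial stand-ins:
    handle_remote_signal("UP", "channel7/slot3",
                         send_to_modeling_system=print,
                         forward_to_tv=lambda cmd: None)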
  • FIG 11C illustrates the algorithmic processes performed by server application 51 of modeling system 35 upon receipt of handle or reference data from redirection application 85 identifying the next content object to be displayed .
• server application 51 resolves any addresses, links or references to the next content object to be displayed and then retrieves the metadata file associated with such content object, typically from database 47. Thereafter, the actual data associated with the content object is retrieved from database 47 and streamed to first user interface 80 of viewer system 32 via either public network 30 or private network 31, depending on the exact implementation of the system.
• server application 51 may start a timer to determine the elapsed time until streaming is terminated, typically when the next content object to be viewed is selected.
  • server application 51 Upon receipt of a command to terminate streaming, server application 51 transmits a value representing the elapsed time of the previously reviewed content object along with the metadata of the content object to behavioral model module 49 for updating of the viewer's behavioral model.
  • Other available commands may similarly cause content streaming to terminate and the viewer's behavioral model to be updated with the elapsed time, including, but not limited to, channel up/down, back button (results in starting another content object), pause, fast-forward, rewind (within the content object), etc.
  • server application 51 may examine the time code embedded within the header of the last streamed data packet to determine approximately how much of the content object was viewed by the viewer before streaming was terminated. Data representing the elapsed time based on this value can then similarly be sent to behavioral model module 49. Thereafter, a similar process occurs for identifying, retrieving and streaming the next content object to be viewed.
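• As an illustrative sketch only, the streaming bookkeeping of server application 51 might look as follows; the callback used to notify behavior modeler 49 and the session structure are assumptions.

    import time

    class StreamSession:
        """Sketch of streaming bookkeeping: track elapsed time via a timer
        and/or the timecode of the last streamed packet, and report it when
        streaming terminates."""

        def __init__(self, content_id, notify_behavior_modeler):
            self.content_id = content_id
            self.notify = notify_behavior_modeler
            self.start = time.monotonic()
            self.last_timecode = 0.0   # timecode of the last streamed packet

        def on_packet_streamed(self, timecode_seconds):
            # Track the embedded timecode so the viewed portion can be
            # estimated even if the timer is unavailable.
            self.last_timecode = timecode_seconds

        def terminate(self, use_timecode=False):
            elapsed = (self.last_timecode if use_timecode
                       else time.monotonic() - self.start)
            # Report elapsed viewing time and the object id so the viewer's
            # behavioural model can be updated.
            self.notify({"content_id": self.content_id,
                         "elapsed_seconds": round(elapsed, 1)})

    # Example: a session that streamed packets up to the 95-second mark.
    session = StreamSession("c42", notify_behavior_modeler=print)
    session.on_packet_streamed(95.0)
    session.terminate(use_timecode=True)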
  • Implicit data/events may include:
• Explicit data/events may include: providing feedback using the colored buttons on the remote control 88 (or an equivalent right brain user interface element of display 84, 86, 87).
• Additional commands that may result in transmission of a new content object include Double arrow left, Double arrow right, Back button, and the "OK" button (if the item is one that must be purchased, only a trailer is retrieved when accessing it using the arrows; OK triggers the transmission of the paid content).
• a multidimensional channel 90 is shown conceptually to illustrate multidimensional surfing of content along desire and time vectors 92 and 94, respectively, using traditional cursor navigation controls 91, 93, 95, and 97.
• channel 90 associated with a specific subject/viewer includes a first plurality of content objects C1t, C2t, C3t, C4t, C5t, ..., Cnt along a first dimension 92.
  • activation by the viewer of the Up cursor control 91 initiates viewing of the next content object in dimension 92 of channel 90 for which the subject/viewer will have an increased motivational desire to view or purchase the content thereof.
• activation by the viewer of the Down cursor control 97 initiates viewing of the next content object in dimension 92 of channel 90 for which the subject/viewer will have a decreased motivational desire to view or purchase the content.
• One or more of the first plurality of content objects C1t-Cnt have associated therewith, through links or references, a second plurality of content objects related chronologically along a second dimension 94 and which share one or more common metadata parameters.
• content object C4t has associated therewith a plurality of content objects C4t-1, C4t-2, C4t-3, C4t-4, ..., C4t-n arranged chronologically in a first direction, for example, sequentially in order of increasing age in the leftward direction.
• Content object C4t also has associated therewith a plurality of content objects C4t+1, C4t+2, C4t+3, C4t+4, ..., C4t+p arranged chronologically in a second direction, opposite the first direction, for example, in order of decreasing age in the rightward direction.
• activation by the viewer of the Left cursor control 93 initiates viewing of the next content object in the second dimension 94 backward in time, while activation of the Right cursor control 95 initiates viewing of the next content object forward in time.
  • the Up and Down cursor navigation controls 91 and 97, respectively, of remote 88 may be utilized to move through the content objects in the first dimension 92 that have been previously ranked by modeling system 35 associated with the currently viewed channel 90 while the Left and Right cursor navigation controls 93 and 95, respectively, of remote 88 may be utilized to surf backward or forward in time, respectively for content, for example, for past or future episodes of the same program currently being viewed or just viewed.
  • the second interface 84, third user interface 86, or fourth user interface 87 of viewer system 32 may also be utilized to access the content objects of either dimension 92 or 94 of a channel 90.
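• By way of illustration only, multidimensional surfing over a channel such as channel 90 might be sketched as follows, under the assumption that the ranked (horizontal) dimension is held as an ordered list and each item may carry its own chronological line; class and method names are illustrative.

    class ChannelCursor:
        """Illustrative cursor over a channel: 'ranked' is dimension 92
        (ordered from lowest to highest motivation) and 'time_lines' maps an
        item to its chronological dimension 94 (oldest first)."""

        def __init__(self, ranked_items, time_lines):
            self.ranked = ranked_items
            self.time_lines = time_lines
            self.row = 0          # position within the ranked dimension
            self.col = None       # None = currently on the ranked dimension

        def up(self):             # next object with increased motivation
            self.col = None
            self.row = min(self.row + 1, len(self.ranked) - 1)
            return self.current()

        def down(self):           # next object with decreased motivation
            self.col = None
            self.row = max(self.row - 1, 0)
            return self.current()

        def left(self):           # surf backward in time
            return self._move_in_time(-1)

        def right(self):          # surf forward in time
            return self._move_in_time(+1)

        def _move_in_time(self, step):
            item = self.ranked[self.row]
            line = self.time_lines.get(item, [])
            if not line:
                return self.current()
            if self.col is None:
                self.col = line.index(item) if item in line else 0
            self.col = max(0, min(len(line) - 1, self.col + step))
            return line[self.col]

        def current(self):
            if self.col is None:
                return self.ranked[self.row]
            return self.time_lines[self.ranked[self.row]][self.col]

    # Example: item C4t has past and future episodes along dimension 94.
    cursor = ChannelCursor(["C1t", "C2t", "C3t", "C4t"],
                           {"C4t": ["C4t-2", "C4t-1", "C4t", "C4t+1"]})
    cursor.row = 3                 # currently viewing C4t
    print(cursor.left())           # -> C4t-1 (one step back in time)
    print(cursor.right())          # -> C4t   (forward again)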
  • Figure 12B illustrates conceptually the implementation of channel 90 associated with a specific subject/viewer within database 48.
  • Channel 90 may comprise a plurality of channels 90A-C, stored in database 48 of modeling system 35.
  • channel 90A comprises a plurality of groups.
  • first dimension 92 of channel 90 in Figure 12A is illustrated by Group 1 in Figure 12B while second dimension 94 is represented by Group 2 of Figure 12B.
  • the content objects within Groups 1 and 2 may be linked depending on the nature of the implementation of each slot or ranking location within the channel data structure.
  • each of Groups 1-n may represent a single dimension. Note that a group may have multiple or single items therein.
  • Channels 90B and 90C may be implemented similar to or different than channel 90A.
  • Figure 12C illustrates conceptually a sample data structure 96 from which the groups within channels 90A-C may be constructed.
  • the structure 96 may be implemented as an object, record, file or other storage construct and may comprise a field or parameter identifying its associated content object, and an address or link resolvable to a storage location at which the actual content object may be retrieved.
  • data structure 96 may further comprise, optionally, a position value, identifying its position within the group/channel, as well as one or more links references or pointers to adjacent data structures.
  • Such adjacent data structures represent those content objects accessible within channel 90 along the first dimension 92 or second dimension 94 utilizing the cursor navigation controls of remote control 88 in conjunction with redirection application 85, as disclosed herein.
  • Data structure 96 may have none, one or multiple pointers or references associated therewith.
• Data structure 96 may further comprise a field or parameter identifying the viewer and/or channel with which the content object is associated.
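• Purely as an illustration, data structure 96 might be sketched as the following record; the field names mirror the description above and are assumptions.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class ChannelEntry:
        """Sketch of data structure 96: one slot within a channel group."""
        content_id: str                     # identifier of the content object
        location: str                       # address/link resolvable to the stored object
        position: Optional[int] = None      # optional position within the group/channel
        viewer_id: Optional[str] = None     # viewer and/or channel association
        channel_id: Optional[str] = None
        # zero, one or more references to adjacent entries, keyed by direction
        neighbors: Dict[str, "ChannelEntry"] = field(default_factory=dict)

    # Example: two chronologically adjacent entries linked along dimension 94.
    older = ChannelEntry("C4t-1", "db47://objects/C4t-1", position=3)
    current = ChannelEntry("C4t", "db47://objects/C4t", position=4,
                           neighbors={"left": older})
    older.neighbors["right"] = current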
• C1t, ..., Cnt is further referred to as "the horizontal dimension"; this is the main dimension of a channel. Content in this dimension is selected according to the ranking of the content; however, the ordering of the content could be motivational, in which case Cnt is the content with the highest rank, or time-based, in which case Cnt is the most recent item.
• C4t-3, ..., C4t-1 is the dimension that is entered when pressing the double left arrow once from the position of item C4t; content is related according to a certain metadata item, e.g.
• C4tu1, ..., C4tu3 is the dimension that is navigated to when pressing the up button when based on item C4t; note that in this "upper" dimension content with the highest motivation for viewing/buying is in the most accessible position, i.e. C4tu1, with decreasing motivation when going up.
• C4td1, ..., C4td3 is the dimension that is navigated to when pressing the down button when based on item C4t; note that in this "down" dimension content with the highest motivation is in the most accessible position, i.e. C4td1, with motivation decreasing when going down.
• content in the up dimension may be from one source (e.g. VOD), while content in the down dimension may be from another (e.g. YouTube).
• Figure 11D illustrates conceptually the algorithmic processes performed by viewer system 32 to perform the above-described navigation and display of content objects.
  • Figure 13A illustrates a plurality of viewer systems 32a-n operably coupled to both a content source 36 and a modeling system 35.
  • Viewer systems 32a-n may be implemented as described previously herein with the additional modification as described below.
  • modeling system 35 may be implemented as described previously herein.
• Content source 36 may be implemented as previously described herein with reference to source 60 of Figure 9A, which contains indexed content material, or any of content providers 34 or 37 of Figure 7, or may comprise any of a cable TV service provider accessed through a cable packet network, a satellite TV service provider accessed through a satellite network, or a live broadcast over the Internet (Internet TV).
  • Figure 13B illustrates an alternative conceptual network configuration, similar to Figure 13A, except that content file source 30 communicates with modeling system 35, in addition to, or in place of viewer systems 32a-n.
  • FIG 14 illustrates conceptually selected elements of viewer interface system 32 relative to public network 30, content provider source 36 and modeling system 35 in accordance with the disclosure.
  • the viewer system 32 comprises a first or right brain user interface display 80, used predominantly for viewing of video content which, in the illustrative embodiment, may be implemented with television display 80 and an accompanying remote control 88.
  • Display 80 may be implemented with a "connected TV" or other devices that connect the TV to the networks 30 such as a connected Blu-ray player or a connected game console, e.g. a device capable of connecting directly to the Internet, e.g . network 30, as well as a cable packet network or satellite network, e.g. network 31.
  • Viewer system 32 further comprises a second or left brain user interface 84 which presents a content surfing interface and purchasing interface and may be implemented on a Personal Digital Assistant (PDA) or smart phone, tablet computer or even laptop computer.
  • Such second user interface predominantly uses and/or stimulates activity in the left hemisphere of the human brain, and also, to a limited extent, the right hemisphere of the human brain.
  • television display 80 further comprises an application process 100 for interfacing with content provider source 36 and modeling system 35.
  • application 100 comprises modeling system interface process 102 and crawler process 104.
  • Modeling system interface process 102 enables viewer system 32 to interact with source 36 and modeling system 35 in a manner described hereafter with reference to Figures 13A-B.
  • Crawler process 104 interacts with process 102 and content source 36, and, where applicable, a scheduling application or electronic program guide function 106 associated with content source 36 in a manner described hereafter.
  • Crawler process 104 interacts with content source 36 and modeling system 35, via process 102, in the following manner.
  • Crawler process 104 continuously queries scheduling function 106 associated with content source 36 to determine which content programs are currently accessible for download streaming from the content source 36 to viewer system 32. The determination of such accessibility will typically be defined by the viewer's subscription agreement with the content source provider.
  • crawler process 104 initiates download streaming of the content to display 80 and buffers a fractional percentage of the content in memory associated with display 80, along with selected metadata associated with content, including data identifying the content, and one or more temporal or sequential identifiers or markers identifying the specific portion of the content contained within the buffer, as illustrated by arrow A of Figure 13A.
• Figure 13C illustrates conceptually an algorithmic process to capture and upload content object fractions by viewer system 32.
  • Crawler process 104 then transmits to process 102, one or more packets of data containing the buffer content along with the information identifying the content, or, alternatively, provides the addresses in memory where such information is stored and accessible by both processes.
  • Process 102 appends to this information, a data structure 120, as illustrated in Figure 15 and transmits or streams such information to modeling system 35, as illustrated by arrow B of Figure 13A.
  • process 102 may query aggregation server 110 of modeling system 35 to determine if a complete copy of the content object already resides with the aggregation server database 112 or database 47.
  • process 102 will send only the data structure 120 to the aggregation server 110 to eliminate unnecessary network bandwidth utilization. If aggregation server 110 requires a specific segment of the content object, it will specify to process 102 the specific segment(s), identifiable by temporal or sequential identifiers. Process 102 will provide such information to crawler process 104 for forwarding and acquisition of the content to/from the source 36.
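• By way of illustration only, the crawler/upload negotiation described above might be sketched as follows; every callable passed into the function is a hypothetical stand-in for the corresponding system component.

    def crawl_and_upload(schedule, is_authorized, download_fraction,
                         server_has_copy, upload):
        """Sketch of the negotiation between crawler process 104 / process 102
        and aggregation server 110.

        schedule          : iterable of (content_id, metadata) available from
                            the content source's scheduling function 106.
        is_authorized     : returns True if the viewer may access the content.
        download_fraction : returns (fraction_bytes, start_marker, end_marker)
                            for a buffered portion of the content.
        server_has_copy   : asks aggregation server 110 whether a complete
                            copy of the content object already exists.
        upload            : sends a payload (metadata only, or metadata plus a
                            buffered fraction) to the aggregation server.
        """
        for content_id, metadata in schedule:
            if not is_authorized(content_id):
                continue
            if server_has_copy(content_id):
                # Only the identifying data structure is sent, saving bandwidth.
                upload({"content_id": content_id, "metadata": metadata})
                continue
            fraction, start, end = download_fraction(content_id)
            upload({"content_id": content_id, "metadata": metadata,
                    "fraction": fraction, "markers": (start, end)})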
• data structure 120A may comprise data identifying the content object and/or a portion thereof 122A, temporal or sequential identifiers associated with the content object 124A, and authorization indicia 126A identifying a viewer process.
  • data structure 120A may further optionally comprise data 128A identifying a user defined channel associated with the viewer process 127A and data identifying an encryption key 129A for decrypting the content object.
  • the authorization indicia 126A may take any number of different forms including one or more binary values arranged in a mask, special codes, keys, hash values, etc.
  • authorization indicia 126A may be received from the content source 36 or may be derived therefrom by process 102.
  • decryption keys or codes may be similarly provided to modeling system 35 by process 102 as part of the authorization indicia 126A.
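• As a minimal sketch only, data structure 120A might be represented as the following record; the field names are assumptions keyed to the reference numerals above.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class UploadDescriptor:
        """Sketch of data structure 120A accompanying an uploaded fraction."""
        content_id: str                              # 122A: content object / portion
        markers: Tuple[int, int]                     # 124A: temporal or sequential identifiers
        authorization: str                           # 126A: mask, code, key or hash value
        channel_id: Optional[str] = None             # 128A: user-defined channel (optional)
        decryption_key_id: Optional[str] = None      # 129A: key for decrypting the object

    descriptor = UploadDescriptor("c42", (120, 180), authorization="a1b2c3",
                                  channel_id="ch7")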
• the functionality performed by crawler process 104 is repeated continuously, while display device 80 is operably connected to content source 36, for all content to which the viewer process has access.
  • Process 104 may utilize the channel selection drivers associated with display 80 or any associated cable box 82, as applicable, to query source 36.
  • the functionality performed by crawler process 104 occurs typically without any video or audio content being read from the display buffer to the actual display itself. In this manner, such process may be conducted while the viewer is not utilizing the system, e.g. during system "down time” and transparently without the viewer being aware.
  • modeling system 35 further comprises an aggregation server 110 and accompanying database 112 and network streaming interface 114.
  • the data contained within the structure 120 received from process 102 of the viewer system 32 is utilized by aggregation server 110 to assemble a complete copy of the content object for retention within database 112 or 47, as applicable.
  • an application process within aggregation server 110 utilizes the temporal or sequential identifiers or markers associated with the content and arranges the received portion of the content according to its relationship to other portions previously received.
  • a complete copy of the content object is assembled from a plurality of viewer systems 32a-n and retained by modeling system 35 for later viewing upon request of any of the viewer systems 32a-n authorized to view such content.
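• Purely as an illustration, the assembly of a complete content object from fragments received from different viewer systems might be sketched as follows; offsets are treated as byte offsets and overlap handling is simplified.

    def assemble(fragments, total_length=None):
        """Merge fragments carrying (start, data) markers into one contiguous
        copy; returns the assembled bytes, or None if gaps remain."""
        fragments = sorted(fragments, key=lambda f: f[0])
        assembled = bytearray()
        expected = 0
        for start, data in fragments:
            if start > expected:
                return None                     # a portion is still missing
            # Skip any bytes that overlap with what was already received.
            assembled.extend(data[max(0, expected - start):])
            expected = max(expected, start + len(data))
        if total_length is not None and expected < total_length:
            return None
        return bytes(assembled)

    # Example: three fragments from three viewer systems, with an overlap.
    parts = [(0, b"abcd"), (3, b"defg"), (7, b"hij")]
    assert assemble(parts, total_length=10) == b"abcdefghij"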
• upon receipt of a viewing request, aggregation server 110 determines if the identified content object is stored in database 112. If so, streaming interface 114 verifies that the requesting viewer is authorized to view such content and, upon confirmation thereof, begins streaming the content to the requesting system 32, as illustrated by arrow C in Figure 13A.
• Figure 13D illustrates conceptually an algorithmic process for a request from a viewing system to the modeling system for viewing content object(s).
• Aggregation server 110 maintains within database 112 records for each viewer system 32 indicating which content objects within database 112 the viewer is authorized to download, such records being continually updated via processes 102 and 104 for each of the viewer systems 32a-n. In this manner, each of the viewer systems 32a-n authorized to view specific content may view the content at will, upon request, at a time which is not the same as the time frame in which the content provider, such as a cable service, makes such content available.
  • Figure 13B illustrates a second embodiment of the disclosed technique in which the content source 36 is operably coupled over a network with modeling system 35, and, specifically, aggregation server 110.
• content source 36 can upload to aggregation server 110 at least one copy of all or select content objects, thereby eliminating the need for each of viewer systems 32a-n to upload fractional portions of content to modeling system 35 in the previously described manner.
• Figure 13E illustrates conceptually an algorithmic process to upload content object metadata and fractions to the aggregation server.
  • crawler process 104 also continuously queries scheduling application 106 associated with content source 36 to determine which content programs are currently accessible for download streaming from content source 36 to viewer system 32. Again, the determination of such accessibility will typically be defined by the viewer's subscription agreement with the content source provider.
• Each time process 104 identifies content to which the viewer has legally authorized access, crawler process 104 initiates download of just the metadata associated with the content, including data identifying the content, as illustrated by arrow A of Figure 13B. Crawler process 104 then transmits to process 102 the information identifying the content. Process 102 appends to this information the data structure 120, and transmits such information to modeling system 35, as illustrated by arrow B of Figure 13B.
  • data structure 120 may comprise authorization indicia 126 received from the content source 36 or generated by process 102.
  • corresponding decryption keys or codes may be provided to modeling system 35 by process 102 as part of the authorization indicia 126.
  • the content available from source 36 is also stored in database 112 associated with aggregation server 110 and streaming interface 114.
  • aggregation server 110 maintains within database 112 records for each viewer system 32 indicating which content objects within database 112 the viewer is authorized to download, such records being continually updated via processes 104 and 102 of each of the viewer systems 32a-n.
  • streaming interface 114 will verify that the requesting viewer is authorized to view such content and, upon confirmation, begin streaming the content to the requesting viewer system 32, as illustrated by arrow C in Figure 13B.
  • FIG 39 illustrates conceptually a collaborative cloud DVR system 1133 which may be implemented in a network environment.
  • ccDVR system 1133 comprises at least one cloud storage system 1135 and a plurality of viewer interface systems 32a-n.
  • Cloud storage system 1135 may be implemented with any number of network storage technologies including those described previously herein.
  • cloud storage system 1135 may comprise a plurality of mass storage devices 1112A-C accessible by a server 1180 executing one or more control programs 1185.
  • One such cloud based computing and storage infrastructure service useful with the disclosed system is Amazon S3, commercially available from Amazon.com, Seattle, WA.
  • Cloud storage system 1135 may be implemented with mass storage devices such as the EMC Atmos line of products commercially available from EMC Corporation, Hopkinton, MA, USA.
  • FIG 38A illustrates conceptually a viewer interface system 32, similar to that described with reference to Figure 11A; however, viewer system 32 further comprises a Digital Video Recorder (DVR) device 1182 which is operably connected, directly or via public network 30, to content provider sources 34 and 36, modeling system 35, as well as a cloud storage system 1135, in accordance with the disclosure.
  • FIG 38B illustrates conceptually the internal architecture of DVR device 1182 which, in one embodiment, comprises a central processing unit 1502 (CPU), a system memory 1530, including one or both of a random access memory 1532 (RAM) and a read-only memory 1534 (ROM), and a system bus 1510 that couples the system memory 1530 to the CPU 1502.
  • DVR 1182 may further include a mass storage device 1520 for storing an operating system 1522, software, data, and various program modules, as described herein.
  • the mass storage device 1520 may be connected to the CPU 1502 through a mass storage controller (not illustrated) connected to the bus 1510.
  • the mass storage device 1520 and its associated computer-readable media can provide nonvolatile storage for DVR 1182.
  • computer-readable media can be any available computer storage media that can be accessed by DVR 1182.
  • computer-readable media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the non-transitory storage of information such as computer-readable instructions, data structures, program modules or other data.
  • computer-readable media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (DVD), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by DVR 1182.
  • DVR 1182 may operate in a networked environment using logical connections to remote physical or virtual entities through a network such as the network 30 through a network interface unit 1504 connected to the bus 1510. It will be appreciated that the network interface unit 1504 may also be utilized to connect to other types of networks and remote computer systems, such as a cloud storage system 1135 and modeling system 35. Network interface unit 1504 may comprise a number of input/output ports including, but not limited to, both a coaxial high-frequency input and an Ethernet or HDMI input, and either an HDMI, SCART, or component VGA output to a television/display device 80, as illustrated in Figure 38C.
  • the coaxial high-frequency input may function as an input for sources of content in accordance with a packet protocol/standard, such as the Data Over Cable Service Interface Specification (DOCSIS) cable modem standard, utilized by a cable television and/or a satellite television content provider.
  • Another standard by which content sources may provide streamed content objects to viewer systems 32 is the ClearQAM or QAM (quadrature amplitude modulation) format, by which digital cable channels are encoded and transmitted by cable television providers.
  • the network interface of DVR 1182 may be provided with a USB port for interfacing with other devices including modems, other computers, or other network interoperable components including CI card readers.
  • DVR 1182 may be provided with a wireless transceiver for interfacing with other wireless devices according to one of a plurality of standard wireless protocols, including Wi-Fi, for either uploading or downloading of content over a network.
  • an internet upload connection may be the same as or different from the download connection, e.g. an Ethernet or glass-fiber-based connection.
  • DVR 1182 is coupled to a media player 1506, such as a DVD and Blu-ray playback and recording apparatus.
  • DVR 1182 may have built-in Common Interface (CI) functionality or may be connectable to network interface unit 1504 via a USB port or other peripheral interface such as a PCMCIA interface.
  • the various interfaces of network interface unit 1504 may be designed for communicating with a remote docking station for any of an iPhone, iPad, Personal Computer or Android smartphone, or other similar devices, which enables streamed transmission of content and command instructions therefrom to DVR 1182.
  • DVR 1182 may also include an input/output controller for receiving and processing input from a number of other devices, including remote 1188 and, possibly, any of a keyboard, mouse, game controller or device.
  • the network interface unit 1504 of DVR 1182 may further comprise a wireless remote 1188 and interface for communicating therewith, similar to the other remotes described herein which may be implemented with any type of technology, including, infrared, radiofrequency, or wired analog or digital signals, etc.
  • dedicated color-coded controls on DVR remote 1188, which may be similar in construction and function to other remotes described herein, may enable dedicated functions as described elsewhere herein.
  • an input/output controller 1115 may provide output to a video display 80 through a standard display connection, such as any of HDMI, VGA, S-video, YPbPr, component, SCART, EuroSCART, Euroconnector, EuroAV, or EIA Multiport, etc. Further, input/output controller 1115 may be connected to a printer or other type of peripheral device. DVR 1182 may further comprise an additional processor unit 1525, such as a SmartTV chip, connected to the bus 1510, which may be utilized for decoding encoded content objects received from a content source.
  • a number of program modules and data files may be stored in the mass storage device 1520 and RAM 1532 of DVR 1182, including an operating system 1522 suitable for controlling the operation of DVR 1182 in a network computing environment.
  • the mass storage device 1520, ROM 1534, and RAM 1532 may also store one or more program modules for execution by the CPU 1502.
  • the mass storage device 1520, the ROM 1534, and the RAM 1532 may store software instructions that, when loaded into the CPU 1502 and executed, transform a general-purpose computing system into a special-purpose computing system customized to facilitate all, or part of, the techniques disclosed herein.
  • DVR 1182 comprises a client application process 1186 for interfacing with content provider sources 36 or 34 and cloud storage system 1135.
  • application 1186 comprises an interface process 1102 and upload/download streaming process 1104 and optional electronic program guide process 1106.
  • Interface process 1102 enables viewer system 32 to interact with sources 36 or 34 and cloud storage system 1135 in a manner similar to that described herein while process 1104 interacts with process 1102 and content sources 36 or 34, and, where applicable, a scheduling application or electronic program guide function 1106 associated with content source 36 in a manner described herein.
  • the CPU 1502 may be constructed from any number of transistors or other circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 1502 may operate as a state machine or finite-state machine. Such a machine may be transformed to a second machine, or specific machine, by loading executable instructions contained within the program modules. These computer-executable instructions may transform the CPU 1502 by specifying how the CPU 1502 transitions between states, thereby transforming the transistors or other circuit elements constituting the CPU 1502 from a first machine to a second machine, wherein the second machine may be specifically configured to manage the generation of indices.
  • the states of either machine may also be transformed by receiving input from one or more user input devices associated with the input/output controller, the network interface unit 1504, other peripherals, other interfaces, or one or more users or other actors.
  • Either machine may also transform states, or various physical characteristics of various output devices such as printers, speakers, video displays, or otherwise.
  • Encoding of executable computer program code modules may also transform the physical structure of the storage media. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media, whether the storage media are characterized as primary or secondary storage, and the like.
  • the program modules may transform the physical state of the system memory 1530 when the software is encoded therein.
  • the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the system memory 1530.
  • the storage media may be implemented using magnetic or optical technology.
  • the program modules may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations may also include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. It should be appreciated that various other transformations of physical media are possible without departing from the scope and spirit of the present description.
  • Figure 39 illustrates a plurality of viewer systems 32a-n operably coupled to both a content source 36 and a cloud storage system 1135.
  • Viewer systems 32a-n may be implemented as described previously herein with reference to Figures 38A-C, including the addition of DVR 1182 and remote 1188.
  • Content source 36 may be implemented as previously described herein with reference to source 60 of Figure 9A, which contains indexed content material, or any of content providers 34 or 37 of Figure 7, or may comprise any of a cable TV service provider through a cable packet network, a satellite TV service provider through a satellite network, live broadcast over the internet (internet TV), internet protocol-based TV subscriptions, such as those available from Verizon, and Free To Air TV.
  • Public network 30 may have a network topology as described elsewhere herein or any other configuration which interoperatively couples the disclosed network components.
  • the content storage configuration for either the original content source or the cloud storage server may be centralized, distributed, or continuously migrating in a peer-to-peer fashion, such that content storage is achieved at any single instant.
  • the content is captured at a viewer system, either unencrypted or post decryption, and provided to the cloud storage device in an unencrypted format.
  • the content is provided to the cloud storage server in an encrypted format, with or without decryption key data which may be stored separately from the encrypted content.
  • the algorithm for uploading and downloading of content data packets at the cloud storage server may utilize temporal or sequential identifiers associated with the content.
  • the end-user/viewer buys a DVR 1182, which has preinstalled thereon client application 1186.
  • the end-user registers DVR 1182 with a server 1180 associated with the cloud storage system 1135.
  • Such registration process may be performed online with a browser application executing in any of DVR 1182 or any of devices 84, 86, 87 or 80, by accessing the server 1180.
  • Such registration process may include uploading of any number of user identification indicia including, but not limited to, name, serial number of the DVR 1182, billing address, payment information, identification of current content subscription sources, network access protocol identifiers and addresses, as applicable, initial recording settings and preferences for the DVR 1182, identifiers of content to be recorded, acknowledgment of the license terms and conditions for the collaborative upload service and/or the client software on the DVR 1182.
  • server 1180 creates a record or profile associated with the subscriber/DVR 1182. The user can also at any time change the current recording instructions through the website interface provided by server 1180, as described in the registration procedure.
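  • The following Python dataclass is an illustrative sketch of the subscriber/DVR profile record that server 1180 might create during registration; the field list mirrors the indicia enumerated above, but the concrete names and types are assumptions.

```python
# Illustrative sketch of the subscriber/DVR profile record the registration process
# might create on server 1180; names and types are assumptions, not the patent's schema.

from dataclasses import dataclass, field

@dataclass
class DVRRegistration:
    name: str
    dvr_serial_number: str
    billing_address: str
    payment_token: str
    subscription_sources: list = field(default_factory=list)   # current content subscriptions
    network_identifiers: dict = field(default_factory=dict)    # access protocol ids/addresses
    recording_settings: dict = field(default_factory=dict)     # initial preferences, series to record
    license_accepted: bool = False                              # collaborative-upload terms acknowledged

# usage
profile = DVRRegistration("A. Viewer", "SN-1182-0001", "1 Example St.", "tok_123",
                          subscription_sources=["cable-36"], license_accepted=True)
print(profile.dvr_serial_number, profile.license_accepted)
```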
  • the client application 1186 on the DVR 1182 receives the recording settings from the server 1180 over a network connection, e.g. a Wi-Fi or Ethernet Internet connection.
  • the user can at any time change the recording instructions using a remote control 1188 associated with the DVR 1182 by, for example, pushing the red button on the remote control, when viewing an episode of the series, to stop recording that series or by pushing the blue button on his remote control, while viewing an episode, to start recording that series.
  • the client application 1186 receives the commands from the remote 1188 and transmits the new recording instruction to the server 1180, where content is stored in association with cloud storage system 1135.
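  • As a hedged sketch only, the snippet below shows how client application 1186 might translate the color-coded remote buttons described above into recording instructions transmitted to server 1180; the button-to-action mapping follows the example in the text, while the transport callback is an assumption.

```python
# Sketch of how client application 1186 might translate remote 1188 color-button
# presses into recording instructions sent to server 1180. Transport details are assumed.

RECORDING_ACTIONS = {
    "blue": "start-recording-series",   # blue button pressed while viewing an episode
    "red": "stop-recording-series",     # red button pressed while viewing an episode
}

def on_remote_button(button: str, current_series: str, send_to_server):
    """Map a color button to a recording instruction for the currently viewed series."""
    action = RECORDING_ACTIONS.get(button)
    if action is None:
        return None                      # other buttons handled elsewhere
    instruction = {"series": current_series, "action": action}
    send_to_server(instruction)          # server 1180 updates the registered DVR's profile
    return instruction

# usage
sent = []
on_remote_button("blue", "Example Series", sent.append)
print(sent)   # [{'series': 'Example Series', 'action': 'start-recording-series'}]
```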
  • DVR 1182 interacts with content source 36 and cloud storage system 1135, via process 1102 and 1104, in the following manner.
  • the viewer requests, through process 1104, one or more content objects representing programs which are currently accessible for streaming from the content source 36 to viewer system 32.
  • the determination of such accessibility will typically be defined by the viewer's subscription agreement with the content source provider and the availability of specific content as set forth by the content provider.
  • Optional electronic programming guide process 1106 may assist the viewer in selection of available content.
  • process 1104 transmits the metadata identifying such content to cloud storage server 1180, which stores the information with the profile of the registered DVR 1182.
  • server 1180 establishes a network connection with process 1102 and waits for process 1104 to initiate download streaming of the content from content source 36 to one or more buffer memories within DVR 1182, along with selected metadata associated with content, including data identifying the content, and one or more temporal or sequential identifiers or markers identifying the specific portion of the content contained within the buffer, as illustrated by arrow A of Figure 39.
  • content can be uploaded directly from a DVR device 1182 to cloud storage system 1135 without the need for a substantial buffering of the streamed data representing the content object.
  • Process 1104 transmits to process 1102 one or more packets of data along with the information identifying the content, or, alternatively, provides the addresses in memory where such information is stored locally within DVR 1182 and accessible by both processes.
  • Process 1102 appends a data structure 1120 to this information and transmits or streams such information to cloud storage system 1135, as illustrated by arrow B of Figure 39.
  • the data structure 1120 utilized by DVR 1182 may be similar to data structure 120A described with reference to Figure 15 and may comprise data identifying the content object and/or a portion thereof, temporal or sequential identifiers associated with the content object, and authorization indicia identifying a viewer process and DVR 1182.
  • data structure may further optionally comprise data identifying a viewer process and data identifying an encryption key for decrypting the content object.
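  • The following dataclass is an illustrative rendering of the fields attributed above to data structure 1120 (content identity, temporal/sequential identifier, authorization indicia, and the optional viewer-process and key fields); the names and types are assumptions.

```python
# Illustrative Python rendering of the fields the text attributes to data structure 1120.
# Names and types are assumptions for the sketch, not the patent's wire format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DataStructure1120:
    content_id: str                           # data identifying the content object or a portion thereof
    sequence_index: int                       # temporal/sequential identifier of this portion
    authorization_indicia: str                # identifies the viewer process and DVR 1182
    viewer_process_id: Optional[str] = None   # optional viewer-process identifier
    decryption_key_id: Optional[str] = None   # optional key reference for encrypted content

# usage
fragment = DataStructure1120("ep-101", 3, "auth:dvr-1182")
print(fragment)
```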
  • process 1102 may query server 1180 of cloud storage system 1135 to determine if a complete copy of the requested content was successfully received and stored therein. If server 1180 determines that a specific segment of the content object is missing, server 1180 will modify the metadata in the data structure associated with the content object, e.g. by setting a flag variable, as being incomplete.
  • processes 1104 and 1102 are repeated, as needed, and as authorized by server 1180, while display device 80 is operably connected to content source 36, for all content which the viewer process has requested.
  • the functionality performed by process 1104 typically occurs without any video or audio content being provided to the actual display 80. In this manner, such process may be conducted while the viewer is not utilizing the system, e.g. during system "down time" or while the viewer is watching other content, transparently and without the viewer being aware.
  • cloud storage system 1135 comprises a server 1180 and accompanying database(s) 1112A-C and network streaming interface 1185.
  • the data contained within the data structure 1120 received from process 1102 of the viewer system 32 is utilized by server 1180 to store a complete copy of the content object for retention within one of databases 1112A-C.
  • process 1185 within server 1180 utilizes the temporal or sequential identifiers or markers associated with the content and arranges the received portion of the content according to its relationship to other portions previously received. In this manner, a complete copy of the content object (program) is assembled from any of viewer systems 32a-n and retained by system 1135 for later viewing upon request by the viewer who is authorized to view such content.
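  • The sketch below illustrates, under assumed data shapes, how process 1185 might order uploaded fragments by their sequential identifiers and decide whether a complete copy of the content object has been assembled in databases 1112A-C; it is not the patent's actual algorithm.

```python
# Assumed sketch of server-side assembly: order fragments by sequential identifier and
# report whether the copy is complete (used e.g. to set or clear an 'incomplete' flag).

def assemble(fragments, expected_count):
    """fragments: list of (sequence_index, bytes), possibly from different viewer systems."""
    ordered = sorted(fragments, key=lambda f: f[0])
    received_indices = {index for index, _ in ordered}
    missing = [i for i in range(expected_count) if i not in received_indices]
    payload = b"".join(data for _, data in ordered)
    complete = not missing
    return payload, complete, missing

# usage: fragment 1 is missing, so the copy would be flagged incomplete
payload, complete, missing = assemble([(0, b"aa"), (2, b"cc")], expected_count=3)
print(complete, missing)   # False [1]
```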
  • server 1180 determines if the identified content object is stored in databases 1112A-C. If so, the streaming interface 1185 will verify that the requesting viewer is authorized to view such content, and, upon confirmation thereof, begins streaming the content to the requesting system 32, as illustrated by arrow C in Figure 39.
  • the algorithmic process of a request from a viewing system to cloud storage system 1135 for viewing content object(s) is similar to that illustrated in Figure 13D.
  • Server 1180 maintains within databases 1112A-C records for each viewer system 32a-n indicating which content objects within databases 1112A-C the viewer is authorized to download, such records being continually updated via processes 1102 and 1104 of each of the viewer systems 32a-n.
  • each of the viewer systems 32a-n is authorized to view specific content, which it has uploaded in accordance with its respective license, at will, upon request, and at a time which is not the same as the time frame in which the content provider, such as a cable service, makes such content available.
  • server 1180 may first check to see if the copy uploaded by the requesting viewer system is complete, such as by checking the value of a flag variable, parity value, hash value or other data integrity mechanism, and, if the content object copy is not complete, then utilize any of the other copies uploaded by viewer systems 32a-n stored in databases 1112A-C for downloading to the requesting viewer system.
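  • A minimal sketch of the fallback just described: prefer the requesting viewer's own uploaded copy, but if its integrity check fails, serve another complete copy uploaded by a different viewer system. The hash-based check is one possible mechanism among the flag, parity, and hash options mentioned above.

```python
# Sketch only: choose which uploaded copy to serve, preferring the requester's own copy
# and falling back to any complete copy from another viewer system if it fails the check.

import hashlib

def pick_copy(copies, requesting_viewer, expected_sha256):
    """copies: mapping viewer_id -> bytes of that viewer's uploaded copy."""
    def is_complete(blob):
        return hashlib.sha256(blob).hexdigest() == expected_sha256
    own = copies.get(requesting_viewer)
    if own is not None and is_complete(own):
        return requesting_viewer                   # serve the viewer's own copy
    for viewer_id, blob in copies.items():         # otherwise fall back to any complete copy
        if is_complete(blob):
            return viewer_id
    return None                                    # no complete copy available yet

# usage
good = b"full program"
digest = hashlib.sha256(good).hexdigest()
print(pick_copy({"32a": b"trunc", "32b": good}, "32a", digest))   # -> "32b"
```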
  • the server 1180 periodically schedules collaborative recording, as necessary. Depending on the centrally needed data redundancy and downstream performance, as well as the locally available upstream capacity, the periodic schedule is automatically calculated and the upstream effort is distributed collaboratively among the various DVR devices associated with viewer systems 32a-n participating in the collaborative upstream effort. To effect such collaboration, each DVR 1182 receives its particular plan, according to which it will upload a part or the total of the content which the viewer/user/owner of that DVR has instructed it to record. While this content is being uploaded, the server 1180 will also check the validity of the registered subscriptions through the availability of the particular content fragments contributed to the collaborative upload effort.
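  • The following sketch illustrates one possible way to compute such a collaborative upload plan, distributing segments across participating DVRs roughly in proportion to each DVR's available upstream capacity; the proportional rule and data shapes are assumptions for illustration, not the patent's scheduling method.

```python
# Hedged sketch: distribute the segments of a content object across participating DVRs
# roughly in proportion to each DVR's relative upstream capacity.

def build_upload_plan(segment_count, upstream_capacity):
    """upstream_capacity: dvr_id -> relative capacity (e.g. Mbps). Returns dvr_id -> segment list."""
    total = sum(upstream_capacity.values())
    plan = {dvr: [] for dvr in upstream_capacity}
    dvrs = sorted(upstream_capacity, key=upstream_capacity.get, reverse=True)
    # assign each segment to the DVR that is least loaded relative to its capacity share
    for segment in range(segment_count):
        target = min(dvrs, key=lambda d: len(plan[d]) / (upstream_capacity[d] / total))
        plan[target].append(segment)
    return plan

# usage: DVR 'a' has twice the upstream capacity of 'b', so it gets roughly twice the segments
print(build_upload_plan(6, {"a": 10.0, "b": 5.0}))
```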
  • If one particular DVR 1182 is not capable of uploading the entire content the user requested to record, such DVR, upon issuing a viewing request, is allowed to subsequently download from cloud storage system 1135 a collaborative copy, if the user has registered a valid subscription or the upload validity check has otherwise reasonably confirmed the validity of the subscription.
  • If the client DVR is instructed by its user to downstream, for time-shifting purposes, the recorded collaborative copy, the client DVR will downstream it from the cloud storage system 1135, once the server 1180 authenticates that the license to that particular content is valid.
  • the authorization indicia utilized by cloud storage system 1135 may be similar to authorization indicia 126A, described herein, and may take any number of different forms including one or more binary values arranged in a mask, special codes, keys, hash values, etc.
  • authorization indicia may be generated by the content source 36 or may be derived from the streamed content by process 1102.
  • decryption keys or codes may be similarly provided to cloud storage system 1135 by process 1102 as part of the authorization indicia.
  • The DVR device software, in conjunction with the cloud storage system, verifies that the viewer has authority to upload/record content objects.
  • the viewer/user may download and view a content object, e.g. a program series, in a time shifted manner, if licensed and actively uploaded from the viewer's DVR device, or if licensable.
  • the user only views his own copies made by the ccDVR system under his personal instruction.
  • the viewer can only skip unwanted commercials if they are replaced with other, possibly but not necessarily personalized, commercials originating from the same broadcaster whose commercials were skipped.
  • the collaborative subscription service may also broker other services to the viewer, such as: internet subscriptions, TV broadcasting subscriptions, VOD services, other over the top TV subscriptions, storage and streaming capacity in the cloud, physical goods such as books, wine, food, legal services, etc., typically through third-party service providers.
  • the ccDVR system comprises a plurality of cloud systems and DVR devices 1182a-n, likely distributed in geographically disparate locations, but interconnected over a wide area network, such as the Internet, and owned or leased by valid subscribers of content.
  • the ccDVR subscription user agreement authorizes the server 1180 and any DVR 1182 within the collaborative cloud system to record content from a content source on a subscriber's behalf, as part of the collaborative upload effort.
  • Although the collaborative upload system has been described, in the illustrative embodiment, with content objects which may be in the nature of streamed video or other data, such example should not be limiting.
  • the various data transmission protocols between the system components, as described herein, are not limited to a particular protocol or transmission direction, with each component capable of being either the transmitter or receiver in a push or pull manner, respectively, as would be understood by those reasonably skilled in the arts.
  • the content objects uploaded or downloaded by the DVR or other devices disclosed herein may include textual, graphic, photographic, audio, haptic or other data types, in streamed or other formats, regardless of data format or protocol, content objects containing such data types being equally applicable to the system described herein.
  • a system and technique for presenting multiple, simultaneous content object data streams on a user interface is provided in a manner that facilitates surfing by the viewer in multiple dimensions.
  • a primary content stream representing the currently selected content object within a dimension of a viewer channel, is presented in a substantial portion of the right brain user interface display area while a plurality of secondary content object data streams, representing selectable content objects to which the viewer may navigate, are presented in smaller sized or thumbnail format in the balance of the display area of user interface.
  • the multiple secondary content streams presented on the user interface each represent selectable content objects having a queued relationship to the currently selected primary content object data stream.
  • Such a queued relationship may exist between and among different content object streams in the same dimension of a viewer channel, between separately selectable portions of a single content object stream or program, or between different content objects in a dimension of a viewer channel, e.g. chronologically arranged episodes of the same program.
  • Figure 12A illustrates conceptually a multidimensional channel 90, which facilitates multidimensional surfing of content along desire and time vectors 92 and 94, respectively, using traditional cursor navigation controls.
  • Figure 12B illustrates conceptually the implementation of channel 90 associated with a specific subject/viewer within database 48.
  • Channel 90 may comprise a plurality of channels 90A-C, stored in database 48 of modeling system 35.
  • The manner in which navigation controls may be utilized to perform multidimensional surfing and viewing of content object streams displayed on viewer system 32 within a particular viewer channel 90 is described with reference to Figures 16-22. Referring to Figure 16, database 48 of modeling system 35 interacts with content database 47 or other content sources 34, 36 to ensure that a data stream representing the content object(s) within viewer channel 90 is buffered in memory associated with viewer system 32 for rendering and display on display 80.
  • Viewer interface system 32 comprises the right brain user interface display 80, used predominantly for viewing of video content and an accompanying remote control 88.
  • display 80 may be implemented with a "connected TV" or other devices that connect the TV to the networks 30 or 31 such as a connected Blu-ray player or a connected game console, e.g. a device capable of connecting directly to the Internet, e.g. network 30, as well as a cable packet network or satellite network, e.g. network 31.
  • Figure 16 illustrates conceptually the relationship between the components of display 80 (in phantom), including User Interface (UI) display area 120, graphics engine 115, a primary stream buffer 116 and multiple secondary stream buffers 118a-n associated with the content objects comprising a viewer channel.
  • Graphics engine 115 is typically part of display 80 and controls the streaming, decryption, windowing, and rendering of multiple data streams based on the content data and command/formatting data contained within the data packets associated with each stream.
  • Buffers 116 and 118 may be implemented as segmented sections of local memory associated with graphics engine 115, or, alternatively, may be stored separately and remotely from display 80.
  • Display 80 and viewer system 32 are connected through the network 30, represented as a cloud in Figure 16, to modeling system 35 and the source of the content object data streams, typically any of database 34, 36, 37 or 47.
  • a multitasking/multithreaded operating system may be used in viewer system 32 to control the streaming, buffering and rendering of the content object data stream.
  • each stream may have associated therewith multiple threads of execution, including a thread for buffering and one or more threads for formatting and rendering the content object data on display area of display 80.
  • the primary content object stream has a buffer 116 associated therewith and one or more threads, labeled collectively as 117.
  • the plurality of secondary content object streams each have a respective buffer 118a-n associated therewith and respective sets of one or more threads, labeled collectively as 119a-n, as illustrated.
  • primary content object data stream 128 is continuously streamed from its original source via its respective buffer, while secondary content object data streams 121-126 may optionally loop through a portion of their respective content, typically the first several minutes or another amount stored in each of the respective buffers. In this manner, the presentation of visual information to the viewer on UI display area 120 is more informative, particularly regarding secondary content object data streams 121-126, while efficiently using processor resources within graphics engine 115 and network bandwidth into and out of viewer system 32.
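  • The buffering policy described above can be sketched as follows, with the primary stream continuously fed from its source while each secondary stream loops over an initial window of content; buffer sizes and the chunk-iterator interface are assumptions.

```python
# Minimal sketch of the buffering policy: the primary stream keeps pulling fresh chunks,
# while each secondary stream caches only an initial window and replays it in a loop.

from collections import deque
from itertools import cycle, islice

class PrimaryBuffer:
    def __init__(self, source, size=4):
        self.source = source                   # iterator of content chunks from the network
        self.buffer = deque(maxlen=size)
    def next_chunk(self):
        self.buffer.append(next(self.source))  # keep pulling fresh chunks from the source
        return self.buffer[-1]

class LoopingSecondaryBuffer:
    def __init__(self, source, window=3):
        self.window = list(islice(source, window))   # e.g. only the first few minutes
        self.loop = cycle(self.window)               # replay without further network traffic
    def next_chunk(self):
        return next(self.loop)

# usage
primary = PrimaryBuffer(iter(range(100)))
secondary = LoopingSecondaryBuffer(iter(range(100)))
print([primary.next_chunk() for _ in range(5)])     # [0, 1, 2, 3, 4]
print([secondary.next_chunk() for _ in range(5)])   # [0, 1, 2, 0, 1]
```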
  • Each content object having data streamed to display 80 has associated therewith a data structure 111, as illustrated in Figure 17, which comprises information relating to the viewable parameters of the content object, including, but not limited to formatting parameters, status, navigation options and proprietary rights data.
  • data structure 111 further comprises data fields indicating the license status of the object, whether free (prepaid), pay-per-view, or pay for limited use, elapsed viewing time, whether the content object was compiled by modeling system 35, the name of someone recommending the content object, an image of the person recommending the content object, and other data necessary for representation of the various graphical elements and indicia surrounding the rendering of the content object, as explained in more detail with reference to Figures 18-22.
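  • As an illustrative, assumption-laden sketch, data structure 111 might be rendered as a record such as the following; the exact field set and types are not specified by the text.

```python
# Illustrative dataclass mirroring fields the text attributes to data structure 111
# (license status, elapsed viewing time, recommender details, navigation options, etc.).

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataStructure111:
    content_id: str
    license_status: str = "free"                 # "free", "pay-per-view", or "pay-limited-use"
    elapsed_viewing_seconds: int = 0
    compiled_by_modeling_system: bool = True     # whether modeling system 35 compiled the object
    recommender_name: Optional[str] = None       # present for third-party recommendations
    recommender_image_url: Optional[str] = None
    navigation_options: list = field(default_factory=list)   # e.g. ["left", "right", "up", "down"]

# usage
print(DataStructure111("ep-101", license_status="pay-per-view", recommender_name="A. Friend"))
```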
  • the UI display area 120 of display 80 is illustrated.
  • Multiple content object data streams are capable of being simultaneously presented in UI display area 120.
  • the multiple secondary content object streams presented on the user interface each represent selectable content having a relationship to the currently selected primary content object stream.
  • the plurality of secondary content object data streams 121-126, and icon 127 representing the primary content object data stream, are arranged along the bottom of UI display area 120 and may be associated, for illustrative purposes, with the time or second dimension as described elsewhere herein.
  • icon 127 and the secondary content object data streams 121-126 may be arranged vertically along either the left or the right side of UI display area 120.
  • thumbnail frames representing the content object streams of a dimension may be arranged linearly along any portion of UI display area 120 including any of the left, right, top, and bottom sides of UI display area 120.
  • other arrangements of the thumbnail frames may be utilized within UI display area 120, for example circular or cluster arrangements of the thumbnail frames, to provide the viewer with navigable options representative of the dimensions available for surfing relative to the currently displayed primary content object data stream 128.
  • secondary content object data streams 121-126 may represent successively ordered content objects 131-136, respectively, relative to the primary content object stream 128, which represents the currently selected content object 138 in second dimension 94 in a viewer channel 90.
  • secondary content object streams 121-126 may represent successively ordered content objects representing viewer-selectable segments of the currently viewed content object in display area 120.
  • a primary content object stream representing a news program may have separately selectable secondary content object streams for program segments directed to weather, sports, business/finance, consumer reporting, etc.
  • a primary content object stream representing the sports section of a news program may have multiple separately selectable secondary content object streams representing different video clips of sports highlights within the sports segment.
  • a queued relationship may exist between and among different content object streams or between separately selectable portions of a single content object stream or program.
  • secondary content object data streams 121-126 may represent successively ordered content objects 131-136, respectively, relative to the primary content object stream 128, which represents the currently selected content object 138 in first dimension 92 in a viewer channel 90.
  • secondary content object streams 121-126 may represent successively ordered content objects representing viewer-selectable segments of the currently viewed content object in display area 120.
  • secondary content object data streams 121-126 are displayed on UI display area 120 for a predetermined period of time, e.g. between 2 and 20 seconds after the last navigation command, or for some other predetermined period of time, so as not to distract the viewer from the primary content object data stream 128. Pressing a navigation command button on the remote 88 will cause secondary content object data streams 121-126 to reappear, thereby providing the viewer with the necessary video cues to facilitate surfing among the various content objects within a dimension of a viewer channel.
  • each of the secondary content object data streams 121-126 either: a) moves gradually from its currently displayed window to an adjacent window; b) moves substantially instantaneously from its currently displayed window to an adjacent window; or c) the frames or thumbnail windows in which the secondary content object data streams 121-126 are currently displayed actually move across the screen 120; in each case either to the right or to the left depending on the nature of the navigation command selected by the viewer, as illustrated conceptually by the bidirectional phantom arrow in Figure 20 over secondary content object data streams 121-123.
  • any of the supplemental graphic indicia associated with the content objects, such as sidebars, navigation indicators, or icons, will similarly scroll along with the content object with which they are associated.
  • information relevant to identification of the currently viewed primary content object stream may be displayed on-screen, either temporarily or persistently, within UI display area 120, such information including, but not limited to, any of program name, type, date of original airing, current date and time, on-air status, current viewing start time, estimated viewing end time (based on current time), duration/elapsed viewing time, and recommendation posting time and name of third-party recommender or recommendation source if other than system 35 (in the case of content recommended from a third party through a social media channel, such as Facebook, etc.).
  • such information is indicated by the box 113 within display area 120.
  • Such information is typically stored within data structure 111 and may be displayed upon selection of the content object for viewing as the primary content object data stream 128 or upon selection of an appropriate command button on the remote control 88 of viewer system 32.
  • information may be presented in various colors, fonts, formats and with a level of opacity as determined by the system designer so as not to interfere with the viewer's enjoyment of the presented video data stream.
  • the information designated by box 113 may be presented not on display 80, but on any of displays 84, 86, or 87 of viewer system 32, so as to avoid textual data on the right brain interface.
  • a subset of the information typically stored within data structure 111 associated with each of secondary content object streams 121-126 may be displayed within their respective frame or thumbnail windows, such information comprising any of the information described above as displayable in box 113 and in a format similar to that described above.
  • UI display area 120 including the icon 127 representing primary content object stream 128 and the secondary content object streams 121-123.
  • viewer system 32 in conjunction with the graphics engine 115, utilizes various other graphic indicia associated with each content object data stream to provide further useful information to the viewer during his viewing/surfing experience in a manner that remains essentially true to the right brain experience, i.e. with a minimum of textual information.
  • Icon 127 represents the primary content object stream 128 and its conceptual position within the viewer channel relative to the secondary content object data streams.
  • icon 127 may represent both the primary content object stream 128 and each of the secondary content object streams 121-126 displayed on user interface 120 when the source of both the primary and secondary content objects is the same; for example, when all content objects are from the same broadcast or network source, icon 127 may represent the logo of such source, or, alternatively, when all content objects are from system 35, icon 127 may comprise an icon or other graphic element associated with system 35.
  • the positions of secondary content object streams 121-123 within UI display area 120 relative to icon 127 conceptually indicate the position of secondary content objects along a dimension of the viewer channel relative to the currently selected primary content object stream 128, and provide the viewer with a point of reference from which to navigate in the current dimension of the viewer channel or to different dimensions using the navigation controls of the remote 88, as described previously. For example, pressing the left navigation button on remote 88, e.g. " ⁇ ", will cause the primary content object stream 128 to change to the secondary content object data stream 123 to the left of icon 127. The former primary content object stream will then assume the position of secondary content object stream 124 and the other secondary content object streams will be reordered accordingly within the appropriate dimension of the viewer channel.
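  • The queue behaviour described above may be sketched as follows: pressing the left navigation button promotes the secondary stream adjacent to icon 127 to the primary position, and the former primary re-enters the queue to its right; the class structure is an assumption for illustration.

```python
# Assumed sketch of the navigation queue: the left-adjacent secondary stream becomes the
# primary stream, and the former primary re-enters the right-hand side of the queue.

class ViewerDimension:
    def __init__(self, left_queue, primary, right_queue):
        self.left = list(left_queue)      # secondary streams to the left of icon 127
        self.primary = primary            # currently selected content object stream
        self.right = list(right_queue)    # secondary streams to the right of icon 127

    def navigate_left(self):
        if not self.left:
            return self.primary
        promoted = self.left.pop()                # e.g. stream 123, adjacent to icon 127
        self.right.insert(0, self.primary)        # former primary assumes position 124
        self.primary = promoted
        return self.primary

# usage
dim = ViewerDimension(["121", "122", "123"], "128", ["124", "125", "126"])
dim.navigate_left()
print(dim.primary, dim.left, dim.right)   # 123 ['121', '122'] ['128', '124', '125', '126']
```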
  • double-clicking one of the navigation command buttons of remote 88 may be utilized to navigate either a chronological ordering of content objects or a vertical fear/desire dimension.
  • upon clicking the left navigation button on remote 88, e.g. " ⁇ ", the primary content object stream 128 will change to secondary content object data stream 123.
  • upon double-clicking the left navigation button on remote 88, e.g. " ⁇ ", the surfing paradigm or dimension will change so that the new set of primary and secondary content object data streams represent episodes of the same program, including previously aired episodes of the program currently being viewed as the primary content object data stream 128, as well as, if available, any as-yet un-aired episodes, which may be available on a pay-per-view basis, as represented by streams 124-126.
  • the use of double-clicking of the directional navigation control is not limited to a particular dimension, e.g. either time or association, but may be utilized to access content objects within any nested dimension associated with a current primary content object stream.
  • Any dimension of a channel may have multiple dimensions which may be successively accessed in a recursive manner.
  • icon 127 may be utilized to indicate to the viewer the status of the primary content object stream.
  • any of the color, shape, transparency, size, or other visual aspects of icon 127 may be associated with a specific parameter of the primary and secondary content object stream and may be manipulated by color, animation or in another manner, to indicate a change in the parameter value.
  • icon 127 may have a first shape or color for content objects recommended by system 35 and a second shape or color for content objects recommended by a third party or from a source other than system 35.
  • the icon or other graphic element may be used to indicate that the use or license status of the primary content object is about to change; for example, viewing more than a threshold percentage of the primary content object may automatically cause the status of a content object representing a recorded broadcast program to change from "unviewed" to "viewed" or may automatically cause the purchase of content objects offered on a single or limited view basis.
  • the icon or other graphic element may begin to blink, pulse, modulate between colors, or change in any of shape, size, color or opacity, or may be associated with a sound or audio wave file, or any combination thereof, to indicate that a threshold condition is about to be met.
  • the visual characteristics associated with secondary content object streams 121-126 may be utilized to indicate to the viewer various parameters of the secondary content object streams. For example, any of the color, shape, transparency, size, or other visual aspects of any frame or border surrounding the actual display area in which the secondary content object data stream is rendered may be associated with a specific parameter of the secondary content object stream and may be manipulated by color, shape, animation or in another manner, to indicate a change in the parameter value.
  • a colored sidebar 129 associated with each of the selectable secondary content object streams indicates the license status of the content, e.g. blue for free, red for pay per view, etc.
  • each of the thumbnail frames representing selectable secondary content contains graphic indicia 139 indicating the navigational options to other queued content within a viewer channel, e.g. "v", " ⁇ ", “>” characters or symbols arranged around the thumbnail frame, as illustrated in Figure 20.
  • the " ⁇ " symbol 139a above stream 121 or 123 indicates that the viewer, once having navigated to stream 121 or 123 for viewing as the primary content stream 128, may navigate from the currently viewed primary content stream to another content object in the first dimension (e.g. association), while the "v" symbol 139c below streams 121 or 123 indicates that the viewer may navigate to another content object in the first dimension but in an opposite direction.
  • the " ⁇ " symbol 139b to the left of stream 121 indicates that the viewer, once having navigated to stream 121 for viewing as the primary content stream 128, may navigate to another content object in the second dimension (e.g. time), while a ">" symbol 139d (not shown in Figure 20) to the right of stream 126 indicates that the viewer may navigate from the currently viewed primary content stream to another content object in the second dimension, but in an opposite direction.
  • navigational directions and commands may be used to select free content versus paid content.
  • In a vertical navigation dimension, if the viewer pushes the down arrow navigation control on remote control 88, the viewer will be presented with free content. Conversely, if the viewer pushes the up arrow navigation control, the viewer will be presented with pay (pay-per-view) content.
  • In a horizontal navigation dimension, if the viewer pushes the left arrow navigation control on remote control 88, the viewer will be offered free content of a previously broadcast program. Conversely, if the viewer pushes the right arrow navigation control, the viewer will be presented with pay (pay-per-view) content, e.g. content that has not yet been broadcast and which is viewable only for a fee.
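  • The direction-to-license mapping of the two preceding paragraphs can be summarized in a small table-driven sketch; the mapping itself follows the text, while the surrounding function is an assumption.

```python
# Compact sketch of the mapping between navigation direction and the class of content offered.

NAVIGATION_LICENSE_MAP = {
    "down": "free",                          # vertical dimension: down arrow -> free content
    "up": "pay-per-view",                    # vertical dimension: up arrow -> pay content
    "left": "free-previously-broadcast",     # horizontal: left arrow -> free, already broadcast
    "right": "pay-not-yet-broadcast",        # horizontal: right arrow -> pay, not yet broadcast
}

def content_class_for(direction: str) -> str:
    return NAVIGATION_LICENSE_MAP.get(direction, "unknown")

# usage
print(content_class_for("down"), content_class_for("right"))
```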
  • navigation commands used to surf through time, desirability/fear and other dimensions may originate from display remotes having accelerometers for detecting horizontal, vertical and other gesture patterns for use as navigation and selection commands on the right brain interface and/or left brain interface, as well as from a traditional remote control 88 with a standard up, down, right, left, and enter button command set.
  • a translation program similar to redirection application 85 is utilized to translate the outputs from a controller having either an accelerometer or gyroscope into commands which may be utilized by modeling system 35 and viewer system 32.
  • a channel may be associated with system 35 for instructional materials which the viewer may access regarding various functions and procedures associated with the system.
  • channel 0 is the instructional channel for system 35.
  • the primary viewing stream will switch to one or more specific content objects associated with channel 0 and their instructional content for use of the system.
  • such instructional content objects may be associated with another specific channel designator or icon for display on screen 128.
  • both primary and secondary content objects may be recommended from third parties or sources other than modeling system 35.
  • the presentation format for such recommended content objects is illustrated in Figure 21, where UI display area 120 presents a primary content object data stream 128 and multiple secondary content object data streams 121-126 of Internet content from YouTube or other Internet sources, each having been recommended by a source other than modeling system 35.
  • the manner in which the viewer may navigate between and among the primary and secondary content object data streams 121-126 and 128 is similar to that previously described herein, using navigation controls of remote 88 or other navigation input device.
  • the viewer in addition to navigating between and among the primary and secondary content object data streams, the viewer may navigate in a separate dimension among recommendation sources which may be either individuals, e.g., friends, family, etc., or specific sites on the Internet, e.g., YouTube, Facebook, etc.
  • a plurality of images 150, 152, and 154, representing the recommendation sources are arranged on one UI display area 120 in a manner which allows the viewer to navigate among the recommendation sources using navigation commands from remote control 88.
  • the currently displayed set of primary and secondary content object data streams 121 - 126 and 128 may be associated with a recommender having an associated image 152.
  • Images 150, 152 and 154 may have frames or borders which provide additional information to the viewer, similar to that previously described with content object data streams 121-126; for example, the border around the image of the currently selected recommendation source may have a different shape, color and animation than that around the other images.
  • the loop buffering of any secondary content object data streams may likewise be implemented with content from such recommendation sources, as described previously.
  • Although the system described herein is intended to be utilized to display content compiled by modeling system 35, the reader can appreciate and understand that any content object may be utilized as the initial point of the viewing experience, including commercially broadcast channels from cable providers or other sources, including one or more virtual channels as described herein, and, thereafter, using the system described herein, the user may navigate to content objects which are either compiled by modeling system 35 or recommended from sources outside modeling system 35.
  • Virtual channels 160-230 are illustrated conceptually relative to viewer systems 32a-b and a modeling system 35, as described herein, and other sources of content.
  • Virtual channels 160-230 enable content objects from sources considered to have possible left brain content to be implemented in a right brain user interface in accordance with the objectives of the disclosure.
  • Virtual channels 160-230 may be logically arranged similar to channels 90A-C of Figure 12B and may contain content objects from a single source or multiple sources as described in greater detail with reference to Figures 24-31.
  • a first type of virtual channel, Recommendation Channel 160 allows the posting of recommendations of friends and/or family or other individuals from other sources such as TWITTER, FACEBOOK, PICASA, VIMEO, groups within FACEBOOK, LINKEDIN, or any other website or networking mechanisms 162a-n to modeling system 35 for display via viewer system 32.
  • One or more recommendation channels may be associated with a particular viewer profile.
  • a single Recommendation Channel 160a may be defined by the user for posting all recommendations of friends/groups independent of the source, or multiple recommendation channels may exist and may be defined per source, per group of sources, per friend, or per group of friends, illustrated in phantom as recommendation channels 160b-n.
  • Such a recommendation channel, comprising content object recommendations from friends and/or family, colleagues, etc., may be arranged in a queued manner and displayed with viewer system 32 as illustrated in and previously described with reference to Figures 21 and 23. Specifically, the viewer may navigate Recommendation Channel 160 in a separate dimension among recommendation sources which may be either individuals, e.g., friends, family, etc., or specific sites on the Internet, e.g., YOUTUBE, TWITTER, FACEBOOK, PICASA, VIMEO, groups within FACEBOOK, LINKEDIN, etc.
  • a plurality of images 150, 152, and 154, representing the recommendation sources are arranged on one UI display area 120 in a manner which allows the viewer to navigate among the recommendation sources using navigation commands from remote control 88, in a manner as described herein.
  • recommendations may be forwarded to a viewer's Recommendation Channel 160 via a specific electronic mail address or other handle mechanism associated with the particular viewer system 32.
  • a Program Director Channel 170 enables explicit (left brain) control over the experience of the viewing session and active control of the content of that channel.
  • Management and set up of the Program Director Channel 170 may be performed on any of the left brain user interfaces 84, 86, or 87 of Figure 11A to enable selection of content objects, posting of that content object in channel, ranking of the content object in the channel, and upfront payment of content, e.g. pay per view, if applicable, prior to display on the right brain display 80, via modeling system 35 and viewer system 32.
  • Control commands and data from the left brain interface are provided to modeling system 35, which in turn generates the arrangement of content objects within the Program Director Channel 170 prior to its display on the right brain display 80.
  • sources of content objects for the Program Director Channel 170 may be content sources within the system, such as database 47 of Figure 8, or external sources 172a-n which may be selected content providers 34, 36 or 37 or sources 162a-n.
  • Program Director Channel 170, when used in conjunction with a Recommendation Channel 160 of another viewer, or a social media facility such as YOUTUBE, TWITTER, FACEBOOK, groups within FACEBOOK, LINKEDIN, etc., enables the viewer/director to act as program director in a broadcast-like manner, enabling recommendations of content from a viewer to groups of viewer/recipients using a content object recommendation via others' Recommendation Channels 160 or a social media facility such as Twitter.
  • Such functionality may be useful to a viewer/director who is an expert in a certain subject matter, enabling the viewer to compose and maintain a complete expert channel via system 35 and/or subscriptions to social media facilities, as applicable.
  • The Program Director Channel 170 may be useful for viewers who love film, viewers who want to plan a specific viewing session, and professionals who want to schedule a specific presentation sequence, such as a demo for a customer.
  • Viewers who also subscribe to third party content subscriptions such as Netflix or Lovefilm (UK) can have content from such sources integrated into the viewer's regular channel through the recommendation system 35 described herein according to the calculation of the fear and desire component of the content object for that particular viewer's profile, in a manner as previously described herein.
  • viewers can actively schedule content objects coming from sources such as Netflix or Lovefilm into a dedicated Program Director Channel 170 and determine the location in queue of each content object in that channel.
  • system 35 enables a viewer to take an "option" to view video on demand content objects by scheduling them to one of the virtual channels described herein, using either remote control 88 of the viewer system 32 or utilizing the Program Director Channel 170.
  • a content object recommended by system 35 or a content object actively retrieved from a remote source such as either Netflix or Lovefilm may be a movie which a viewer would like to see but for which he/she is either not in the current mood or does not have the time or money to commit to purchasing at that exact instance.
  • the viewer may create an option, typically in the form of a link which includes access data and the metadata describing the content object, including its price and viewing availability, into one of the channels described herein or into a separate virtual option channel similar in logical structure and function to Program Director Channel 170.
  • Such option will then show up in queued format within the channel in the same manner as other content objects and may be purchased at the time of viewing, in a manner similar to that described elsewhere herein.
  • the Third Party Channel 180 enables content that is sourced from the third party applications or data streams 182a-n to be available for display, via modeling system 35 and viewer system 32, in conjunction with the viewer's current channel.
  • an application related to a sporting event may provide or stream additional background information for a specific game, for example all goals scored by the player who scored a goal during a match that is viewed live through the display 120 of viewer system 32.
  • Such background information can be posted on a separate Third Party Channel 180 or integrated with the on-screen viewing of the current content object in box 113 of screen 120, as illustrated in any of Figures 18, 21 and 22.
  • Figure 26B illustrates conceptually an algorithmic process that enables content aggregation for the Third Party Channel 180.
  • the Library Channel 190 enables access to content objects which are privately owned in the viewer's library, such content objects being a collection of previously paid for materials which are therefore always permanently available for viewing.
  • the library comprising the viewer's privately owned content objects may be stored locally on the viewer system 32, as indicated by storage mechanism 193, which may be similar to database 47, or stored remotely over a network on a dedicated storage mechanism 194 or retained on any of content sources 192a-n.
  • the content objects within the viewer's private library may be recommended and arranged or queued within the Library Channel 190 by recommendation system 35 and distributed for viewing via viewer system 32 in a manner as previously described with regard to other content objects.
  • content objects within the Library Channel 190 may be stored in modified formats, e.g. for privacy and security reasons as well as for network accessibility reasons.
  • Library Channel 190 provides a "view" on all the content that is available in the viewer's library, arranged into one channel. Such content may be arranged according to dominant preferences or metadata (e.g. genre), ranked according to the viewer's mood, or sorted according to certain predefined or dynamically defined criteria.
  • the Library Channel 190 may be implemented with three modes of use: active, inactive, and exclusive (library only); a sketch of this mode selection is given after the three mode descriptions below.
  • in active mode, the library is used by the recommendation system disclosed herein as one of the content sources for creating content recommendations in a manner as previously described.
  • in inactive mode, the library is not used as one of the content sources for creating content recommendations.
  • in exclusive (library only) mode, only content from the viewer's private library, or from a private library to which the viewer has access, is used as a content source for creating content recommendations in a manner as previously described.
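  • By way of non-limiting illustration only, the following Python sketch shows how the three library modes might gate the content sources handed to the recommender; the enum and function names are hypothetical.

        from enum import Enum, auto
        from typing import List

        class LibraryMode(Enum):
            ACTIVE = auto()      # library is one of the recommendation sources
            INACTIVE = auto()    # library is excluded from the recommendation sources
            EXCLUSIVE = auto()   # "library only": recommend solely from the library

        def recommendation_sources(mode: LibraryMode,
                                   library: List[str],
                                   external_sources: List[str]) -> List[str]:
            """Return the content sources the recommendation system may draw from."""
            if mode is LibraryMode.EXCLUSIVE:
                return list(library)
            if mode is LibraryMode.ACTIVE:
                return list(external_sources) + list(library)
            return list(external_sources)    # INACTIVE

        # Usage
        print(recommendation_sources(LibraryMode.EXCLUSIVE,
                                     ["owned_movie_1"], ["netflix", "lovefilm"]))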
  • a fifth type of virtual channel, the Off-Line Channel 200 may be implemented not as a channel having a specific content source(s), similar to the other of the virtual channels 160-190 and 210-230 described herein, but as a mechanism for viewing content objects associated with another channel when not actively or operatively coupled to either a network or to recommendation system 35, such as when the viewer is on an extended plane flight, as is illustrated by the lack of connection between viewer system 32 and recommendation system 35 and content sources 202a-n in Figure 28A.
  • the content objects within a particular channel are stored locally on storage mechanism 203 of viewer system 32, all of which may be implemented within an apparatus such as a PDA, tablet computer or laptop, and are available for viewing therefrom.
  • the apparatus on which the viewer system 32 is implemented may serve as both the left brain interface and, typically sequentially, as the right brain interface for the viewer.
  • Figure 28B illustrates conceptually an algorithmic process that enables viewing of content off-line via Off-Line Channel 200.
  • the particular viewing habits of the viewer may be stored locally and loaded to recommendation system 35 in an asynchronous manner for updating of the viewer's profile and viewing history once the viewer is reconnected to the system.
  • the content objects within a particular viewer channel are limited to those items already queued within such particular channel or channels.
  • the ability to have content objects reordered within a viewer channel in synchronization with immediately preceding viewing habit events is also limited.
  • the format in which content objects are stored for off-line viewing may be modified for increased security to prevent unauthorized viewing, in comparison to other storage formats utilized for normal online viewing from a specific viewing device or platform.
  • browsing and rewinding/fast forwarding through the locally stored content objects while a viewer is off-line is allowed, but substantive viewing of a content object is allowed only once, unless such content object is part of the viewer's private library or the viewer is authorized to view a content object multiple times.
  • Content selection for off-line mode can be done in a number of ways, for example: A) the viewer selects from each channel the content he would like to view off-line using the left brain user interface; B) the content with the highest recommendation according to the viewer's preferences and mood is selected by the recommendation system 35; or C) the viewer manages his/her Program Director Channel and the content therein is selected for off-line mode. A sketch of these selection strategies is given below.
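  • By way of non-limiting illustration only, the following Python sketch captures the three selection strategies A), B) and C) above as a single function; the dictionary layout and the "score" field are hypothetical.

        from typing import Dict, List, Sequence

        def select_offline_content(strategy: str,
                                   channels: Dict[str, List[dict]],
                                   viewer_picks: Sequence[dict] = (),
                                   top_n: int = 5) -> List[dict]:
            """Pick the content objects to cache locally before going off-line."""
            if strategy == "A":
                # A) the viewer's explicit picks, made with the left brain interface
                return list(viewer_picks)
            if strategy == "B":
                # B) highest-ranked recommendations across all channels
                ranked = [obj for queue in channels.values() for obj in queue]
                ranked.sort(key=lambda obj: obj.get("score", 0.0), reverse=True)
                return ranked[:top_n]
            if strategy == "C":
                # C) the viewer-managed Program Director Channel, as queued
                return list(channels.get("Program Director Channel", []))
            raise ValueError(f"unknown strategy {strategy!r}")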
  • Picture/UGC Channel 210 is used to post pictures and UGC, movies, audio, etc., created by the viewer(s), from any other internal or external sources, and to view such pictures and UGC with the appropriate viewing player depending on the file type of the content object as posted to the channel.
  • Picture/UGC Channel 210 may be similar in construction and function to Library Channel 190 as described herein with reference to Figure 27.
  • the content objects representing UGC may be stored locally on the viewer system 32, as indicated by storage mechanism 216, which may be similar to database 47, or stored remotely over a network on a dedicated storage mechanism 213 or retained on any of content sources 212a-n.
  • the UGC content objects may be recommended and arranged or queued within the Picture/UGC Channel 210 by recommendation system 35 and distributed for viewing via viewer system 32 in a manner as previously described with regard to other content objects.
  • content objects within the Picture/UGC Channel 210 may be stored in modified formats, e.g. for privacy and security reasons as well as for network accessibility reasons.
  • a viewer is able to edit the Picture/UGC Channel 210 using the left brain interface, for example to change the order of items, delete items, etc.
  • Figure 29B illustrates conceptually an algorithmic process that enables content collection and creation of a Picture/UGC Channel 210; a sketch of player selection by file type is given below.
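  • By way of non-limiting illustration only, the following Python sketch shows one way of choosing an appropriate viewing player from the file type of a posted content object; the player names are hypothetical.

        import mimetypes

        # Hypothetical mapping from broad media type to a viewing player.
        PLAYERS = {"image": "picture_viewer", "video": "video_player", "audio": "audio_player"}

        def player_for(filename: str) -> str:
            """Choose the viewing player based on the posted file's MIME type."""
            mime, _ = mimetypes.guess_type(filename)
            if mime is None:
                return "generic_viewer"
            return PLAYERS.get(mime.split("/")[0], "generic_viewer")

        # Usage
        assert player_for("holiday.jpg") == "picture_viewer"
        assert player_for("clip.mp4") == "video_player"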
  • a seventh type of virtual channel, the Post Channel 220 enables friends, family, coworkers, etc. and other third parties to actively post their pictures or UGC to a channel associated with the viewer and allow viewing of such pictures and UGC with the appropriate viewing player depending on the file type of the content object as posted to the channel.
  • the Post Channel 220 may be similar in construction and function to Picture/UGC Channel 210 as described herein with reference to Figure 29A.
  • the content objects representing third-party or externally generated UGC may be stored locally on the viewer system 32, as indicated by storage mechanism 226, which may be similar to database 47, or stored remotely over a network on a dedicated storage mechanism 223 or retained on any of content sources 222a-n.
  • the UGC content objects may be recommended and arranged or queued within the Post Channel 220 by recommendation system 35 and distributed for viewing via viewer system 32 in a manner as previously described with regard to other content objects.
  • the Post Channel 220 is useful for viewers who wish to enjoy viewing content objects from multiple sources without having an established relationship with such sources. For example, grandparents may have a Post Channel 220 reserved for the pictures and UGC movies posted by their children, grandchildren and/or other family members to Facebook, Twitter, or other media sites. In this way, such viewers can enjoy content sourced from Facebook and Twitter without having to access the internet and establish Facebook, Twitter, or other accounts.
  • recommendations may be forwarded to a viewer's Post Channel 220 via a specific electronic mail address or other handle mechanism associated with the particular viewer system 32.
  • Figure 30B illustrates conceptually an algorithmic process that enables a virtual Post Channel 220.
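  • By way of non-limiting illustration only, the following Python sketch shows how posted items might be routed to a viewer's Post Channel via a dedicated handle such as an electronic mail address; the handle and item fields are hypothetical.

        from collections import defaultdict
        from typing import Dict, List

        # Hypothetical registry: handle (e.g. a dedicated e-mail address) -> Post Channel queue.
        post_channels: Dict[str, List[dict]] = defaultdict(list)

        def post_to_channel(handle: str, item: dict) -> None:
            """Route an item posted by a third party to the viewer registered under the handle."""
            post_channels[handle].append(item)

        # Usage: grandchildren post a clip to the grandparents' Post Channel.
        post_to_channel("grandparents.tv@example.com",
                        {"type": "video", "title": "First steps", "source": "Facebook"})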
  • An eighth type of virtual channel, the Mail Channel 230, which is operatively coupled with one or more of the viewer's electronic mail services, enables right brain hemisphere type content objects, typically attachments associated with electronic messages, e.g. those that contain pictures, graphics, video material, etc., to be viewed on the right brain display 80 of the viewer system 32, as illustrated in Figure 31A.
  • the viewer may be given the option of entering a command with, for example, remote control 88, which enables the complete text of the relevant email message to be viewed as well as email messages which have no attachments.
  • Figure 31B illustrates conceptually an algorithmic process that enables a Mail Channel 230.
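  • By way of non-limiting illustration only, the following Python sketch filters a mailbox down to the attachments suitable for the right brain display (pictures, graphics, video); the message and attachment layout is hypothetical.

        from typing import Dict, List

        VIEWABLE_TYPES = ("image/", "video/")

        def mail_channel_items(messages: List[Dict]) -> List[Dict]:
            """Pair each viewable attachment with the subject of its originating message."""
            items = []
            for msg in messages:
                for att in msg.get("attachments", []):
                    if att.get("mime", "").startswith(VIEWABLE_TYPES):
                        items.append({"subject": msg.get("subject", ""), "attachment": att})
            return items

        # Usage with a stubbed mailbox.
        mailbox = [{"subject": "Holiday photos",
                    "attachments": [{"mime": "image/jpeg", "name": "beach.jpg"}]},
                   {"subject": "Meeting notes", "attachments": []}]
        print(mail_channel_items(mailbox))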
  • Virtual channels 160-230 described herein may be presented to the viewer via display 80 of viewer system 32 either as the primary content object data stream or as a secondary content object data stream, similar to other channels 90A-C, stored in database 48 of modeling system 35 or locally within viewer system 32, which facilitates multidimensional surfing of content using traditional cursor navigation controls as described herein with reference to Figures 16-22.
  • the viewer may navigate in a separate dimension any of the virtual channels 160-230 described herein in addition to the primary and secondary content object data streams on screen 120 of display 80 in a similar manner as described with reference to the recommended content illustrated in Figure 21.
  • the disclosed system also affords the opportunity to provide explicit feedback to the recommendation system in a manner which requires little left brain activity.
  • traditional navigation controls originating from display remotes, e.g. specifically color-coded controls, may be utilized to provide explicit feedback to the recommendation system in a manner which requires little left brain activity.
  • Selection of different color-coded buttons may be used to associate either a negative or a positive valence emotion with the instances of a certain recurrently broadcasted content (e.g. a series) and/or its metadata.
  • selection of a different color coded control may be used to socially share the link to the currently viewed content with the applicable social networks or to provide a gratuity to the author(s) of the content currently viewed or to the recommender of that content.
  • the command controls 240-246 of a typical TV remote 88 or other device are given new functions, as illustrated in Figure 32.
  • the existing typical remote control command controls are part of the available interface hardware and therefore pose minimal set-up effort and a minimal learning curve.
  • the new functions that are associated with the existing command controls are chosen based on the disclosed neuropsychological modeling technique to support the natural, relaxing TV experience. A description of command controls and their assigned operation, based on the neuropsychological modeling technique, is given below.
  • selection of a first colored control 240 may be used to associate negative valence emotion with the instances of a certain recurrently broadcasted content (e.g. a series) and/or its metadata.
  • Such negative valence emotion association may result in that particular recurrent content not being scheduled in a personalized channel and/or a time-shifted content list, so that the content is not recorded for that user.
  • This can be implemented as the red button meaning: "Do not record for time shifting purpose for my profile anymore".
  • Selection of a second colored control 242, e.g. a blue button, may associate positive valence emotion with the instances of a certain recurrently broadcasted content (e.g. a series) and/or its metadata.
  • positive valence emotion association results in that particular recurrent content being scheduled in a personalized channel and/or a time-shifted content list, so that the content is recorded for that user.
  • This can be implemented as the blue button meaning: "Do record for time shifting purpose for my profile".
  • Selection of a third colored control 244, e.g. a yellow button, may socially share the link to the currently viewed content with the applicable social networks.
  • the applicable social networks may be Facebook, Linkedin, Twitter, blog, email or other.
  • a practical implementation may be a preformatted email or other electronic message that is sent from a general or personalized account to a user-predetermined account, which may be his own account, for manual processing and actual publishing or communication, or to an account which causes the publishing or communication to occur automatically.
  • Selection of a fourth colored control 246, e.g. a green button, may associate gratitude with the author(s) of the content currently viewed or with the recommender of that content. Such gratitude may result in the donation of a gratuity or thank-you fee.
  • the distinction between author and recommender may be made based on the home content of the recommendation channel being viewed or the recommended content itself, or may be based on a simple iconic viewable interface popping up after the button has been pushed.
  • the amount of gratuity can be pre-set automatically and changed based on a left brain interface as part of the TV tandem interface.
  • the backend payment and management system is created in order to ensure correct and confidential handling of author, recommender and service provider (the license holder to this patent) credentials. In case donations are not correctly attributable to an author or recommender, they can flow to a non-profit fund. A sketch of the button-to-action mapping described above is given below.
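  • By way of non-limiting illustration only, the following Python sketch maps the four color-coded controls to the feedback actions described above; the function names and messages are hypothetical, and the actual recording, sharing and payment back-ends are out of scope.

        from typing import Callable, Dict

        def do_not_record(content_id: str) -> str:
            return f"stop time-shift recording of {content_id} for this profile"

        def do_record(content_id: str) -> str:
            return f"record {content_id} for time shifting for this profile"

        def share(content_id: str) -> str:
            return f"share a link to {content_id} via a preformatted message"

        def tip(content_id: str) -> str:
            return f"send a gratuity to the author/recommender of {content_id}"

        # Hypothetical mapping of the color-coded remote controls 240-246 to actions.
        BUTTON_ACTIONS: Dict[str, Callable[[str], str]] = {
            "red": do_not_record,    # negative valence: drop from the personalized schedule
            "blue": do_record,       # positive valence: schedule/record
            "yellow": share,         # social sharing of the current content link
            "green": tip,            # gratitude towards author or recommender
        }

        def handle_button(color: str, content_id: str) -> str:
            return BUTTON_ACTIONS[color](content_id)

        print(handle_button("blue", "series-episode-42"))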
  • Explicit right brain feedback becomes even more powerful when the red and blue buttons are associated not just with a particular content object, but with one or more metadata values associated with the content object.
  • the metadata associated with that content object may be visually displayed at the bottom of the screen, e.g. a menu bar.
  • Such bar may show a picture of the leading actor, e.g. Jack Nicholson, next to a graphic representation characterizing a genre, e.g. horror movie, etc.
  • the user can then select what in particular he likes or dislikes about the content object using the explicit feedback buttons or commands; thereafter, the fear and desire components related to the selected metadata are updated accordingly.
  • a two-position rocker switch may be utilized in which one position is used to designate a negative valence emotion with content and/or its metadata while the other position is used to designate a positive valence emotion with content and/or its metadata.
  • a control itself need not be colored but could have a color designation of any shape, color, graphic pattern or image affixed thereto.
  • the choice of colors, patterns or images may be at designer's discretion.
  • any physical control on either the remote 88 or a virtual control on the user interface such as a PDA or laptop through which the viewer communicates with the primary right brain display 80 may be utilized, including the traditional navigation cursor controls in a configuration allowing for multi-mode functionality, as well as traditional keyboards, gesture recognition user interfaces or voice command user interfaces.
  • B2B sales: an important distinction is made between two types of B2B sales: new application sales and known application sales.
  • in new application sales, the buyer sees the offering of the sales person as something that is new to him, either because the type of product/service or its application is new to him.
  • in known application sales, the buyer sees the offering of the sales person as something he is familiar with, either because he is familiar with the type of product/service or with the kind of application.
  • whether a particular sales project is considered a new or known application sales project depends on the view of the buyer. It is up to the sales person to assess the buyers' view.
  • in new application B2B sales, the buying cycle starts with the seeding and nurturing of Desire; this is optimally done using mainly visual sales/marketing material and storytelling, which appeal to the right hemisphere and allow Desire to grow.
  • new application B2B sales are therefore referred to as desire-based B2B sales. This does not mean, however, that this type of sales does not involve any hedging of fears.
  • once Desire has grown to a significant level, and the buyer buys into the vision and is willing to change, Fear still needs to be hedged.
  • the buying cycle for desire-based B2B sales is represented in Figure 34.
  • in known application B2B sales, Desire may be required, but usually to a much lesser extent.
  • Known application B2B sales is mostly about hedging fears, hence it is referred to as "fear-based" selling.
  • the buying cycle for fear-based B2B sales is represented in Figure 35.
  • Fear consists of both private and social Fears. These private Fears are typically hedged during the second phase.
  • the buyer typically wants to find out if a product or service will actually work for him and/or if the option, proposed to him by the sales person, is the overall best option, taking into account alternatives, competitive offerings, etc.
  • the buyer is best served with data and results that address his Fears and that are mostly textual and/or analytic, like specification lists, demo reports and the like, since these will mostly appeal to his left hemisphere and allow him to converge his Fears down to an acceptable level.
  • it is the job of the sales person to assess the buyers' Fears and then help him address them. While the focus in this phase lies on the reduction of Fears, the sales person still needs to keep an eye on the Desire level, making sure it stays high enough.
  • the hedging of Fear can be done, e.g., by going through the concrete lists of needs and showing that each one of them is covered. It is important to note that the sales person needs to keep monitoring the Fear and Desire levels throughout the complete buying cycle. E.g. in the third phase, the sales person may actually need to increase Fear in order to be able to close the deal, since a B2B buyer who feels too much in control or too relaxed may unnecessarily delay a purchasing decision or put too much pressure on the price.
  • the buying cycle of B2C sales is represented in Figure 36.
  • Desire needs to grow as fast as Fear diminishes.
  • Social fear hedging is limited to non-existent.
  • the different buying cycles with their respective, numbered stages can also be mapped onto the mood disk, as shown in Figure 6C.
  • the purchase and sale of a company, as part of an M&A transaction, resembles a desire-based B2B sales process.
  • the selling party may lead the purchasing party through the B2B sales process; however, it may also be the buying party who leads the selling stakeholders through the stages of the buying process, to sell an integrated vision for both companies and create buy-in for a common cause.
  • Such process is very similar to how a B2B sales person leads a buying organization through the buying cycle in a classic B2B sales process.
  • Figure 37 illustrates conceptually the elements of an embodiment of a modeling system 35A necessary for the derivation of the relationship between metadata associated with a sales object and an individual buyer model relative to the ranking of the sales object associated with the particular sales channel model.
  • B2B buyer application 32A, sales offerings 60A, buyer models 46A, rankings/sales channels 48A, sale objects 47A, behavior modeler 49A, ranking application 42A and neuropsychological modeling engine 41A may be structurally and functionally similar to viewer application 32, content material 60, viewer models 46, rankings/channel models 48, content objects 47, behavior modeler 49, ranking application 42 and neuropsychological modeling engine 41, respectively, described with reference to Figures 9A and 9D disclosed herein, including the respective algorithmic processes and communication protocols with either similar or dissimilar data structures.
  • each sales object stored in database 47A has associated therewith a metadata file, which may be similar or dissimilar to file 75, which contains various data parameters describing the content of the file, such as the format, product ID, specifications, target customer description, price, special pricing/discounts, duration (subscription services), special terms and conditions, licenses/working information, etc. Any number of different data structure formats may be utilized for this particular structure.
  • Such metadata files may also be stored in database 47A; an illustrative sketch of such a metadata record is given below.
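  • By way of non-limiting illustration only, the following Python sketch shows one possible layout of such a sales object metadata record; the field names and example values are hypothetical, and any data structure format may be used instead.

        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class SalesObjectMetadata:
            """Hypothetical metadata record for a sales object stored in database 47A."""
            product_id: str
            format: str
            specifications: Dict[str, str] = field(default_factory=dict)
            target_customer: str = ""
            price: float = 0.0
            special_pricing: Optional[str] = None
            duration_months: Optional[int] = None    # e.g. for subscription services
            terms_and_conditions: str = ""
            licenses: List[str] = field(default_factory=list)

        # Example record (illustrative values only).
        example = SalesObjectMetadata(product_id="SKU-001", format="subscription",
                                      price=499.0, duration_months=12)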
  • each individual buyer associated with a B2B buyer application 32A has associated therewith a buyer model, which may be similar or dissimilar to model 70 which contains data describing the behavior model.
  • the process flow between components of modeling system 35A to update a buyer's model and sales channel model, retrieve new sales objects and determine if such objects are suitable for ranking according to the system model of the buyer's emotional motivation may be similar to those described previously with reference to Figures 9B-C and 9E-F.
  • Behavior modeler 49A retrieves from database 46A the model associated with a specific buyer and the metadata file defining the sales channel. In addition, behavior modeler 49A also retrieves from database 47A, the metadata file describing the sales object.
  • behavior modeler 49A compares the received event data with the metadata file of the sales object and the current buyer model and modifies the sales channel model(s) appropriately (indicated by the circular arrow within behavior modeler 49A).
  • the buyer model is modified and, optionally, the sales channel model may also be modified, as would be the case in sales channel management.
  • modifying the buyer model may be performed by mapping each event onto the mood disc 20 according to a prescribed rule, e.g. the purchase of a sales object results in a predefined φ and m value (or equivalent Fear coordinate f and Desire coordinate d), described previously; a hedged sketch of such a mapping is given below.
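  • By way of non-limiting illustration only, and assuming that the mood disc 20 uses a polar parameterization with angle φ and magnitude m, the following Python sketch converts a predefined (φ, m) event value into the equivalent Fear coordinate f and Desire coordinate d using a standard polar-to-Cartesian conversion; the conversion and the example values are assumptions, not the prescribed rule of the disclosure.

        import math
        from typing import Tuple

        def event_to_mood_coordinates(phi: float, m: float) -> Tuple[float, float]:
            """Map a (φ, m) value on the mood disc to (f, d) coordinates.

            Assumes a standard polar-to-Cartesian conversion; the actual
            prescribed rule may differ."""
            f = m * math.cos(phi)
            d = m * math.sin(phi)
            return f, d

        # Example: a purchase event mapped with an illustrative predefined rule.
        PURCHASE_RULE = (math.pi / 4, 0.8)    # (φ, m) -- illustrative values only
        f, d = event_to_mood_coordinates(*PURCHASE_RULE)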
  • the neuropsychological model derived herein and the modeling system 35 disclosed herein may be applied to numerous other applications, including, but not limited to, any of the following:
  • 1) an automatic internet bank or investment fund, 2) a tandem interface for reading and/or researching and/or writing, 3) a tandem user interface for an automatic internet-enabled buying system for recurrent consumer purchases, or 4) an automatic trading system for securities; each may utilize systems which are structurally and functionally similar to those described with reference to Figures 9A, 9D and 37 disclosed herein, including the respective algorithmic processes and communication protocols with either similar or dissimilar data structures.
  • any two elements which communicate over a network or directly may utilize either a push or a pull technique in addition to any specific communication protocol or technique described herein.
  • any existing or future network or communications infrastructure technologies may be utilized, including any combination of public and private networks.
  • while specific algorithmic flow diagrams or data structures may have been illustrated, these are for exemplary purposes only; other processes which achieve the same functions or utilize different data structures or formats are contemplated to be within the scope of the concepts described herein. As such, the exemplary embodiments described herein are for illustrative purposes and are not meant to be limiting.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Primary Health Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A collaborative cloud digital video recorder (ccDVR) system, comprising a cloud storage system and a plurality of participating DVR client devices, acts collaboratively as a single common entity in which community members mutually authorize one another to upload, remotely store, and download authorized content for time-shifted viewing, in a manner that rigorously protects the legal rights of content owners while overcoming potential physical obstacles such as limited bandwidth, power failures, incomplete uploads or downloads of content, limited cloud storage capacity, etc. The collaborative cloud DVR community collaboratively shares bandwidth and cloud storage capacity among a plurality of DVR viewers/users, with each owner/user of a DVR client device authorizing the use of his or her individual DVR client device by a cloud storage system server and by any other owner/user of a DVR client device in the respective service community, and receiving similar permission in return, to promote the convenience of cloud storage in an authorized manner.
PCT/EP2012/057585 2011-04-27 2012-04-25 Procédé et appareil de téléchargement de contenu en collaboration WO2012146627A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201280030109.9A CN103718205A (zh) 2011-04-27 2012-04-25 用于内容的协同上载的方法及装置
SG2013079389A SG194633A1 (en) 2011-04-27 2012-04-25 Method and apparatus for collaborative upload of content
CA2834351A CA2834351A1 (fr) 2011-04-27 2012-04-25 Procede et appareil de telechargement de contenu en collaboration
EP12716455.6A EP2702539A1 (fr) 2011-04-27 2012-04-25 Procédé et appareil de téléchargement de contenu en collaboration
JP2014506851A JP2014516503A (ja) 2011-04-27 2012-04-25 コンテンツの協調型アップロードの方法および装置
KR1020137031406A KR20140041500A (ko) 2011-04-27 2012-04-25 콘텐츠의 공동 업로드를 위한 방법 및 장치

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201161479648P 2011-04-27 2011-04-27
US61/479,648 2011-04-27
US201161540259P 2011-09-28 2011-09-28
US61/540,259 2011-09-28
US201161540812P 2011-09-29 2011-09-29
US61/540,812 2011-09-29
PCT/EP2011/068485 WO2012052559A1 (fr) 2010-10-21 2011-10-21 Procédé et appareil de modélisation neuropsychologique d'expérience humaine et de comportement d'achat
EPPCT/EP2011/068485 2011-10-21

Publications (1)

Publication Number Publication Date
WO2012146627A1 true WO2012146627A1 (fr) 2012-11-01

Family

ID=47071621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/057585 WO2012146627A1 (fr) 2011-04-27 2012-04-25 Procédé et appareil de téléchargement de contenu en collaboration

Country Status (6)

Country Link
JP (1) JP2014516503A (fr)
KR (1) KR20140041500A (fr)
CN (1) CN103718205A (fr)
CA (1) CA2834351A1 (fr)
SG (1) SG194633A1 (fr)
WO (1) WO2012146627A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150144189A (ko) 2014-06-16 2015-12-24 주식회사 테라텍 클라우드 기반 스마트워크 ecm에서 프로젝트관리를 위한 시스템 및 제어방법
US9257120B1 (en) * 2014-07-18 2016-02-09 Google Inc. Speaker verification using co-location information
JP6540348B2 (ja) 2015-08-07 2019-07-10 コニカミノルタ株式会社 放射線撮影システム
JP2019147036A (ja) * 2019-06-13 2019-09-05 コニカミノルタ株式会社 放射線撮影システム、コンソール及びプログラム
CN113377945B (zh) * 2021-06-11 2023-04-07 成都工物科云科技有限公司 一种面向项目需求的科技专家智能推荐方法
CN113468386B (zh) * 2021-07-01 2023-10-20 南京邮电大学 一种基于哈希学习的跨模态材料表面检索方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020147975A1 (en) * 2001-04-06 2002-10-10 Seo Beom Joo System and method of providing television program sharing service
US20030086023A1 (en) * 2001-11-06 2003-05-08 Lg Electronics Inc. Personal video recorder including a network interface
US20070009235A1 (en) * 2005-07-07 2007-01-11 Eric Walters System and method for digital content retrieval
US20070094702A1 (en) * 2005-10-24 2007-04-26 Broadcom Corporation Method and apparatus for remote personal video storage and retrieval
WO2009110909A1 (fr) * 2008-03-07 2009-09-11 Hewlett-Packard Development Company L.P. Élément de déchargement d'enregistreur personnel de vidéo
US20110038613A1 (en) * 2009-08-13 2011-02-17 Buchheit Brian K Remote storage of digital media broadcasts for time-shifted playback on personal digital media devices

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060109854A1 (en) * 2004-11-22 2006-05-25 Cancel Ramon C Systems and methods to share information between digital video recorders

Also Published As

Publication number Publication date
CN103718205A (zh) 2014-04-09
KR20140041500A (ko) 2014-04-04
CA2834351A1 (fr) 2012-11-01
SG194633A1 (en) 2013-12-30
JP2014516503A (ja) 2014-07-10

Similar Documents

Publication Publication Date Title
US9141982B2 (en) Method and apparatus for collaborative upload of content
US8433815B2 (en) Method and apparatus for collaborative upload of content
US9628539B2 (en) Method and apparatus for distributed upload of content
Lotz Netflix and streaming video: The business of subscriber-funded video on demand
McKelvey et al. Discoverability: Toward a definition of content discovery through platforms
Colbjørnsen The streaming network: Conceptualizing distribution economy, technology, and power in streaming media services
Hallinan et al. Recommended for you: The Netflix Prize and the production of algorithmic culture
US20130144727A1 (en) Comprehensive method and apparatus to enable viewers to immediately purchase or reserve for future purchase goods and services which appear on a public broadcast
US20130166382A1 (en) System For Selling Products Based On Product Collections Represented In Video
US20100299603A1 (en) User-Customized Subject-Categorized Website Entertainment Database
WO2012146627A1 (fr) Procédé et appareil de téléchargement de contenu en collaboration
BE1020637A3 (nl) Een werkwijze voor verdeeld uploaden van inhoud.
EP2702539A1 (fr) Procédé et appareil de téléchargement de contenu en collaboration
Ruiz et al. UX Aspects of AI Principles: The Recommender System of VoD Platforms
Ramachandran Behavior-based popularity ranking on amazon video
BE1020638A3 (nl) Een werkwijze voor gedistribueerde vertraagde streaming van inhoud.
BE1020636A3 (nl) Een werkwijze voor verdeeld vertraagde streaming van inhoud.
BE1020639A3 (nl) Een systeem voor het selecteren en het bekijken van programma-inhoud met behulp van gebruikersinterfaces.
US20220335507A1 (en) Systems and methods for an integrated video content discovery, selling, and buying platform
BE1020635A3 (nl) Een werkwijze voor gebruik met een video-weergavesysteem bestaande uit een videoscherm en een veelvoud aan cursornavigatiebedieningen.
Hoàng et al. Factors affecting customers' intention to use FPT Play platform in Hanoi area

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12716455

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2834351

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2014506851

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20137031406

Country of ref document: KR

Kind code of ref document: A