US20230097803A1 - Hybrid Audio/Visual Imagery Entertainment System With Live Audio Stream Playout And Separate Live Or Prerecorded Visual Imagery Stream Playout

Info

Publication number
US20230097803A1
Authority
US
United States
Prior art keywords
visual imagery
audio
feed
live
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/952,724
Inventor
Robert A. Oklejas
Dragan Cerovcevic
Roy Radakovich
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Escapes Network LLC
Original Assignee
Escapes Network LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Escapes Network LLC
Priority to US17/952,724
Assigned to eScapes Network LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CEROVCEVIC, DRAGAN; OKLEJAS, ROBERT A.; RADAKOVICH, ROY
Publication of US20230097803A1
Priority to CA3201092A
Legal status: Pending

Classifications

    • H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/2187: Live feed
    • H04N 21/2665: Gathering content from different sources, e.g. Internet and satellite
    • H04N 21/26208: Content or additional data distribution scheduling, the scheduling operation being performed under constraints
    • H04N 21/2625: Content or additional data distribution scheduling for delaying content or additional data distribution, e.g. because of an extended sport event
    • H04N 21/26275: Content or additional data distribution scheduling for distributing content or additional data in a staggered manner, e.g. repeating movies on different channels in a time-staggered manner in a near video on demand system
    • H04N 21/4622: Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • H04N 21/47202: End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/812: Monomedia components involving advertisement data

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A hybrid audio and visual imagery entertainment system combines visual imagery, obtained as live visual imagery or prerecorded visual imagery, with the transmission of live audio, presented on a display for therapeutic benefit to a user. The visual imagery and the audio can each be viewed or heard on its own; however, the system is designed for them to be viewed and heard simultaneously, with the audio and visual imagery delivered as separate, unsynchronized streams to a display at a remote location for viewing and hearing by the user.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present disclosure claims priority from U.S. Provisional App. Ser. No. 63/249,223, filed Sep. 28, 2021, the entirety of which is herein incorporated by reference.
  • FIELD OF THE DISCLOSURE
  • The subject disclosure relates to a hybrid audio/visual imagery entertainment system, particularly one adapted to delivering therapeutic benefits to the viewer.
  • BACKGROUND OF THE DISCLOSURE AND ADVANTAGES
  • In Enerchi Health, a web based health information service, it was stated that “People generally feel that they live cluttered, hectic, overwhelming lives; between work, family, and friends, or any of the dozens of things that fill our days and tax our body and mind, rarely do we take even a small part of our waking life entirely for ourselves, apart from unhealthy “escapist” kinds of relaxation like TV that don't allow the mind to settle down. The almost permanent state of stimulation and stress inevitably has dire consequences for both mental and physical health, from higher blood pressure to compromised immune systems, leaving us vulnerable to any number of conditions. Making a priority of taking “time out” every day to simply withdraw from the whole mess can be a big step toward improving health.”
  • Frantic programming seeks to draw the viewer to ever narrower fields of interests, but with more intensely focused programming. The result is that there are dozens of entire networks devoted 24 hours per day, seven days per week to a single subject; i.e., The Food Network, History Channel, HGTV, etc. Far from producing a mental “time out”, current programming adds gasoline to the fire of stress and information overload.
  • Moreover, there is a large and growing demographic segment that seeks to escape TV entirely, or to use it only occasionally as a quick source of news, in favor of a more tranquil and relaxing lifestyle. This demographic segment includes a mature, upscale audience that appreciates and desires to experience tranquil, relaxing places, preferably with beautiful, breathtaking scenery. Still further, another segment seeks to enhance their ability to combine audio and visual imagery content in a more aesthetically pleasing manner than what can be seen or selected using traditional television and cable channels.
  • SUMMARY OF THE DISCLOSURE
  • In one aspect of the subject disclosure, a hybrid audio and visual imagery system provides separate and unsynchronized audio and visual imagery streams to a display at a remote location for viewing by a user.
  • The system includes a first feed (i.e., an audio feed) including an audio signal conveying continuously live sound of a first subject. The system also includes a second feed (i.e., a live visual imagery stream or a prerecorded video feed) including a visual imagery signal conveying live or prerecorded visual imagery of a second subject supplemental and separate to the conveyed live sound of the first feed, as well as a control device located at the remote location, the control signals from the user received by the control device.
  • The system also includes a first control node in communication with the control device and located at a central location at which the first feed is received, the first control node having a first input comprising the first feed and a first portion of the control signals associated with the audio signal and having a first output signal comprising a first user feed, the first output signal from the first control node received by the control device. Still further, the system includes a second control node in communication with the control device and located at the central location at which the second feed is received, the second control node having a second input comprising the second feed and a second portion of the control signals associated with the visual imagery signal, and having a second output signal distinct and separate from the first output signal comprising a second user feed, the second output signal from the second control node received by the control device.
  • Finally, the system includes a display located at the remote location and coupled to the control device, the sound of the first subject conveyed by the first user feed and the visual imagery of the second subject conveyed by the second user feed independently outputted by the display.
  • In one aspect, the user or viewer controls the operation of the system by separately selecting one or both of the live audio feed and either the live visual imagery feed or prerecorded video feed. In this way, the reproduced audio via the audio feed is not synchronized to the reproduced visual imagery via the visual imagery feed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a diagram of one embodiment of the subject disclosure as described below.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • With reference to FIG. 1, and in operation, the subject disclosure provides a hybrid audio and visual imagery system 10 for providing separate and unsynchronized live audio (i.e., a live audio stream) and either prerecorded or live visual imagery (i.e., a live visual imagery stream or a prerecorded video stream) to a display 90, which includes an associated speaker 94, viewable at a remote location 92 by a user in response to a control signal sent by the user through a control device 22.
  • The system 10 includes an audio source 16, a visual imagery source 18, a first control node 50, a second control node 60, and the control device 22.
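  • As a rough, non-limiting illustration of how the components named above relate to one another, the following Python sketch models the system 10 as plain data objects. All class and field names are assumptions introduced here for readability; the disclosure itself does not prescribe any particular software interface.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class AudioSource:                 # audio source 16: "live" sound of a first subject 12
        name: str
        read_chunk: Callable[[], bytes]          # stands in for an audio capturing device 26

    @dataclass
    class VisualImagerySource:         # visual imagery source 18: live camera 42 or prerecorded store 44
        name: str
        live: bool
        read_frame: Callable[[], bytes]

    @dataclass
    class ControlDevice:               # control device 22 at the remote location 92
        audio_selection: int = 0                 # "first portion" of the control signals
        imagery_selection: int = 0               # "second portion" of the control signals

    @dataclass
    class HybridSystem:                # system 10: two independent signal paths to one display 90
        audio_sources: List[AudioSource] = field(default_factory=list)            # "n" sources
        imagery_sources: List[VisualImagerySource] = field(default_factory=list)  # "m" sources
        control: ControlDevice = field(default_factory=ControlDevice)

    system_10 = HybridSystem(
        audio_sources=[AudioSource("shoreline surf", lambda: b"\x00\x01")],
        imagery_sources=[VisualImagerySource("alpine meadow", live=True, read_frame=lambda: b"frame")],
    )
    print(len(system_10.audio_sources), len(system_10.imagery_sources))   # -> 1 1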
  • The control device 22 may include a hand-held remote controller as well as an associated digital cable or satellite transceiver unit respectively controlled thereby, to which the user display 90 is connected. The control device 22 allows the user to navigate through a series of menus (not shown) presented on the respective display 90.
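  • One way to picture the split between the audio-related and imagery-related portions of the control signals described throughout this disclosure is as two independent message types emitted by the control device 22, as sketched below. This is only an assumed illustration; the message names and fields are invented here, and the patent does not specify any wire format.

    from dataclasses import dataclass
    from typing import Optional, Union

    @dataclass
    class AudioControl:                # "first portion": handled by the first control node 50
        source_index: Optional[int] = None       # pick one of the "n" audio feeds 36
        muted: Optional[bool] = None
        volume: Optional[float] = None           # 0.0 .. 1.0

    @dataclass
    class ImageryControl:              # "second portion": handled by the second control node 60
        source_index: Optional[int] = None       # pick one of the "m" visual imagery feeds 46
        playing: Optional[bool] = None           # start/stop (meaningful for prerecorded video)

    ControlSignal = Union[AudioControl, ImageryControl]

    def route(signal: ControlSignal) -> str:
        """Decide which control node a given message from the control device 22 belongs to."""
        return "first control node 50" if isinstance(signal, AudioControl) else "second control node 60"

    print(route(AudioControl(source_index=2)))      # -> first control node 50
    print(route(ImageryControl(playing=False)))     # -> second control node 60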
  • The audio source 16 is in the form of “live” sound captured at a respective first location 24 (such as a sound studio or the like) by an audio capturing device 26 from a subject 12 (i.e., a first subject 12). Stated another way, for the purposes of this disclosure, the audio source 16 refers to the “live” sound captured by the audio capturing device 26 that is contemporaneously heard through the display by the user, as described further below. “Live” may also refer to any source whereby the user/listener does not directly select the audio, but the audio instead comes from a separate or remote source, such as a computer-generated audio stream.
  • In certain embodiments, the audio source 16 refers to the “live” sound captured by the audio capturing device 26 from a single first subject 12. However, in other embodiments, the audio source 16 refers to the “live” sound from multiple distinct first subjects 12 that are each captured individually by a single audio capturing device 26 or by multiple distinct audio capturing devices 26 (i.e., the audio source 16 refers to “n” audio sources 16, with “n” being one (for a single audio source 16) or any number greater than one (for multiple audio sources 16), respectively captured by one or more audio capturing devices 26). Collectively, hereinafter, the “audio source 16” may refer to any one or more of the “n” audio sources 16 provided herein.
  • Each audio capturing device 26 can be at least one stand-alone microphone 28, and/or at least one microphone provided as part of at least one camera 30 (i.e., a camera 30 including a microphone and illustrated in FIG. 1 as a “camera microphone” with lead line 30), which can hereinafter be referred to as either camera 30 or camera microphone 30, or any other audio device 32 or devices that can capture “live” sound generated from the first subject 12 and contemporaneously transmit the captured sound in the form of an audio output signal (i.e., an audio feed 36 including an audio signal 38) to a first control node 50 at a central location 55, described further below. The number of distinct audio sources 16 corresponds to the number of distinct audio feeds 36 and audio signals 38 (i.e., when there are “n” audio sources 16, there are “n” distinct audio feeds 36 and audio signals 38).
  • The cameras 30 that include the microphones may be hand-held cameras or remotely controlled High Definition Television (HDTV) system cameras viewing the first subject 12. The microphone may be a microphone working in conjunction with the HDTV system camera 30 at the first location 24 to acquire the local environmental sounds that are being produced by the first subject 12. Still further audio devices 32 for capturing audio may include, for example, digital radio, internet radio, or a live curator/creator audio stream.
  • The sound captured by any or all of the audio capturing devices 26 from the first subject 12 may include, but is not limited to, musical selections and, optionally, a human voice spoken by an “on-air” personality, produced live at the first location 24. Stated another way, the first subject 12 does not refer strictly to a sound generated by a human (such as talking or singing), but to any source that is capable of generating sound captured by the audio capturing device 26 at the first location 24. Such sound from the first subject 12 may be deliberately selected for its aesthetically appealing qualities that produce a relaxation or calming effect on the human psyche.
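  • The contemporaneous push of captured sound from an audio capturing device 26 (a microphone 28, camera microphone 30, or other audio device 32) toward the first control node 50, described in the preceding paragraphs, can be sketched roughly as follows. The microphone is simulated with a tone generator, and the function and buffer names are assumptions; a real deployment would wrap actual capture hardware and a transport such as digital satellite or cable.

    import itertools
    import math
    import struct
    from typing import Iterator

    def simulated_microphone(freq_hz: float = 220.0, rate: int = 8000, chunk: int = 800) -> Iterator[bytes]:
        """Yield successive chunks of 16-bit PCM, standing in for live captured sound."""
        for start in itertools.count(step=chunk):
            samples = (int(3000 * math.sin(2 * math.pi * freq_hz * (start + i) / rate)) for i in range(chunk))
            yield struct.pack(f"<{chunk}h", *samples)

    def transmit_feed(feed_id: int, source: Iterator[bytes], node_buffer: list, chunks: int = 3) -> None:
        """Contemporaneously forward captured chunks (the audio signal 38) toward the control node."""
        for _ in range(chunks):
            node_buffer.append((feed_id, next(source)))

    first_control_node_buffer: list = []      # stands in for reception at the central location 55
    transmit_feed(feed_id=1, source=simulated_microphone(), node_buffer=first_control_node_buffer)
    print(f"node received {len(first_control_node_buffer)} chunks from audio feed 36")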
  • The visual imagery source 18 is in the form of visual imagery of a second subject 14 that may be produced at the respective first location 24 or at another location 34 (i.e., a second location 34 such as a television studio or a film set or the like).
  • The visual imagery of the visual imagery source 18 may be in the form of “live” visual imagery of the second subject 14 that is captured by a visual imagery capturing device 40, or alternatively may be in the form of “prerecorded” visual imagery that has previously been captured and stored on the visual imagery capturing device 40. The “prerecorded” visual imagery may be a still shot or a video recording of a predetermined length. Similar to the audio sound, the visual imagery from the second subject 14 may be deliberately selected for its aesthetically appealing qualities that produce a relaxation or calming effect on the human psyche.
  • For “live” visual imagery, the visual imagery capturing device 40 is in the form of one or more live video capturing devices 42, such as live video capturing cameras 42 or any other device or devices that can capture “live” visual imagery generated from the second subject 14 and contemporaneously transmit the captured visual imagery in the form of a visual imagery output signal (i.e., a visual imagery feed 46 including a visual imagery signal 48), as directed by the control signal from the user, to a second control node 60 at the central location 55, described further below. The cameras 42 utilized as a live video capturing device 42 may be hand-held cameras or remotely controlled HDTV system cameras viewing the second subject 14.
  • Similar to the audio source 16, in certain embodiments the “live” visual imagery source 18 refers to “live” visual imagery from multiple distinct second subjects 14 that are each captured individually by a single visual imagery capturing device 40 or by multiple distinct visual imagery capturing devices 40 (i.e., the “live” visual imagery source 18 refers to “m” visual imagery sources 18, with “m” being one (for a single visual imagery source 18) or greater than one (for multiple visual imagery sources 18), captured by the one or more visual imagery capturing devices 40 and in particular by the one or more live video capturing devices 42). Collectively, hereinafter, the “visual imagery source 18” may refer to any one or more of the “m” visual imagery sources 18 provided herein.
  • For “prerecorded” visual imagery, the visual imagery capturing device 40 includes one or more prerecorded visual imagery storage devices 44 that store prerecorded (i.e., previously recorded) visual imagery, as described above, generated from the second subject 14, and can subsequently transmit the stored visual imagery in the form of a visual imagery output signal (i.e., the visual imagery feed 46 including the visual imagery signal 48), as directed by the control signal from the user, to a second control node 60 at the central location 55, described further below. Accordingly, the term “subsequently transmit” as it relates to the prerecorded visual imagery refers to a transmission delayed from the time at which the video was recorded and stored onto the device 44, which may be as short as a few seconds or as long as multiple years or more.
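  • The delayed, on-demand nature of the prerecorded path can be pictured as a small library keyed by clip title, from which the second control node 60 pulls a stored clip whenever the user's control signal requests it. The sketch below is an assumption-level illustration only; the class and method names are invented for clarity.

    import time
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class PrerecordedStore:            # stands in for a prerecorded visual imagery storage device 44
        clips: Dict[str, List[bytes]] = field(default_factory=dict)
        recorded_at: Dict[str, float] = field(default_factory=dict)

        def record(self, title: str, frames: List[bytes]) -> None:
            """Store captured visual imagery of the second subject 14 for later playout."""
            self.clips[title] = frames
            self.recorded_at[title] = time.time()

        def transmit(self, title: str) -> List[bytes]:
            """Subsequently transmit the stored clip as the visual imagery feed 46."""
            delay = time.time() - self.recorded_at[title]
            print(f"transmitting '{title}' recorded {delay:.1f} s ago")
            return self.clips[title]

    store = PrerecordedStore()
    store.record("mountain stream", [b"frame-1", b"frame-2", b"frame-3"])
    feed_46 = store.transmit("mountain stream")     # pulled by the second control node 60 on demand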
  • As noted above, the system 10 also includes a first control node 50 and a second control node 60 that may each be located at a central location 55 as shown in FIG. 1, such as a central studio. Alternatively, the first and second control nodes 50, 60 may be located at two distinct locations, such as a first and second studio (not shown).
  • The first control node 50 is coupled to the audio source 16, and in particular is coupled to the audio capturing device 26 of each of the “n” audio sources 16 (i.e., is coupled to one or more of the microphones 28, the microphones of the one or more cameras 30, or the other audio devices 32 of each respective one of the “n” audio sources 16 as described above), including, for example, digital radio, internet radio, or a live curator/creator audio stream as described above.
  • The first control node 50 receives a first portion of the control signals from the control device 22 associated with audio control of the display 90 and also receives the “n” audio feeds 36 (i.e., a first feed 36) in the form of the “n” audio signals 38 from the “n” audio sources 16 by either digital satellite or digital cable and contemporaneously provides an output signal including the user audio feed 70 to the display 90 as a function of the received first portion of the control signal. In particular, the first portion of the control signals selects one of the “n” audio signals 38 received at the first control node 50, with the first control node 50 contemporaneously providing an output signal including the user audio feed 70 to the display 90 corresponding to the selected one audio signal 38 of the audio feed 36 from the respective one audio source 16 as a function of the received first portion of the control signal.
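  • A minimal sketch of the selection behavior just described for the first control node 50 (pass exactly one of the “n” incoming audio signals 38 through as the user audio feed 70, without mixing) might look like the following; the function and variable names are assumptions.

    from typing import Dict, Iterator

    def first_control_node(audio_feeds: Dict[int, Iterator[bytes]], selected_index: int) -> Iterator[bytes]:
        """Pass through only the selected audio feed 36; the others are ignored, not mixed."""
        if selected_index not in audio_feeds:
            raise ValueError(f"no audio source 16 with index {selected_index}")
        return audio_feeds[selected_index]          # becomes the user audio feed 70

    # Example with stand-in feeds (each would normally originate from an audio capturing device 26).
    feeds_36 = {
        1: iter([b"surf", b"surf", b"surf"]),
        2: iter([b"rain", b"rain", b"rain"]),
    }
    user_audio_feed_70 = first_control_node(feeds_36, selected_index=2)
    print(next(user_audio_feed_70))                 # -> b'rain'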
  • The display 90 reproduces the “live” sounds conveyed by the user audio feed 70 that correspond to the first portion of the control signal received (and corresponding to the selected one audio signal 38) and broadcasts the audio via a speaker or the like included on the display 90 so that it can be heard by the user.
  • The audio that is heard by the user through the display 90 is the “live” sound that is produced by the first subject 12 as one of the respective “n” audio sources 16 captured by the audio capturing device 26 associated with the respective one chosen audio source 16, contemporaneously sent from the respective one audio source 16 to the first control node 50 via the first feed 36 corresponding to the selected one audio signal 38, contemporaneously sent from the first control node 50 to the display 90 via the user audio feed 70, and contemporaneously heard through the speakers of the display 90. The portion of the control signals sent by the user through the control device 22, in addition to being able to select one of the respective “n” audio feeds 36 from the respective one audio source 16, can be used to turn on or off the audio on the display 90 or to control the volume of the generated audio from the display 90. However, the user does not control the content of the “live” audio of the audio feed 36 from the respective one audio source 16, but instead simply hears the sound contemporaneously captured by the audio capturing device 26 from the first subject 12 of the audio feed 36 from the respective one audio source 16. Accordingly, the audio that is heard by the user through the display 90 is akin to a “live” radio broadcast from the first subject 12 of the audio feed 36 from the respective one audio source 16. However, the user can control what audio is actually heard through the display 90 by selecting from the potential “live” audio sources 16 through the control device 22, and thus different genres of audio that are available from the first subject 12 and associated with the different “live” audio sources can be selected by sending additional control signals from the control device 22 that are received by the first control node 50 to alter the audio feed 70 that is sent to the display 90.
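  • The on/off and volume functions of the first portion of the control signals, noted above, act only on playback level, never on the live content itself. A hedged sketch of that idea follows, scaling signed 16-bit PCM samples at the display side; the sample format is an assumption.

    import array

    def apply_audio_controls(chunk: bytes, volume: float, muted: bool) -> bytes:
        """Scale a chunk of signed 16-bit PCM from the user audio feed 70 by the requested volume."""
        if muted or volume <= 0.0:
            return b"\x00" * len(chunk)             # silence, but the live stream keeps flowing
        samples = array.array("h")
        samples.frombytes(chunk)
        scaled = array.array("h", (int(s * min(volume, 1.0)) for s in samples))
        return scaled.tobytes()

    chunk = array.array("h", [1000, -1000, 2000, -2000]).tobytes()
    print(array.array("h", apply_audio_controls(chunk, volume=0.5, muted=False)))
    # -> array('h', [500, -500, 1000, -1000])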
  • The second control node 60 receives a second portion of the control signals from the control device 22 (distinct from the first portion of the control signals from the control device 22) associated with visual imagery control of the display 90, and also receives the visual imagery feed 46 (i.e., a second feed 46) in the form of the visual imagery signal 48 from the visual imagery source 18 by either digital satellite or digital cable, and contemporaneously provides an output signal including the user visual imagery feed 80 to the display 90 as a function of the received second portion of the control signals. The display 90 reproduces the visual imagery conveyed by the user visual imagery feed 80 that corresponds to the second portion of the control signal received by the control device 22 and presents the visual imagery via a monitor or the like included on the display 90 so that it can be viewed by the user.
  • In certain instances, the visual display that is desired to be viewed by the user through the display 90 is the one of the “live” visual displays that is produced by the second subject 14, captured by the visual imagery capturing device 40 of one of the respective “m” visual imagery sources 18, contemporaneously sent from the one visual imagery source 18 to the second control node 60 via the second feed 46, contemporaneously sent from the second control node 60 to the display 90 via the user visual imagery feed 80, and contemporaneously reproduced and viewed by the user through the display 90 and associated with the selected one of the “m” live visual imagery sources 18. The second portion of the control signals sent by the user through the control device 22 can be used to turn on or off the “live” visual display on the display 90. However, the user does not control the content of the “live” visual display through the display 90, as this visual display is the same “live” visual display that is being contemporaneously produced by the second subject 14. Accordingly, the visual display that is viewed by the user through the display 90 is akin to a “live” visual display viewed from a television or computer or the like. While the user does not control the content of the “live” visual display, the user can control what visual imagery is actually seen through the display 90 by selecting from the potential “live” visual imagery sources 18, and thus different genres of visual imagery that are available from the second subject 14 and associated with the different “live” visual imagery sources can be selected by sending additional control signals from the control device 22 that are received by the second control node 60 to alter the visual imagery feed 80 that is sent to the display 90.
  • Alternatively, when the visual display that is desired to be viewed by the user through the display 90 is the “prerecorded” visual imagery (i.e., “prerecorded video”) stored on the prerecorded visual imagery storage device 44, the user has enhanced control over what can be displayed through the display 90. In particular, the system 10 can be implemented such that the user can send the second portion of the control signal via the control device 22, select any prerecorded video that has been stored on the visual imagery source 18, and in particular on the prerecorded visual imagery storage device 44, and adjust the timing of the viewing of this selected prerecorded video to start or stop at any time. In this way, the user can vary the visual content displayed through the display 90 as desired to correspond to the “live” audio that may also be simultaneously but separately reproduced, and create a desired listening and viewing effect.
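  • The enhanced control described above for prerecorded video (select any stored clip, then start or stop it at any time) can be sketched as a toy player driven by the second portion of the control signal. Method names are assumptions and no real media framework is involved.

    from typing import Dict, List, Optional

    class PrerecordedPlayer:
        """Toy stand-in for user control over playout of the user visual imagery feed 80."""

        def __init__(self, library: Dict[str, List[bytes]]):
            self.library = library
            self.current: Optional[str] = None
            self.position = 0
            self.playing = False

        def select(self, title: str) -> None:          # second portion: choose a stored clip
            self.current, self.position = title, 0

        def start(self) -> None:                       # second portion: start at any time
            self.playing = self.current is not None

        def stop(self) -> None:                        # second portion: stop at any time
            self.playing = False

        def next_frame(self) -> Optional[bytes]:       # what the display 90 would render next
            if not self.playing or self.current is None:
                return None
            frames = self.library[self.current]
            if self.position >= len(frames):
                return None
            frame, self.position = frames[self.position], self.position + 1
            return frame

    player = PrerecordedPlayer({"alpine lake": [b"f1", b"f2", b"f3"]})
    player.select("alpine lake"); player.start()
    print(player.next_frame(), player.next_frame())    # -> b'f1' b'f2'
    player.stop()
    print(player.next_frame())                         # -> None (stopped mid-clip)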
  • As noted above, a large and growing demographic segment of TV viewers, in seeking a more tranquil and relaxing lifestyle, is moving away from TV entirely or only occasionally using it as a quick source of news. Individuals in this demographic segment seek freedom from the chaos, fear and stress of everyday life, and to temporarily divorce themselves from certain aspects of their working lives or the world we live in, such as 24-hour news, email, cell phones, text messaging, voicemail, deadlines, travel warnings, etc. This segment tends to be mature and includes a discerning, upscale audience that can appreciate, and recognizes value in, products and services of various types (e.g., luxury-market automobiles or timepieces, premium luggage, business clothing, jewelry, luxury hotel chains and resorts, cruise lines, travel bureaus, etc.) associated with the prestigious, carefully selected brand advertising that lends itself well to the television format and system of the subject disclosure. However, this segment includes anyone who seeks a break from stress and is not in any way limited to any demographic. Such viewers often seek to experience, and appreciate the benefits of, tranquility and relaxation.
  • Thus, an embodiment of the subject disclosure provides an entertainment system and format that includes appropriate “continuously live” audio and/or appropriate “live” or “prerecorded” visual imagery for the goal of an aesthetically and aurally pleasing and relaxing experience for the user. In certain embodiments, it may be a therapeutic experience for the user, which may provide in certain circumstances a mental and/or physical benefit to the user.
  • The audio experience provides a continuously live listening experience for the user and may utilize live locale sounds such as nature, surf, running water, rain, or a foghorn, or other soothing therapeutic sounds such as music or musical interludes. It may also include sounds from humans, alone or in combination with the other live locale or soothing therapeutic sounds as described.
  • The visual entertainment production method captures the most aesthetically pleasing live visuals available at any particular time. It may alternatively use prerecorded visual imagery with aesthetically pleasing visual presentations, and thus provides a low-cost method for providing unique video and audio content. However, as opposed to live television, the system provides separate and unsynchronized audio and visual imagery streams to a display at a remote location for viewing by a user. Stated another way, the audio signal is not embedded in the video signal, and hence the resulting product is not a unified audio/visual product. In this respect, the resulting audio streams of the product according to the subject disclosure are akin to traditional live radio, in which the audio signal is consumed in the instant that it is broadcast and is not intended to be recorded and repeated for rebroadcast at a future date.
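  • Because the audio signal is never embedded in the video signal, the display side can be pictured as two completely independent playout loops that share no clock and exchange no timing information. The asyncio sketch below illustrates that absence of synchronization; the chunk and frame sources are simulated and all names are assumptions.

    import asyncio

    async def play_audio(chunks, interval=0.03):
        for chunk in chunks:                      # consumed "in the instant", like live radio
            print(f"audio out: {chunk}")
            await asyncio.sleep(interval)

    async def show_imagery(frames, interval=0.05):
        for frame in frames:                      # separate cadence; no shared clock with audio
            print(f"video out: {frame}")
            await asyncio.sleep(interval)

    async def display_90():
        # Two tasks, with no synchronization primitive between them.
        await asyncio.gather(
            play_audio(["surf-1", "surf-2", "surf-3", "surf-4"]),
            show_imagery(["beach-frame-1", "beach-frame-2"]),
        )

    asyncio.run(display_90())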
  • The subject disclosure is aimed at providing the positive therapeutic effects of a “time out” that would enhance mental and physical health, by virtually transporting the viewer to another, more relaxing scene, which can be experienced visually and/or audibly. The audio and visual imagery experience can be independently controlled by the user, who uses the control device 22 to send control signals associated with the independent selection of the audio and visual imagery content.
  • It will, of course, be understood that the foregoing description is of a preferred exemplary embodiment of the disclosure and that the disclosure is not limited to the specific embodiments shown. Other changes and modifications will become apparent to those skilled in the art and all such changes and modifications are intended to be within the scope of the subject disclosure.

Claims (19)

1. A hybrid audio and visual imagery system for independently providing visual imagery and/or audio to a display at a remote location in response to control signals received from a user at the remote location, comprising:
a first feed including an audio signal conveying continuously live sound of a first subject;
a second feed including a visual imagery signal conveying live visual imagery or prerecorded visual imagery of a second subject supplemental and separate to the conveyed live sound of the first feed;
a control device located at the remote location, with the control signals from the user generated by the control device;
a first control node in communication with the control device and located at a central location at which the first feed is received, the first control node having a first input comprising the first feed and a first portion of the control signals associated with the audio signal, and having a first output signal comprising a first user feed, the first output signal from the first control node received by the control device;
a second control node in communication with the control device and located at the central location at which the second feed is received, the second control node having a second input comprising the second feed and a second portion of the control signals associated with the visual imagery signal, and having a second output signal distinct and separate from the first output signal comprising a second user feed, the second output signal from the second control node received by the control device; and
a display located at the remote location and coupled to the control device, the sound of the first subject conveyed by the first user feed and the visual imagery of the second subject conveyed by the second user feed independently outputted by the display.
2. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject conveyed by the first user feed and outputted by the display comprises continuously live human voice sounds.
3. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject conveyed by the first user feed and outputted by the display comprises continuously live music.
4. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject conveyed by the first user feed and outputted by the display comprises continuously live prerecorded music played through an audio playing device.
5. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject conveyed by the first user feed and outputted by the display comprises continuously live prerecorded sounds played through an audio playing device.
6. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject conveyed by the first user feed and outputted by the display comprises continuously live non-human sounds.
7. The hybrid audio and visual imagery system of claim 1, wherein the continuously live sound of the first subject is acquired by a microphone and outputted as the first feed from the microphone to the first control node.
8. The hybrid audio and visual imagery system of claim 1, wherein the visual imagery of the second subject conveyed by the second user feed and outputted by the display remains unchanged for minutes to hours at a time.
9. The hybrid audio and visual imagery system of claim 1, wherein the live or prerecorded visual imagery of the second subject conveyed by the second user feed and outputted by the display comprises live visual imagery of the second subject conveyed by the second user feed and outputted by the display.
10. The hybrid audio and visual imagery system of claim 9, wherein the live visual imagery of the second subject is acquired by a camera and outputted as the second feed from the camera to the second control node.
11. The hybrid audio and visual imagery system of claim 1, wherein the live or prerecorded visual imagery of the second subject conveyed by the second user feed and outputted by the display comprises prerecorded visual imagery of the second subject conveyed by the second user feed and outputted by the display.
12. The hybrid audio and visual imagery system of claim 11, wherein the prerecorded visual imagery of the second subject is acquired by a camera and recorded on a recording device through visual imagery outputs from the camera, and wherein the prerecorded visual imagery is subsequently outputted as the second feed from the recording device to the second control node.
13. The hybrid audio and visual imagery system of claim 11, wherein the prerecorded visual imagery of the second subject is stored on a storage device, and wherein the prerecorded visual imagery is subsequently outputted as the second feed from the storage device to the second control node.
14. The hybrid audio and visual imagery system of claim 1, wherein the user sends the control signals from the control device to the first control node for controlling the receipt of the first user feed to the display.
15. The hybrid audio and visual imagery system of claim 1, wherein the user sends the control signals from the control device to the second control node for controlling the receipt of the second user feed to the display.
16. The hybrid audio and visual imagery system of claim 1, wherein the user sends the control signals from the control device to the second control node for selecting between the live visual imagery and the prerecorded visual imagery.
17. The hybrid audio and visual imagery system of claim 15, wherein the user sends the control signals from the control device to the second control node for selecting between the live visual imagery and the prerecorded visual imagery.
18. The hybrid audio and visual imagery system of claim 1, wherein the audio signal is selected by the user from one or more audio signals, with each one of the one or more audio signals corresponding to a respective one audio source of one or more audio sources.
19. The hybrid audio and visual imagery system of claim 1, wherein the visual imagery signal is selected by the user from one or more visual imagery signals, with each one of the one or more visual imagery signals corresponding to a respective one visual imagery source of one or more visual imagery sources.

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/952,724 US20230097803A1 (en) 2021-09-28 2022-09-26 Hybrid Audio/Visual Imagery Entertainment System With Live Audio Stream Playout And Separate Live Or Prerecorded Visual Imagery Stream Playout
CA3201092A CA3201092A1 (en) 2021-09-28 2023-05-30 Hybrid audio/visual imagery entertainment system with live audio stream playout and separate live or prerecorded visual imagery stream playout

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163249223P 2021-09-28 2021-09-28
US17/952,724 US20230097803A1 (en) 2021-09-28 2022-09-26 Hybrid Audio/Visual Imagery Entertainment System With Live Audio Stream Playout And Separate Live Or Prerecorded Visual Imagery Stream Playout

Publications (1)

Publication Number Publication Date
US20230097803A1 (en) 2023-03-30

Family

ID=85706691

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/952,724 Pending US20230097803A1 (en) 2021-09-28 2022-09-26 Hybrid Audio/Visual Imagery Entertainment System With Live Audio Stream Playout And Separate Live Or Prerecorded Visual Imagery Stream Playout

Country Status (2)

Country Link
US (1) US20230097803A1 (en)
CA (1) CA3201092A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120284421A1 (en) * 2009-12-25 2012-11-08 Shiyuan Xiao Picture in picture for mobile tv

Also Published As

Publication number Publication date
CA3201092A1 (en) 2024-03-26

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: ESCAPES NETWORK LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OKLEJAS, ROBERT A.;CEROVCEVIC, DRAGAN;RADAKOVICH, ROY;REEL/FRAME:062784/0267

Effective date: 20210924

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION