WO2023129561A1 - Consumption of a video stream from a remotely located camera device - Google Patents

Consumption of a video stream from a remotely located camera device

Info

Publication number
WO2023129561A1
Authority
WO
WIPO (PCT)
Prior art keywords
subsystem
data
location
computing system
user
Prior art date
Application number
PCT/US2022/054100
Other languages
English (en)
Inventor
Jeremiah WOODS
Brian LOVELACE
Original Assignee
Woods Jeremiah
Lovelace Brian
Priority date
Filing date
Publication date
Application filed by Woods Jeremiah, Lovelace Brian filed Critical Woods Jeremiah
Publication of WO2023129561A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661: Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • a venue or location may be related to activities or services.
  • the activities or services may be unavailable for viewing on a computing device. Issues may arise without available recordation of the activities or services.
  • the live video feed can be consumed via a mobile application or a client application installed in a user device.
  • Some of the technologies also permit interacting with a live video feed and analyzing video feeds (live video feeds or time-shifted video feeds, or both). Various types of analyses can be performed, resulting in generation of recommendations for reservations for space at a particular location, or recommendations for items that can be consumed at the location where a video feed originates.
  • Some of the technologies also permit supplying directed content assets to user devices. A directed content asset can be provided as a push notification or can be presented within the mobile application.
  • FIG. 1 illustrates an example of an operational environment in accordance with one or more embodiments of this disclosure.
  • FIG. 2B illustrates an example of components of a user device in accordance with one or more embodiments of this disclosure.
  • FIG. 3A illustrates an example of another operational environment, in accordance with one or more embodiments of the disclosure.
  • FIG. 3B illustrates an example of a machine-learning model (e.g., a deep learning neural network) for video analysis, in accordance with one or more embodiments of the disclosure.
  • FIG. 4A illustrates an example of a user interface, in accordance with one or more embodiments of the disclosure.
  • FIG. 4B illustrates another example of a user interface, in accordance with one or more embodiments of the disclosure.
  • FIG. 5 illustrates another example of an operational environment, in accordance with one or more embodiments of the disclosure.
  • FIG. 5A illustrates an example of a machine-learning model, in accordance with one or more embodiments of the disclosure.
  • FIG. 6 illustrates yet another operational environment, in accordance with one or more embodiments of this disclosure.
  • the first camera device 104a can be placed in a first area 106a
  • the second camera device 104b can be placed in a second area 106b
  • the third camera device 104c can be placed in a third area 106c.
  • the respective areas can cover one or several indoor spaces or one or several outdoor spaces, or a combination of indoor space(s) and outdoor space(s).
  • the first location 102 can be a restaurant, a dog kennel, a home-improvement store, or similar.
  • Each camera device of the first group of camera devices 104 can send video data to one or more server devices 108 housed within premises or dedicated storage in the first location 102.
  • at least one of the server device(s) 108 can be functionally coupled to the first group of cameras 104.
  • the server device(s) 108 and the first group of cameras 104 can be functionally coupled in numerous configurations, such as a one-to-many configuration or a many-to-many configuration.
  • the server device(s) 108 include a first server device, such as a server device 108a.
  • the first server device can be functionally coupled to each camera device of the first group of camera devices 104.
  • the server device 108a can be functionally coupled to each one of the first camera device 104a, the second camera device 104b, and the third camera device 104c. Accordingly, in such embodiments, a first camera device of the first group of camera devices 104 can send first video data to the first server device, and a second camera device of the first group of camera devices 104 can send second video data to the first server device.
  • the first server device can be functionally coupled to each camera device of the second group of camera devices 114.
  • the server device 118a can be functionally coupled to each one of the first camera device 114a, the second camera device 114b, and the third camera device 114c. Accordingly, in such embodiments, a first camera device of the second group of camera devices 114 can send first video data to the first server device, and a second camera device of the second group of camera devices 114 can send second video data to the first server device.
  • the server device(s) 108 can send video data to one or more first server devices of multiple backend server devices 130.
  • the server device(s) 108 can be functionally coupled to the first server device(s) of the multiple backend server devices 130 by means of at least one network of one or more networks 120.
  • the at least one network can be embodied in a wireless network or a wireline network, or a combination of both.
  • the one or more first server devices can embody respective one or more first gateways of multiple gateways 134.
  • the first gateway(s) can send received video data to at least one of multiple subsystems 138.
  • a first subsystem of the multiple subsystems 138 can relay video data 152 to a user device 160.
  • the first subsystem can be embodied in a content distribution subsystem 210.
  • the user device 160 is represented by a mobile smartphone simply for the sake of illustration.
  • the user device 160 can be embodied in other types of user devices, such as a tablet computer, a laptop computer, a gaming console, a personal computer, or similar devices.
  • the user device 160 can execute a client application (such as a mobile application or a web browser) to consume video data 152.
  • the user device 160 can present the video stream of an area of a location (imaged by a camera device) in a user interface 170.
  • a display device (not depicted in FIG. 1) can present the user interface 170.
  • the user interface 170 can include a viewing pane 174 that presents the video stream.
  • the user interface can be referred to as a streaming business profile.
  • the user device 160 can execute a mobile application 274 to cause a display device 260 to present the user interface 170.
  • the mobile application 274 can be retained in one or more memory devices 270.
  • the disclosure is not limited to the mobile application 274.
  • the user device 160 can execute another type of client application, such as a web browser, to cause the display device 260 to present the user interface 170.
  • directed content refers to digital media configured for a particular audience, or a particular outlet channel (such as a website, a streaming service, or a mobile application), or both.
  • Directed content can include, for example, digital media of various types, such as advertisement; promotional content (e.g., a coupon or a discount); surveys or other types of questionnaires; motion pictures, animations, or other types of video segments; audio segments of defined durations (e.g., a product or service review); and similar media.
  • the second subsystem can send digital content 154 as a push notification.
  • directed content can be presented at the user device 160 even when the user interface 170 is not actively presented.
  • An end-user can interact with such directed content, causing the user device 160 to present another user interface (e.g., web browser or the client application) in order to consume the directed content more fully.
  • Selection of that visual element can cause the user device 160 to present a query composition interface.
  • the query composition interface can be presented in a stand-alone fashion (e.g., as a new user interface) or as an overlay on the user interface 170. Regardless of the type of presentation, the query composition interface can permit generating a query for sources of video data according to one or more of geolocation (“Dog kennels near me”), city, state, ZIP Code, country, industry/category, business name, or keyword(s). Such video data (and video feeds represented by such data) can be retained within the data repository 148. The video data can be organized by channel category, for example.
  • the user device 160 can present a listing of locations having available video feeds.
  • the listing of locations can be presented in a second user interface.
  • the second user interface can be presented in a stand-alone fashion or as an overlay on the user interface 170.
  • the user device 160 can send control signaling 156 including the generated query.
  • a source of video data can be an entity that subscribes to a service for delivery of video feeds, among other functionalities.
  • a particular subsystem (e.g., the search subsystem 230) of the subsystems 138 can receive the query and, in response, can resolve the query using data within subscriber accounts 146 retained within one or more memory devices 144 (generically referred to as account repository 144).
  • the account repository 144 can be retained within the multiple backend storage devices 140 functionally coupled to at least one of the backend service devices 130.
  • the particular subsystem can send the data defining the listing of locations to the user device 160.
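The flow above (a query composed on the user device, resolved by the search subsystem against subscriber accounts, and answered with a listing of locations) can be sketched in a few lines. The Python sketch below is a minimal, hypothetical illustration; the SubscriberAccount fields, the resolve_query helper, and the sample data are assumptions rather than part of the disclosure.

```python
# Hypothetical sketch of query resolution against subscriber accounts.
# Field names and helpers are illustrative, not from the patent.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class SubscriberAccount:
    name: str
    category: str      # e.g., "dog kennel", "restaurant"
    zip_code: str
    lat: float
    lon: float
    keywords: tuple

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def resolve_query(accounts, user_lat, user_lon, radius_km=None,
                  category=None, zip_code=None, keyword=None):
    """Return accounts matching every criterion the query supplies."""
    results = []
    for acct in accounts:
        if radius_km is not None and haversine_km(user_lat, user_lon, acct.lat, acct.lon) > radius_km:
            continue
        if category is not None and acct.category != category:
            continue
        if zip_code is not None and acct.zip_code != zip_code:
            continue
        if keyword is not None and keyword.lower() not in (k.lower() for k in acct.keywords):
            continue
        results.append(acct)
    return results

accounts = [
    SubscriberAccount("Happy Paws", "dog kennel", "78701", 30.27, -97.74, ("daycare", "boarding")),
    SubscriberAccount("La Trattoria", "restaurant", "78702", 30.26, -97.72, ("pasta", "wine")),
]
# "Dog kennels near me" within 10 km:
print([a.name for a in resolve_query(accounts, 30.27, -97.74, radius_km=10, category="dog kennel")])
```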
  • At least one of the selectable visual elements 178 can permit conversation posts on a video feed presented in the viewing pane 174.
  • Execution of the client application 316 can cause the client device 310 to present a user interface 320 that provides a management portal.
  • the user interface 320 can include a title 322 and a menu of configuration options 324.
  • the user interface 320 can embody a landing page of a web-based portal.
  • An example of the user interface 320 is shown in FIG. 4A.
  • the menu of configuration options 324 can be embodied in multiple selectable visual elements, each providing access to a configuration option. Selection of a particular selectable visual element can cause the client device to present another user interface (an overlay, for example) that provides access to one or more of the subsystems 138 included in the backend server devices 130.
  • the client device 310 can be functionally coupled to the backend server devices 130 via at least one of the network(s) 120.
  • Such a user interface also can permit submitting data identifying such period(s) to an access control subsystem 340.
  • the access control subsystem 340 can receive the data and can update access control data 344 corresponding to the subscribed account being managed. That subscriber account can be the subscribed account 336, for example.
  • the client device 310 can present another user interface in response to selection of the camera option 420.
  • That user interface can include a viewing pane presenting images generated by the camera device (e.g., camera device 104b) and respective selectable UI elements for pan and tilt of the camera device.
  • the user interface can include a single selectable UI element that permits adjusting an orientation of the camera device.
  • the user interface also can include one or several other selectable UI elements that permit controlling presentation of the images.
  • User interface 450 shown in FIG. 4B is an example of such a user interface.
  • the user interface 450 includes a viewing pane 460 that presents a stream of images generated by the camera device (denoted as “Camera #1” in FIG. 4B).
  • the user interface 450 includes a selectable UI element 470 that serves as a digital joystick that permits adjusting the orientation of the camera device.
  • the user interface 450 also includes a selectable UI element 474, and a selectable UI element 478 that permit stopping and resuming the presentation of the stream of images, respectively.
  • the user interface 450 also includes a selectable UI element 472 that, when selected, causes the stream of images to be presented continuously.
  • the particular location can be the location of a dog kennel that offers canine daycare service.
  • An administrator device that embodies the client device 310 can present the user interface responsive to selection of the directed content option 422.
  • the administrator device also can receive input data that defines an event at the dog kennel.
  • An example of the event can be a group dog walk at a park located near the dog kennel.
  • the directed content subsystem 220 can send data identifying the event to multiple user devices that receive one or more video feeds from the dog kennel.
  • a first dataset can be indicative of historical viewership
  • a second dataset can be indicative of number of impressions of directed content for the video feed
  • a third dataset can be indicative of coupons or other incentives redeemed during presentation of the video feed
  • a fourth dataset can be indicative of a number of distinct user accounts that consume the video feed.
  • Such a number can be colloquially referred to as “number of fans” of the subscriber that provides the video feed.
  • a dog boarding facility can stream video feeds of areas in the facility, and customers of the dog facility can be fans of that facility.
  • selection of the analytics option 424 can cause the client device 310 to send a request for viewership datasets to an analytics subsystem 360.
  • the analytics subsystem 360 can access such datasets from one or more of the data repositories 148 (FIG. 1) and can then send the datasets to the client device 310.
  • the client application 316 can include formatting information to present a viewership dataset
  • the analytics subsystem 360 can send formatting information corresponding to a dataset sent to the client device 310.
  • the dashboard presented at the user device 160 also can present key performance indicators (KPIs) as a function of time.
  • the dashboard also can present KPI trends.
  • Other analytics metrics also can be presented.
  • the user device 160 can obtain the datasets defining the KPIs, for example, from the analytics subsystem 360.
  • selection of the analytics option 424 can cause the client device 310 to send a request for aggregated viewership data and/or aggregated video data to the analytics subsystem 360.
  • the analytics subsystem 360 can access viewership data and/or video data, and can operate on such data to generate the aggregated viewership data or the aggregated video data, or both data.
  • the video data can be accessed from one or more repositories of the data repositories 148 (FIG. 1). Such one or more repositories can serve as a video feed cache.
  • the analytics subsystem 360 can then send the aggregated viewership data and/or the aggregated video data to the client device 310.
  • the analytics subsystem 360 can train a machine-learning model to detect a human or a pet (a dog, a cat, or a pig, for example) within video feeds.
  • the analytics subsystem 360 can obtain a machine-learning model already trained for such detection. Regardless of how a trained machine-learning model is accessed, the analytics subsystem 360 can apply the trained machine-learning model to the video data generated by the camera device(s).
  • the analytics subsystem 360 can use detected individuals (human or otherwise) to generate an occupancy count.
  • the analytics subsystem 360 also can generate “busy” metering using the occupancy count. That is, the analytics subsystem 360 can generate a metric identifying a fraction of the full occupancy of a location.
  • the analytics subsystem 360 also can use the detected individuals to generate a predictive model (such as a machine-learning model) to predict occupancy of an area of a location or the location as a whole.
  • the analytics subsystem 360 also can perform image analysis to detect tables or other spaces that are open within a location (e.g., a restaurant). Over time, the predictive model can learn about busy times and slow times, and can generate recommendations for one or more locations (e.g., restaurants) based on historical data and occupancy trends. To that end, the predictive model can analyze video feeds corresponding to respective locations. The video feeds can include live video feeds or historical video feeds, or a combination of both. As part of the analysis, the predictive model can detect humans present in images that constitute the video feeds. The predictive model can yield count data identifying a count of humans and can store the count data as a data set. The count data is referenced against day and time. The output of such analysis, including human counting, is a predicted busy time based on the number of humans detected present at any given time.
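A minimal sketch of the occupancy counting and busy metering described above follows. It assumes per-frame person counts are already available from a detector; FULL_OCCUPANCY, the aggregation keys, and the busy threshold are illustrative assumptions.

```python
# Sketch of occupancy counting and "busy" metering. The person detector
# is assumed to run upstream; this code only aggregates its counts.
from collections import defaultdict
from datetime import datetime

FULL_OCCUPANCY = 40  # assumed capacity of the location

def busy_fraction(person_count: int) -> float:
    """Fraction of full occupancy, clipped to [0, 1]."""
    return min(person_count / FULL_OCCUPANCY, 1.0)

def aggregate_counts(observations):
    """observations: iterable of (timestamp, person_count) pairs.
    Returns mean occupancy keyed by (weekday, hour), i.e. the count data
    'referenced against day and time'."""
    sums, n = defaultdict(float), defaultdict(int)
    for ts, count in observations:
        key = (ts.strftime("%A"), ts.hour)
        sums[key] += count
        n[key] += 1
    return {k: sums[k] / n[k] for k in sums}

def predicted_busy_times(history, threshold=0.75):
    """(weekday, hour) slots whose mean occupancy exceeds the threshold."""
    return [k for k, mean in aggregate_counts(history).items()
            if busy_fraction(round(mean)) >= threshold]

history = [
    (datetime(2023, 6, 2, 19), 36), (datetime(2023, 6, 9, 19), 38),
    (datetime(2023, 6, 2, 15), 8),  (datetime(2023, 6, 9, 15), 11),
]
print(predicted_busy_times(history))  # e.g. [('Friday', 19)]
```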
  • the analytics subsystem 360 can apply one or more models (such as machine-learning models) to determine various aspects of individuals within a scene and/or objects within the scene.
  • the analytics subsystem 360 can apply one or more first models to video data in order to evaluate sentiment and demographic analysis of scenes defined by the video data.
  • at least one of the first model(s) can include two concatenated deep convolutional neural networks (CNNs), including a first three-dimensional (3D) deep CNN and a two-dimensional (2D) deep CNN.
  • the analytics subsystem 360 can interpret emotional expressions of a human detected within a scene. As such, the analytics subsystem 360 can determine, with a defined degree of confidence, that the detected human is happy, sad, or surprised. Further, by applying at least one of the first model(s), the analytics subsystem 360 can determine, within a defined degree of confidence, demographic attributes of the detected human, such as gender and/or ethnicity, from a facial image of the detected human. More specifically, in one example, at least one of the first model(s) can include a human facial recognition model that can be applied to the scenes defined by the video data. The application of the human facial recognition model can determine several attributes, such as mouth shape, skin color, and similar facial elements. Data identifying values of such attributes can be stored as a data set. Data sets are assigned a confidence class based on the analysis, and the prediction is the output.
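For illustration only, the following PyTorch sketch shows one way two concatenated deep CNNs (a 3D CNN over a short clip and a 2D CNN over a face crop) could be fused to emit an emotion class with a confidence. The layer sizes, the three-class output, and the fusion-by-concatenation head are assumptions; the patent does not specify the architecture beyond the 3D/2D concatenation.

```python
# Illustrative fusion of a 3D CNN (clip branch) and a 2D CNN (face branch).
# Dimensions are arbitrary assumptions, not the patent's architecture.
import torch
import torch.nn as nn

class SentimentNet(nn.Module):
    def __init__(self, num_emotions=3):  # e.g., happy / sad / surprised
        super().__init__()
        self.cnn3d = nn.Sequential(           # clip branch: (B, 3, T, H, W)
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (B, 16)
        )
        self.cnn2d = nn.Sequential(           # face branch: (B, 3, H, W)
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 16)
        )
        self.head = nn.Linear(16 + 16, num_emotions)

    def forward(self, clip, face):
        # Concatenate the two branches' features before classification.
        feats = torch.cat([self.cnn3d(clip), self.cnn2d(face)], dim=1)
        return self.head(feats)

model = SentimentNet()
clip = torch.randn(1, 3, 8, 64, 64)   # 8-frame clip
face = torch.randn(1, 3, 64, 64)      # detected face crop
probs = torch.softmax(model(clip, face), dim=1)
conf, cls = probs.max(dim=1)          # predicted class with its confidence
print(cls.item(), round(conf.item(), 3))
```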
  • the analytics subsystem 360 can perform a facial search within one or more scenes defined by video data (e.g., images, stored videos, and streaming videos) for faces that match defined faces stored in a container known as a face collection.
  • a face collection is an index of faces.
  • a search that yields a defined face in the face collection can permit other subsystems to personalize the distribution of digital content (such as directed content) to a user device corresponding to an end-user identified in a user profile corresponding to the defined face.
  • the analytics subsystem 360 can detect adult content and/or violent content in a stream of images and in stored videos. To that end, the analytics subsystem 360 can obtain metadata to filter inappropriate content based on business needs.
  • An example of a business need is preservation of video content within a defined rating.
  • the defined rating can be one of the ratings established by the Motion Picture Association (MPA); e.g., General Audiences (G) rating or Parental Guidance Suggested (PG) rating.
  • filtering inappropriate content can include blurring particular elements of a scene or multiple scenes.
  • the disclosure is not limited to ratings from the MPA, and the defined rating can be one of many ratings corresponding to the TV Parental Guidelines, the Recording Industry Association of America (RIAA), and/or the Entertainment Software Rating Board (ESRB).
  • Another example of a business need is compliance with privacy rules. In that example, filtering inappropriate content can include blurring customer faces.
  • the metadata defines at least one keyword and/or at least one keyphrase indicative of respective categories of unsafe content.
  • the analytics subsystem 360 can generate a hierarchical list of labels with confidence scores. In some embodiments, that list can be generated by executing one or multiple function calls based on a third-party API, for example.
  • the keyword(s) and keyphrase(s) can indicate specific categories of unsafe content, which can enable granular filtering and management of large volumes of user-generated content (UGC).
  • the granular filtering can include moderating video content using one or more confidence scores of respective keywords or keyphrases.
  • moderating the video content in such a fashion can include quarantining video content for further analysis when a confidence score exceeds a first threshold value. Such analysis can be performed by an autonomous agent or a human agent. In addition, or in other cases, moderating the video content can include rejecting the video content when the confidence score exceeds a second threshold value.
  • threshold values can be referred to as moderation thresholds and can be configurable.
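A hedged sketch of that confidence-based moderation flow is below. The two thresholds and the allow/quarantine/reject routing are illustrative; real moderation thresholds would be configured per subscriber or per content category.

```python
# Sketch of confidence-based moderation. Labels, threshold values, and
# the routing are illustrative assumptions.
QUARANTINE_THRESHOLD = 0.60   # configurable moderation thresholds
REJECT_THRESHOLD = 0.90

def moderate(labels):
    """labels: hierarchical list of (label, confidence) pairs for a video.
    Returns 'reject', 'quarantine' (for autonomous/human review), or 'allow'."""
    top = max((conf for _, conf in labels), default=0.0)
    if top >= REJECT_THRESHOLD:
        return "reject"
    if top >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "allow"

print(moderate([("violence", 0.95)]))    # reject
print(moderate([("suggestive", 0.72)]))  # quarantine -> further analysis
print(moderate([("alcohol", 0.31)]))     # allow
```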
  • the analytics subsystem 360 can customize the detection of humans to a particular cohort, such as celebrities within a defined category (e.g., politics, sports, business, entertainment, media, science, or other categories).
  • the analytics subsystem 360 can apply one or more third models to video data generated in a single location or in multiple locations to identify individual(s) within a particular cohort in a stream of images and in stored videos.
  • the analytics subsystem 360 can cause the content distribution subsystem 210 (not depicted in FIG. 3A) to add a visual tag in a video feed supplied to the user device 160.
  • the analytics subsystem 360 also can detect objects and/or markings in an object. More specifically, the analytics subsystem 360 can identify and extract textual content from a stream of images and stored videos. To that end, the analytics subsystem 360 can include a recognition module that supports detection of many fonts, including highly stylized ones. In some cases, the recognition module can be a third-party module that can be accessed as a service from a third-party platform, for example. The analytics subsystem 360 can detect, using the recognition module, for example, text and numbers in different orientations, such as those commonly found in banners, posters, box artwork, or similar.
  • the text that has been extracted can be used to enable visual search based on an index of images that contain the same keywords.
  • the search subsystem 230 can permit implementing such a visual search.
  • videos can be catalogued based on particular text on screen, such as advertisement, news, sport scores, captions, a combination thereof, or similar text.
  • the analytics subsystem 360 can identify objects and scenes in images that are specific to business needs. For example, particular products can be identified on store shelves.
  • detection of a particular human, an object and/or markings can cause the directed content subsystem 220 to send a media asset (an advertisement or a coupon, for example) to one or more user devices present at the location where the particular object is detected.
  • a particular human can be detected at a supermarket that supplies video feeds of various aisles of the supermarket.
  • the particular human can be a consumer of the video feeds.
  • a particular brand of dry pasta also can be detected on a shopping cart used by the particular human.
  • the directed content subsystem 220 can send a coupon to a user device (e.g., user device 160) of the particular human, where the coupon applies to the particular brand of dry pasta.
  • the analytics subsystem 360 can obtain activity data from user devices that consume video feeds.
  • the activity data can identify video feeds consumed by the user devices.
  • the analytics subsystem 360 can receive first activity data from the user device 160, where the first activity data identifies video feeds consumed by the user device 160.
  • the analytics subsystem 360 can categorize the activity data according to demographic data available for the user devices.
  • the analytics subsystem 360 can group activity data in several categories, such as gender, age, income level, employment status, academic advancement, or similar categories.
  • the analytics subsystem 360 can obtain activity data over time and also can categorize the activity data over time.
  • the analytics subsystem 360 can identify changes to one or multiple categories corresponding to a particular group of video feeds. Each one of those changes constitutes a customer profile trend for a particular category (e.g., gender).
  • the analytics subsystem 360 can generate reports that identify one or multiple trends, and can make the reports available to one or multiple subscriber accounts.
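The trend identification described above can be illustrated with a short sketch: categorize activity records by a demographic attribute in two periods, then flag categories whose share moved beyond a tolerance. The field names, periods, and tolerance are assumptions for illustration.

```python
# Sketch of customer-profile trend detection over categorized activity data.
from collections import Counter

def category_shares(records, attribute):
    """records: list of activity-data dicts; returns share per category value."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def trends(previous, current, attribute, tolerance=0.05):
    """Categories whose share changed by more than the tolerance between periods."""
    prev = category_shares(previous, attribute)
    curr = category_shares(current, attribute)
    return {k: curr.get(k, 0.0) - prev.get(k, 0.0)
            for k in set(prev) | set(curr)
            if abs(curr.get(k, 0.0) - prev.get(k, 0.0)) > tolerance}

q1 = [{"gender": "female"}] * 40 + [{"gender": "male"}] * 60
q2 = [{"gender": "female"}] * 55 + [{"gender": "male"}] * 45
print(trends(q1, q2, "gender"))  # e.g. {'female': 0.15, 'male': -0.15}
```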
  • the analytics subsystem 360 can generate a wide variety of custom reports for video content and related activity data corresponding to a subscriber account. Generation of custom reports can be controlled by means of the user interface 320 or another type of management portal, for example.
  • Data contained in a custom report and a report identifying a trend can be visualized in a viewer user interface, e.g., a new user interface or an overlay onto the user interface 320.
  • the client device 310 can present such a viewer user interface.
  • a subscriber can generate, via the analytics subsystem 360, a report of all paid access sold to female users for a particular time interval. Results that constitute such a report can be visualized in the viewer user interface via plots of various types, table(s), or a combination thereof.
  • the analytics subsystem 360 can use user profiles to generate predictions of customer orders and cross-promotions.
  • the customer order can include a food order, a beverage order, or similar order.
  • the food order can include a particular dish at a restaurant
  • a beverage order can include a particular soft drink at a bar.
  • a user profile contains various types of information identifying an end-user of the mobile application 274 and one or more user devices having the mobile application 274 stored thereon. The user device(s) are associated with that end-user.
  • Information contained in the user profile also can include data defining various user attributes of the end-user, such as identification attributes (anonymized or otherwise), user preferences, user favorites, a combination thereof, or the like.
  • An example of a user favorite can be a type of food, such as vegan dishes, or a particular dish (e.g., top sirloin steak with chimichurri sauce).
  • Another example of a user favorite can be a type of beverage, such as non-alcoholic beverages, or a particular type of drink (e.g., a Martini or Caipirinha).
  • At least some of the user attributes can be specific to one or more locations (e.g., location 102 (FIG. 1) and location 112 (FIG. 1)). Accordingly, a first subset of the user attributes can correspond to a first location, and a second subset of the user attributes can correspond to a second location.
  • the user profiles can be retained in the backend storage devices 140, within the account repository 144, for example.
  • the analytics subsystem 360 can generate a recommendation for a customer order by applying a predictive model to information retained in a user profile.
  • the recommendation can be specific to a location, in some cases.
  • the predictive model can be embodied in a machine-learning model that accesses input data (user profile favorites, for example), analyzes the data, assigns a confidence class, and generates a prediction output.
  • a user profile can include a favorite attribute indicative of a Martini as a favorite drink. Based on such a favorite attribute, the predictive model can yield a recommendation for other drinks that fit that type of drink.
  • a user profile can include a favorite attribute indicative of a particular food dish (e.g., a vegan dish). Based on that particular dish, the predictive model can yield a recommendation for other food dishes that fit that type of food dish.
  • the predictive model can yield a recommendation for both a beverage and food dish, such as a wine and a pasta dish.
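As a purely illustrative sketch of the favorites-driven recommendation, the following maps profile favorites to item types and suggests other menu items of the same type. The catalog, the type mapping, and the confidence classes are invented for the example; the disclosure only requires that a predictive model read profile data, assign a confidence class, and output a prediction.

```python
# Hypothetical favorites-based order recommendation. Catalog and
# confidence classes are invented for illustration.
CATALOG = {
    "Martini":       {"type": "cocktail"},
    "Negroni":       {"type": "cocktail"},
    "Caipirinha":    {"type": "cocktail"},
    "Vegan curry":   {"type": "vegan dish"},
    "Vegan lasagna": {"type": "vegan dish"},
}

def recommend(profile_favorites, location_menu):
    """Suggest menu items whose type matches a profile favorite."""
    fav_types = {CATALOG[f]["type"] for f in profile_favorites if f in CATALOG}
    picks = [item for item in location_menu
             if CATALOG.get(item, {}).get("type") in fav_types
             and item not in profile_favorites]
    # Crude confidence class: one clear favorite type -> high confidence.
    confidence = "high" if picks and len(fav_types) == 1 else "medium"
    return picks, confidence

menu = ["Negroni", "Caipirinha", "Vegan lasagna", "Top sirloin steak"]
print(recommend(["Martini"], menu))  # (['Negroni', 'Caipirinha'], 'high')
```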
  • the menu of configuration options 324 also can include a sixth configuration option that permits administering a subscription to the platform that maintains the backend server devices 130.
  • the sixth configuration option can be presented as a selectable visual element 426 (FIG. 4A). Selection of the selectable visual element 426 can cause the client device 310 to present another user interface that can permit entering and submitting various types of administrative information.
  • the administrative information can define an account manager for a subscriber.
  • Other administrative information can include streaming hours and business profile information (administrator account, directed content campaigns, notifications, and the like, for example).
  • the user interface that is presented in response to selection of the selectable visual element 426 can include UI elements from other user interfaces presented in response to selection of other selectable elements present in the user interface 320 (FIG. 4A). Accordingly, in one example, the user interface shown in FIG. 4B can include multiple UI elements 480 that permit configuring a streaming schedule.
  • the menu of configuration options 324 can include a configuration option that permits configuring one or multiple aspects of a subscriber account.
  • a subscriber account can correspond to a restaurant.
  • Selection of a selectable visual element corresponding to such a configuration option can cause the client device 310 to present a user interface (which can be referred to as menu creator) that permits creating and uploading a culinary menu to a profile corresponding to the subscriber account.
  • the culinary menu can be specific to a particular location identified in that profile. More specifically, the user interface can permit entering input data and selecting visual elements, aural elements, and/or other design elements that, individually or in combination, can define the culinary menu.
  • the input data can be entered into one or more fillable fields or selectable pane(s).
  • the client device 310 can then send such input data and/or design elements to the content provisioning subsystem 240 (FIG. 2A).
  • the content provisioning subsystem 240 can retain the input data and/or data identifying the design elements in the data repository 148.
  • the content distribution subsystem 210 can cause the user device 160 to present the culinary menu in the user interface 170, in conjunction with a video feed of the particular location or a section thereof. Subsequent access to the menu creator can permit modifying an extant culinary menu or creating another culinary menu.
  • a cocktail menu or a wine menu can be created.
  • the content distribution subsystem 210 can cause the user device 160 to present the cocktail menu or the wine menu, or both, in conjunction with a video feed of a bar area within the particular location.
  • the content distribution subsystem 210 can cause the user device 160 to present the culinary menu in conjunction with a video feed of a dining room within the particular location.
  • the directed content subsystem 220 can provide other functionality in some embodiments.
  • the directed content subsystem 220 can permit subscribers to place directed content assets and/or monetize space for a directed content asset (e.g., an advertisement) via: paid sponsored streams (algorithm based, for example); paid placement via category (algorithm based, for example); paid placement via geolocation (algorithm based, for example); paid placement via city, state, ZIP code (algorithm based, for example); paid placement via user profile criteria (algorithm based, for example); a combination thereof, or similar.
  • An example of a process for placement of directed content assets is illustrated in FIG. 4C. As is described herein, the illustrated process uses user information, such as user profiles and/or user accounts retained in a data repository (referred to as application user database(s)).
  • “application” refers to the mobile application 274 (FIG. 2B).
  • a directed content asset can be placed in a video feed (a live video feed or a time-shifted video feed, for example).
  • Directed content assets can be placed within a section of a user interface (UI 170 (FIG. 1), for example) that presents a stream of images generated by a camera device remotely located relative to the user device.
  • the directed content subsystem 220 can update a placed directed content asset in real-time or essentially real-time.
  • Directed content assets also can be presented as part of a push notification to a user device that executes the mobile application 274 while such a stream of images is not being presented at the user device.
  • the user device 160 can cross a geofence corresponding to a particular geographic region.
  • the user device 160 can supply location data to the directed content subsystem 220 as the user device 160 moves.
  • the directed content subsystem 220 can have access to real-time location data or essentially real-time location data of the user device 160.
  • the directed content subsystem 220 can determine that the user device 160 is present in the particular geographic zone and, in response, can identify a restaurant or a bar, for example, that has a subscriber account included in the subscriber accounts 146. As a result, the directed content subsystem 220 can then send a push notification to the user device 160.
  • the push notification includes a directed content asset and/or marking indicative of the directed content asset, where the directed content asset corresponds to the identified restaurant or bar.
  • the push notification can include text or other markings indicating that a promotion represented by the directed content asset is nearby, at the restaurant or bar. Therefore, in some cases, the directed content subsystem 220 can provide directed content assets to the user device 160 based on both the real-time location of the user device 160 and the presence of a business within a geofenced region identified using the real-time location.
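The geofence-triggered delivery described above reduces to a point-in-radius test on streamed location updates. In the sketch below, the geofence table, coordinates, and the send_push stand-in are assumptions for illustration; a production path would hand the notification to a real push service.

```python
# Sketch of geofence-triggered directed content delivery.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

GEOFENCES = [  # (subscriber, center_lat, center_lon, radius_m, asset)
    ("Corner Bar", 40.7415, -73.9900, 250, "2-for-1 happy hour tonight"),
]

def send_push(device_id, text):
    print(f"push -> {device_id}: {text}")  # stand-in for a real push service

def on_location_update(device_id, lat, lon):
    """Called as the user device streams real-time location data."""
    for subscriber, clat, clon, radius, asset in GEOFENCES:
        if haversine_m(lat, lon, clat, clon) <= radius:
            send_push(device_id, f"{subscriber} nearby: {asset}")

on_location_update("device-160", 40.7414, -73.9902)
```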
  • the directed content subsystem 220 also can implement a similar or same technique to cause a user device to present directed content assets at a user interface while executing the mobile application 274.
  • the user interface also can present a stream of images generated by a camera device remotely located relative to the user device.
  • the user interface can be embodied in, or can include, the UI 170.
  • FIG. 5 illustrates an example of an operational environment 500 for access to a service, in accordance with one or more embodiments of the disclosure.
  • the user device 160 can execute a client application (such as the mobile application 274 (FIG. 2B) or a web browser) to consume video data from one or more camera devices placed at various locations.
  • those locations can correspond to respective service providers that can allocate space (e.g., a venue or a dining table) for a defined period of time.
  • a service provider can be a restaurant.
  • the user device 160 can access several video feeds corresponding to respective locations and an end-user can peruse such video feeds.
  • the video feeds can be consumed using a user interface 510 presented in response to execution of the client application.
  • the user device 160 can select a particular location that satisfies one or more criteria, e.g., available space, conformity to one or more user preferences (vegan menu, sustainability of food sources, etc.), and the like. That particular location can be the location 102.
  • the user device 160 can apply a machine-learning model to features of one or more video frames of several video feeds for respective locations in order to generate a score for each one of the respective locations.
  • the user device 160 can generate a first score for a first location, a second score for a second location, and so forth up until a defined number of the respective locations has been evaluated.
  • Each one of the generated scores can represent a matching level between a location and end-user preferences and/or availability.
  • the user device 160 can generate a ranking of the generated scores and can select a score having a defined placement within the ranking (e.g., top-ranked score). The user device can then select the location corresponding to the selected score.
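A minimal sketch of that on-device scoring and ranking step follows. The feature names and the linear (dot-product) scoring function are placeholder assumptions; the disclosure requires only that a machine-learning model score frame features per location and that the top-ranked location be selected.

```python
# Sketch of on-device feed scoring and ranking. Features and weights are
# placeholders for whatever model the user device actually applies.
def score_location(frame_features, preferences):
    """Dot-product match between frame features and user preferences,
    both expressed over the same feature names."""
    return sum(frame_features.get(k, 0.0) * w for k, w in preferences.items())

def select_location(feeds, preferences):
    """feeds: {location: frame_features}. Returns the top-ranked location."""
    ranking = sorted(feeds,
                     key=lambda loc: score_location(feeds[loc], preferences),
                     reverse=True)
    return ranking[0]

preferences = {"open_tables": 0.6, "vegan_menu": 0.4}
feeds = {
    "location_102": {"open_tables": 0.9, "vegan_menu": 1.0},
    "location_112": {"open_tables": 0.2, "vegan_menu": 0.0},
}
print(select_location(feeds, preferences))  # location_102
```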
  • the user device 160 can present a video feed from a camera device at the selected location (e.g., camera device 104c) and multiple visual elements 518 that can permit making a reservation at the selected location.
  • the user device 160, in response to selection of a first selectable visual element of the selectable visual elements 518, can send control signaling 156 including a query for available time slots for service at the selected location.
  • the subsystems 138 can include a reservations subsystem 520 that can receive the query and, in response, can resolve the query using availability data for a subscriber account corresponding to the location.
  • the subscriber account can correspond to a bar or restaurant, for example.
  • the availability data can be retained in one of the data repositories 148.
  • the reservation subsystem 520 can then send content data 154 including the availability data to the user device 160.
  • the reservations subsystem 520 also can send formatting information that the user device 160 can use to present the availability data.
  • the user device 160 can receive the availability data and can present another user interface (either as a stand-alone user interface or an overlay on the user interface 510) that permits providing selection data for configuring a reservation of a selected available time slot and, in some cases, a particular space (such as a table, a room at a pet boarding facility, or a venue) within the selected location.
  • the user device 160 can send control signaling 156 including the selection data to the reservation subsystem 520.
  • the reservation subsystem 520 can generate reservation data identifying the selected available time slot and, in some cases, the particular space within the selected location.
  • the reservation subsystem 520 can send a confirmation message as part of content 154 sent to the user device 160.
  • the reservation subsystem 520 can send a message to a communication address corresponding to the user device 160.
  • the communication address can be embodied in an email address, for example.
  • the reservation subsystem 520 also can generate recommendations for time slots for reservation of space within a location corresponding to a service provider that has a particular subscriber account of the subscriber accounts 146.
  • a time slot can be defined in terms of a date and time interval.
  • the reservation subsystem 520 can analyze video feeds originated from the location (location 102, for example).
  • the reservation subsystem 520 can analyze images that constitute a video feed.
  • the video feed can be a historical video feed or a live video feed.
  • the images can be analyzed by applying a predictive model to the images.
  • the predictive model can detect presence of human(s) and particular objects within those images.
  • the particular objects can include, for example, a table, a booth, a bar stool, or a combination thereof.
  • the reservation subsystem 520 can determine tables or other spaces that are open within a location.
  • the predictive model can yield count data identifying a count of humans and can store the count data as a data set. The count data is referenced against day and time.
  • the output of such analysis, including human counting, is a predicted busy time based on the number of humans detected present at any given time. Over time, the predictive model can learn about busy times and slow times, and can generate recommendations for one or more available time slots based on predicted occupancy and available spaces within the location.
  • the reservation subsystem 520 can include the analytics subsystem 360 (FIG. 3A) described herein.
  • FIG. 5A presents an example of the predictive model embodied in a long short-term memory (LSTM) machine-learning model.
  • FIG. 5B illustrates an example of an update model used to incrementally update the predictive model, in accordance with one or more embodiments of the disclosure.
  • the update model can be based on an incremental-learning time-frequency network (IL-TFNet) model.
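For concreteness, a minimal PyTorch sketch of an LSTM occupancy predictor of the kind FIG. 5A suggests is shown below: a sequence of hourly head counts in, a next-hour count out. The layer sizes are arbitrary assumptions, and training (as well as the IL-TFNet incremental update) is omitted.

```python
# Minimal sketch, assuming PyTorch, of an LSTM occupancy predictor.
import torch
import torch.nn as nn

class OccupancyLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, counts):            # counts: (batch, seq_len, 1)
        out, _ = self.lstm(counts)
        return self.head(out[:, -1, :])   # predicted next-hour count

model = OccupancyLSTM()
recent = torch.tensor([[[5.], [9.], [14.], [22.], [31.]]])  # last 5 hourly counts
print(model(recent).item())  # untrained here, so the value is meaningless
```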
  • the reservation subsystem 520 can send a push notification to a user device (e.g., user device 160) including text and/or other indicia indicative of one or more recommended time slots for a reservation.
  • the reservation subsystem 520 can send a message to a communication address (e.g., an email address) corresponding to the user device.
  • the message can include text and/or other indicia indicative of the recommended time slot(s).
  • the reservation subsystem 520 also can generate reservation data in several other ways.
  • the viewing pane 514 can present indicia, such as marking and/or images, conveying an event at the location 102 (e.g., a dog kennel or a restaurant).
  • the selectable visual elements 518 can include a visual element that, in response to being selected, can permit generating a registration for the event at the location 102 (e.g., a dog pool party).
  • the reservation subsystem 520 can generate the registration and can send confirmation data to the user device 160, via the client application, for example, or to a communication address corresponding to the user account associated with the user device 160.
  • the reservation subsystem 520 can generate a booking for one or more areas of the location 102.
  • the reservation subsystem 520 can process a purchase transaction for a registration, a booking, and/or event tickets/passes for a particular event.
  • the reservation subsystem 520 can serve as a credit-card processor and/or can serve as an interface to a third-party credit-card processor.
  • the subsystems 138 also can include a digital concierge subsystem 530 that can connect an agent (an autonomous agent or a human agent) to the user device 160 by creating a chat session, for example, within the user interface 510.
  • the chat session can be presented in an overlay pane on the viewing pane 514.
  • the connection with the agent can be effected during a live presentation of video content — that is, during a live video feed.
  • the digital concierge subsystem 530 can be functionally coupled to the reservation subsystem 520.
  • the chat session can be used to complete a booking in real-time.
  • the disclosure is not limited to digital concierge subsystem 530 creating a chat session.
  • One or more other subsystems of the subsystems 138 can provide a real-time on-screen chat to interact with a video feed supplied by the content distribution subsystem 210 (FIG. 2A).
  • FIG. 6 illustrates an operational environment 600 for integration with third-party subsystems 610, in accordance with one or more embodiments of this disclosure.
  • Several types of integrations can be implemented.
  • one example integration can include integration of the subsystems 138 with a third-party service provider subsystem 620.
  • the third-party service provider subsystem 620 can be embodied in a ride-share service subsystem.
  • the gateways 134 can include a third-party gateway that permits exchanging data and/or signaling with the third-party service provider subsystem 620.
  • the user device 160 can present a user interface (e.g., UI 510; not depicted in FIG. 6) that includes one or more selectable visual elements that permit a one-click reservation from that user interface.
  • Such a user interface can be referred to as a streaming business profile.
  • the third-party service provider subsystem 620 can be embodied in, or can include, a food-delivery service subsystem.
  • the user device 160 can present a user interface (e.g., the streaming business profile) that includes one or more selectable visual elements that permit viewing menus and/or placing one-click orders directly from the streaming business profile.
  • the streaming business profile can permit requesting food delivery from a desired eatery (restaurant, diner, food truck, for example) via the food delivery service using the mobile application 274 (FIG. 2B) by means of API integration.
  • the user interface that permits viewing menus and placing one-click orders can be accessed via a function call to an API.
  • the mobile application 274 can provide access to the service provider API (e.g., food delivery service API).
  • the third-party service provider subsystem 620 can be embodied in, or can include, a rideshare service subsystem.
  • the user device 160 can present a user interface (e.g., the streaming business profile) that includes one or more selectable visual elements that permit requesting a ride via rideshare service subsystem using the mobile application 274 by means of API integration.
  • the user interface can permit selecting the type of vehicle for the ride and receiving a cost estimate for the ride.
  • the user interface also can permit the user device 160 to receive an estimated arrival time of the vehicle and an estimated drop-off time at a desired destination while remaining in the mobile application 274.
  • the subsystems 138 also can include a payment processor subsystem (not depicted in FIG. 6).
  • the payment processor subsystem can process payments for goods, services, and/or reservations via a digital wallet present in the user device 160. Reservations can include event tickets/passes.
  • the mobile application 274 can natively access the digital wallet to obtain a payment method for completing transactions, such as reservations and appointment bookings, as required by the business affiliate.
  • the mobile application 274 can access a payment integration API provided by an operating system (O/S) (e.g., iOS or Android) of the user device.
  • “natively” refers to utilizing device identifiers and O/S coupling to obtain the payment method via the API.
  • O/S coupling can be achieved by passing an authorization certificate associated with the mobile application 274 to the API.
  • the mobile application 274 can pass user data, such as a device ID, data defining products, data defining services, data defining amounts, and so forth, to complete a transaction.
  • another example integration can include integration of the subsystems 138 with a third-party social network subsystem 630.
  • the content distribution subsystem 210 (FIG. 2A) can permit sharing a video feed to the social network subsystem 630.
  • the user device 160 can present a user interface (not depicted in FIG. 6) that includes a selectable visual element adjacent to, or within, a viewing pane.
  • the user device can send an instruction and payload data (via control signaling 156 (FIG. 1), for example) to the content distribution subsystem 210.
  • the instruction can dictate that the video feed be supplied to a particular social-media account indicated by the payload data.
  • a subscriber that provides a video stream and/or a service in accordance with aspects of this disclosure can create a broadcaster channel.
  • the subscriber can supply a video stream and/or other digital content (e.g., directed content) to user devices from a mobile device or another type of client device.
  • the video stream supplied in the broadcaster channel can include video feeds from multiple camera devices (e.g., camera device 104a and camera device 114a) generating images essentially concurrently.
  • the subscriber can be a karaoke group that can stream video from 20 bars concurrently (e.g., the first location 102 (FIG. 1)).
  • the user device 160 can execute a client application (e.g., the mobile application 274) to present the user interface 170.
  • the user interface 170 can include a viewing pane 174 that presents a single video stream.
  • the user interface 170 also can include a menu of options — a carousel, an array of thumbnails, or a dropdown menu, for example — where each option corresponds to available video feeds from respective bars.
  • the user device 160 can receive a selection of an option on the menu of options to select a video feed to consume in the viewing pane 174.
  • the subscriber can permit peer-to-peer consumption of video streams via the broadcaster channel.
  • multiple user devices can send a video feed to the broadcaster channel via respective streaming business profiles.
  • each one of the business profiles can cause a respective user device to generate a video stream from imaging data generated by a camera device integrated into, or otherwise coupled to, the user device.
  • the subsystems 138 can include an aggregator subsystem that can receive video streams from respective ones of the multiple user devices.
  • the aggregator subsystem can route the video streams to the user device 160 when consuming the broadcaster channel.
  • multiple smartphones can stream video datasets to a particular broadcaster channel by sending video stream to the aggregator subsystem.
  • Each video dataset can correspond to video of a scene generated from respective vantage points.
  • a band can be playing at a venue and one smartphone can stream first video feed generated from a side of a stage where the band is located, another smartphone can stream second video feed generated from the back of the audience in the venue, and yet another smartphone can stream third video feed generated from a second floor above the stage.
  • the user device 160 can present the user interface 170 including the viewing pane 174 and the menu of options, where the menu of options includes the first, second, and third video feeds.
  • the user device 160 can receive a selection from the menu of options and can present the video feed corresponding to the vantage point that has been selected.
  • the client application (e.g., the mobile application 274 (FIG. 2B)) can provide a more immersive or otherwise enhanced user experience.
  • the subscriber can monetize the video stream by charging for access to the video feed for a period during which the video stream is broadcasted live.
  • a wedding venue could charge a fee to stream a video of a wedding on the broadcaster channel of the wedding venue.
  • the video stream can be free to watch and can be paid for by the wedding party.
  • end-users consuming the video stream can purchase one-time access or subscriptions.
  • a subscription can provide full access to the video content.
  • end-users can provide donations to the broadcaster channel.
  • the user can store historical video streams in packaged repositories and sell access to them as well (e.g., a sales professional selling a 10-steps-to-success video series). Historical video streams can be stored within the data repository 148 and/or third-party storage devices.
  • the analytics subsystem 360 can generate key performance indicators (KPIs), such as viewership, number of fans, and revenue generated for the streaming channel.
  • a client device (e.g., client device 310) can execute the client application 316 to present a dashboard to monitor one or more of the generated KPIs.
  • a broadcaster channel can be linked to a subscriber (or business affiliate). For example, if a band is playing from a bar (e.g., first location 102), the bar can tag the broadcaster channel of the band and followers on their business page (e.g., UI 170 (FIG. 1) or UI 510 (FIG. 5)).
  • the streaming subscriber can request and allow an individual broadcaster to stream on behalf of the business affiliate (e.g., a celebrity chef at a restaurant). This can help promote the business via social influencers and their audiences. Users can find and view streams from individual broadcasters at specific businesses or view the streams from the individual broadcaster's channel in the mobile application 274 (FIG. 2B), a web browser, or a similar application.
  • a broadcaster channel can be accessed in response to receiving a fee.
  • a DJ at a local club could stream their set for a fee.
  • User devices, such as the user device 160, can consume the video stream in response to effecting payment via an in-app purchase, by natively accessing a payment API available to the user device 160, in accordance with aspects described hereinbefore.
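The sketch below shows one way such a fee gate might be enforced server-side. Here, verify_receipt is a stand-in for the platform's native receipt-verification service (an assumed helper, not a real API), and the playback URL is a placeholder.

    def verify_receipt(receipt_token: str) -> bool:
        """Stand-in for platform receipt verification (e.g., app-store servers)."""
        raise NotImplementedError("delegate to the platform's receipt service")


    def stream_url_for(channel_id: str, receipt_token: str) -> str:
        """Release a playback URL only after the in-app purchase is verified."""
        if not verify_receipt(receipt_token):
            raise PermissionError(f"fee not paid for channel {channel_id}")
        # In practice this would likely be a signed, short-lived playback URL.
        return f"https://streams.example.invalid/{channel_id}/playlist.m3u8"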
  • FIG. 7 is a block diagram illustrating an example of a computing environment 700 for performing the disclosed methods and/or implementing the disclosed systems.
  • the computing environment 700 shown in FIG. 7 is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of operating environment architecture. Neither should the operating environment be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment.
  • the computer-implemented methods and systems in accordance with this disclosure can be operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods comprise, but are not limited to, personal computers, server computers, laptop devices, and multiprocessor systems. Additional examples comprise set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like.
  • the processing of the disclosed computer-implemented methods and systems can be performed by software components.
  • the disclosed systems and computer-implemented methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers or other devices.
  • program modules comprise computer code, routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote computer storage media, including memory storage devices.
  • the systems and computer-implemented methods disclosed herein can be implemented via a general-purpose computing device in the form of a computing device 701.
  • the components of the computing device 701 can comprise, but are not limited to, one or more processors 703, a system memory 712, and a system bus 713 that couples various system components including the one or more processors 703 to the system memory 712.
  • the system can utilize parallel computing.
  • the system bus 713 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures.
  • the bus 713, and all buses specified in this description, can also be implemented over a wired or wireless network connection, and each of the subsystems, including the one or more processors 703, a mass storage device 704, an operating system 705, software 706, data 707, a network adapter 708, the system memory 712, an Input/Output Interface 710, a display adapter 709, a display device 711, and a human-machine interface 702, can be contained within one or more remote computing devices 714a,b,c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system.
  • the computing device 701 can embody a user device (e.g., user device 160) in accordance with aspects described herein.
  • the software 706 can include the mobile application 274 (FIG. 2).
  • the computing device 701 typically comprises a variety of computer-readable media.
  • Such computer-readable media can be any available media that are accessible by the computing device 701 and comprise, by way of example and not limitation, both volatile and non-volatile media, and removable and non-removable media.
  • the system memory 712 comprises computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM).
  • the system memory 712 typically contains data such as the data 707 and/or program modules such as the operating system 705 and the software 706 that are immediately accessible to and/or are presently operated on by the one or more processors 703.
  • the computing device 701 can also comprise other removable/non-removable, volatile/non-volatile computer storage media.
  • FIG. 7 illustrates the mass storage device 704 which can provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computing device 701.
  • the mass storage device 704 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.
  • any number of program modules can be stored on the mass storage device 704, including by way of example, the operating system 705 and the software 706.
  • Each of the operating system 705 and the software 706 (or some combination thereof) can comprise elements of the programming and the software 706.
  • the data 707 can also be stored on the mass storage device 704.
  • the data 707 can be stored in any of one or more databases known in the art. Examples of such databases comprise DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like.
  • the databases can be centralized or distributed across multiple systems.
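By way of a brief sketch, the data 707 (here assumed to be stream metadata; the table layout is illustrative) could be persisted in any of those databases. SQLite is used below only to keep the example self-contained.

    import sqlite3

    # Illustrative persistence of the data 707 as stream-metadata rows.
    conn = sqlite3.connect("data_707.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS streams (
            stream_id  TEXT PRIMARY KEY,
            channel_id TEXT NOT NULL,
            vantage    TEXT,
            started_at TEXT NOT NULL
        )
    """)
    conn.execute(
        "INSERT OR REPLACE INTO streams VALUES (?, ?, ?, ?)",
        ("s-001", "venue-channel", "stage-side", "2022-12-27T20:00:00Z"),
    )
    conn.commit()
    rows = conn.execute(
        "SELECT vantage FROM streams WHERE channel_id = ?", ("venue-channel",)
    ).fetchall()
    print(rows)
    conn.close()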
  • the user can enter commands and information into the computing device 701 via an input device (not shown).
  • input devices comprise, but are not limited to, a keyboard, a pointing device (e.g., a "mouse"), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, and the like.
  • These and other input devices can be connected to the one or more processors 703 via the human-machine interface 702 that is coupled to the system bus 713, but can be connected by other interface and bus structures, such as a parallel port, game port, an IEEE 1394 Port (also known as a Firewire port), a serial port, or a universal serial bus (USB).
  • the display device 711 can also be connected to the system bus 713 via an interface, such as the display adapter 709. It is contemplated that the computing device 701 can have more than one display adapter 709 and the computing device 701 can have more than one display device 711.
  • the display device 711 can be a monitor, an LCD (Liquid Crystal Display), or a projector.
  • other output peripheral devices can comprise components such as speakers (not shown) and a printer (not shown) which can be connected to the computing device 701 via the Input/Output Interface 710. Any operation and/or result of the methods can be output in any form to an output device. Such output can be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like.
  • the display device 711 and computing device 701 can be part of one device, or separate devices.
  • the computing device 701 can operate in a networked environment using logical connections to one or more remote computing devices 714a,b,c.
  • a remote computing device can be a personal computer, portable computer, smartphone, a server, a router, a network computer, a peer device or other common network node, and so on.
  • Logical connections between the computing device 701 and a remote computing device 714a,b,c can be made via a network 715, such as a local area network (LAN) and/or a general wide area network (WAN).
  • the network adapter 708 can be implemented in both wired and wireless environments.
  • one or more of the remote computing devices 714a,b,c can comprise an external engine and/or an interface to the external engine.
  • Computer-readable media can comprise “computer storage media” and “communications media.”
  • “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer- readable instructions, data structures, program modules, or other data.
  • Exemplary computer storage media comprise, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • Embodiments of this disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium.
  • the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD- ROMs, optical storage devices, or magnetic storage devices, whether internal, networked, or cloud-based.
  • Embodiments of this disclosure have been described with reference to diagrams, flowcharts, and other illustrations of computer-implemented methods, systems, apparatuses, and computer program products.
  • processor-accessible instructions can include, for example, computer program instructions (e.g., processor-readable and/or processor-executable instructions).
  • the processor-accessible instructions can be built (e.g., linked and compiled) and retained in processor-executable form in one or multiple memory devices or one or many other processor-accessible non-transitory storage media.
  • These computer program instructions may be loaded onto a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine.
  • the loaded computer program instructions can be accessed and executed by one or multiple processors or other types of processing circuitry.
  • the loaded computer program instructions provide the functionality described in connection with flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination).
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including processor-accessible instructions (e.g., processor-readable instructions and/or processor-executable instructions) to implement the function specified in the flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination).
  • the computer program instructions (built or otherwise) may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process.
  • the series of operations can be performed in response to execution by one or more processors or other types of processing circuitry.
  • Such instructions that execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks (individually or in a particular combination) or blocks in block diagrams (individually or in a particular combination).
  • blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions in connection with such diagrams and/or flowchart illustrations, combinations of operations for performing the specified functions and program instruction means for performing the specified functions.
  • Each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by special purpose hardware-based computer systems that perform the specified functions or operations, or combinations of special purpose hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Technologies for consuming a live video stream from a remotely located camera device are provided. The live video stream can be consumed via a mobile application or a client application installed on a user device. Some of the technologies also permit interacting with a live video stream and analyzing video streams (live video streams or time-shifted video streams, or both). Various types of analyses can be performed, leading to the generation of recommendations for reservations of a space at a particular location, or a recommendation for items that can be consumed at a location where a video stream originates. Some of the technologies also permit supplying directed content assets to user devices. A directed content asset can be supplied as a push notification or can be presented within the mobile application.
PCT/US2022/054100 2021-12-28 2022-12-27 Consommation d'un flux vidéo en provenance d'un dispositif de caméra situé à distance WO2023129561A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163294244P 2021-12-28 2021-12-28
US63/294,244 2021-12-28

Publications (1)

Publication Number Publication Date
WO2023129561A1 true WO2023129561A1 (fr) 2023-07-06

Family

ID=87000174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/054100 WO2023129561A1 (fr) 2021-12-28 2022-12-27 Consommation d'un flux vidéo en provenance d'un dispositif de caméra situé à distance

Country Status (1)

Country Link
WO (1) WO2023129561A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090252383A1 (en) * 2008-04-02 2009-10-08 Google Inc. Method and Apparatus to Incorporate Automatic Face Recognition in Digital Image Collections
US20110007159A1 (en) * 2009-06-06 2011-01-13 Camp David M Video surveillance system and associated methods
US20140313341A1 (en) * 2010-05-14 2014-10-23 Robert Patton Stribling Systems and methods for providing event-related video sharing services
US20130286153A1 (en) * 2012-04-26 2013-10-31 Wizard Of Ads, Sunpop Studios Ltd System and Method for Remotely Configuring and Capturing a Video Production
US10270959B1 (en) * 2014-11-03 2019-04-23 Alarm.Com Incorporated Creating preview images for controlling pan and tilt cameras
US20200183975A1 (en) * 2015-04-16 2020-06-11 Diginary Software, Llc Video content optimization system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALI GHULAM, ALI TARIQ, IRFAN MUHAMMAD, DRAZ UMAR, SOHAIL MUHAMMAD, GLOWACZ ADAM, SULOWICZ MACIEJ, MIELNIK RYSZARD, FAHEEM ZAID BIN: "IoT Based Smart Parking System Using Deep Long Short Memory Network", ELECTRONICS, vol. 9, no. 10, pages 1696, XP093078539, DOI: 10.3390/electronics9101696 *
RICHARDS PAUL: "Using PTZ Cameras for Remote Production", STREAMGEEKS, 6 December 2021 (2021-12-06), XP093078541, Retrieved from the Internet <URL:https://streamgeeks.us/ptz-cameras-for-remote-production/> [retrieved on 20230904] *

Similar Documents

Publication Publication Date Title
US11544744B2 (en) Systems, devices, and methods for autonomous communication generation, distribution, and management of online communications
US11417085B2 (en) Systems and methods for automating benchmark generation using neural networks for image or video selection
US10380650B2 (en) Systems and methods for automating content design transformations based on user preference and activity data
US10628845B2 (en) Systems and methods for automating design transformations based on user preference and activity data
US20170185925A1 (en) Collaborative system with personalized user interface for organizing group outings to events
US20170053299A1 (en) System and methods for effectively taking surveys using mobile devices
JP2018110010A (ja) 消費者主導広告システム
US20130179440A1 (en) Identifying individual intentions and determining responses to individual intentions
WO2019171128A1 (fr) Filtres photo à pages multiples, exploitables sur photo, et éphémères, avec publicité de commandes et sur support, intégration automatisée de contenus externes, défilement d&#39;alimentation automatisé, modèle basé sur une publication publicitaire et actions et commandes de réaction sur des objets reconnus dans une photo ou une vidéo
US20200193475A1 (en) Apparatus, method and system for replacing advertising and incentive marketing
US20240193914A1 (en) Systems and Methods for Managing Computer Memory for Scoring Images or Videos using Selective Web Crawling
US20180047070A1 (en) System and method for providing a profiled video preview and recommendation portal
US20150371283A1 (en) System and method for managing or distributing promotional offers
US20140068515A1 (en) System and method for classifying media
US20130304541A1 (en) Consumer-initiated demand-driven interactive marketplace
Álvarez-Monzoncillo et al. Digital word of mouth usage in the movie consumption decision process: the role of Mobile-WOM among young adults in Spain
Roy et al. Restaurant analytics: Emerging practice and research opportunities
US20140067460A1 (en) System and method for simulating return
Kumar The Role of Digital Marketing on Customer Engagement in the Hospitality Industry
US10963921B2 (en) Presenting content to an online system user assigned to a stage of a classification scheme and determining a value associated with an advancement of the user to a succeeding stage
WO2023129561A1 (fr) Consommation d'un flux vidéo en provenance d'un dispositif de caméra situé à distance
US20220058753A1 (en) Systems and methods for intelligent casting of social media creators
US12020470B1 (en) Systems and methods for using image scoring an improved search engine
US20230106337A1 (en) Marketing management system, business management system, processes, and method for managing digital marketing profiles
Uhl et al. Customer centricity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22917291

Country of ref document: EP

Kind code of ref document: A1