CN111316656B - Computer-implemented method and storage medium - Google Patents

Computer-implemented method and storage medium

Info

Publication number
CN111316656B
CN111316656B (application CN201880071766.5A)
Authority
CN
China
Prior art keywords
video data
user
image capture
capture device
captured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880071766.5A
Other languages
Chinese (zh)
Other versions
CN111316656A
Inventor
文森特·查尔斯·张
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Inc
Original Assignee
Meta Platforms Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/121,060 (US10805521B2)
Priority claimed from US 16/121,087 (US10666857B2)
Priority claimed from US 16/121,081 (US10868955B2)
Application filed by Meta Platforms Inc
Priority to CN202310329426.0A (CN116193175A)
Priority to CN202310329439.8A (CN116208791A)
Publication of CN111316656A
Application granted
Publication of CN111316656B
Legal status: Active


Classifications

    • H04N 21/2743 Video hosting of uploaded data from client
    • G06F 16/24578 Query processing with adaptation to user needs using ranking
    • G06T 7/90 Determination of colour characteristics
    • G06V 40/172 Human faces; classification, e.g. identification
    • H04N 21/2187 Live feed
    • H04N 21/23418 Analysing video elementary streams, e.g. detecting features or characteristics
    • H04N 21/25808 Management of client data
    • H04N 21/25891 Management of end-user data being end-user preferences
    • H04N 21/4223 Cameras
    • H04N 21/4728 End-user interface for selecting a Region Of Interest [ROI]
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H04N 23/611 Control of cameras based on recognised objects, including parts of the human body
    • H04N 23/64 Computer-aided capture of images
    • H04N 23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N 23/69 Control of means for changing angle of the field of view, e.g. optical zoom
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation
    • H04W 24/02 Arrangements for optimising operational condition

Abstract

Various client devices include a display apparatus and one or more image capture devices configured to capture video data. Different users of the online system may authorize the client devices to exchange information captured by their respective image capture devices. Further, the client device modifies the captured video data based on the user identified in the video data. For example, the client device changes the parameters of the image capture device to more prominently display the user identified in the video data, and may also change the parameters of the image capture device based on the user's gestures or movements identified in the video data. The client device may apply multiple models to the captured video data to modify the captured video data or the subsequent capture of the video data by the image capture device.

Description

Computer-implemented method and storage medium
Background
The present disclosure relates generally to capturing video data and, more particularly, to modifying the capture of video data based on previously captured video data.
Client devices, online systems, and networks increasingly allow users to exchange content with each other. For example, online systems allow their users to exchange video data captured by different users via client devices associated with the users. In a particular example, an online system may establish video messaging between a user and another user, allowing the users to exchange video data captured by their respective client devices in real time or near real time.
However, when providing video data, conventional client devices require a user to manually configure the video capture. For example, a user of the client device provides input to the client device to identify a focus of an image capture device of the client device, specify a magnification of the image capture device, or set other parameters of the image capture device. Beyond providing this initial input, conventional client devices require the user to manually reposition the image capture device to capture a different portion of the local area within its field of view. Thus, the user manually selects and maintains the content captured by the image capture device of the client device that is transferred to another client device. While such reliance on user-provided input gives the user significant control over the captured video data, it prevents the user from easily performing other tasks during a video messaging session, because a conventional client device cannot adjust video data capture as conditions change without one or more user-provided inputs.
Summary
Various client devices associated with users of online systems include one or more image capture devices. An image capture device included in a client device is configured to capture video data of a local area surrounding the client device, for example, during a video call or when a user turns on a video capture function. In addition, the client device includes a controller coupled to the one or more image capture devices. The controller applies one or more models to video data captured by the image capture device and modifies the video data captured by the image capture device and/or parameters of the image capture device based on the application of the one or more models. This allows the controller to modify the captured video data based on the characteristics or content of the video data previously captured by the image capture device.
In various embodiments, the controller maintains and enforces one or more privacy settings for the user and others captured in the video data or other data. For example, the controller may have a default privacy setting that prevents the controller from identifying the user until the user manually changes the privacy setting to allow the controller to identify the user. The default privacy settings may also be extended to any captured video data, audio data, image data, or other data, so that the user may select whether to allow the image capture device to identify any user in the data. In addition, the privacy settings also manage the transfer of information from the client device to another entity (e.g., another client device or a third-party system). Various privacy settings allow a user to control the identification of the user and the storage and sharing of any user-related data. Privacy settings may also be implemented individually for each person. For example, a user who opts in to a user identification feature does not change the default privacy settings of other users who may be incidentally captured in the local area around the client device.
In various embodiments, based on a user's privacy selections that enable the client device to recognize the user, the controller applies one or more machine learning models to video data captured by the image capture device to locate the user included in the captured video data. The model applied by the controller to the captured video data may perform face tracking (two-dimensional or three-dimensional), two-dimensional pose tracking, three-dimensional pose tracking, or any other suitable method to identify portions of a person's face or portions of a person's body. In various embodiments, the controller modifies the captured video data or parameters of the image capture device to more prominently present the located user. For example, the controller crops the captured video data to remove a portion of the video data that does not include at least one person. As another example, the controller modifies the focus of the image capture device to a human face and increases the magnification (i.e., zooming) of the image capture device. In various embodiments, the user has the option to prevent any recordings (video, voice, etc.) from being stored locally in the client device and/or on the cloud, and also has the option to delete any recordings if they are saved.
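The cropping behavior described above can be illustrated with a small sketch. This is not the patented implementation; the function name, the (x, y, w, h) box format, and the margin parameter are assumptions made for illustration:

```python
def crop_to_subject(frame_w, frame_h, box, margin=0.25):
    """Expand a located person's bounding box (x, y, w, h) by a margin
    and clamp it to the frame bounds, yielding a crop rectangle that
    removes portions of the frame not containing the person."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    left, top = max(0, x - dx), max(0, y - dy)
    right = min(frame_w, x + w + dx)
    bottom = min(frame_h, y + h + dy)
    return (left, top, right - left, bottom - top)
```

A face or pose tracker would supply the bounding box; the crop (or an equivalent zoom and refocus of the image capture device) then presents the located user more prominently.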
In various embodiments, when user identification is enabled, one or more rules are applied by one or more models applied by the controller to modify video data captured by an image capture device of the client device. For example, if the controller also determines that a person located within the video data is facing the camera, the controller modifies the captured video data to display the person more prominently. In another example, the controller determines a distance between a person identified from the video data and the image capture device and modifies the captured video data such that the video data presents the person having the smallest determined distance from the image capture device in at least one set of threshold sizes (e.g., in at least a threshold height and a threshold width, or using at least a threshold percentage of the field of view of the image capture device), or displays the person having a determined distance from the image capture device less than the threshold distance in at least the set of threshold sizes.
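A minimal sketch of such a distance-based selection rule, assuming hypothetical field names (`distance_m`, `facing_camera`) and an illustrative threshold distance:

```python
def select_prominent_person(people, max_distance=3.0):
    """people: list of dicts with 'id', 'distance_m', 'facing_camera'.
    Prefer people facing the camera; among the candidates, pick the
    one closest to the image capture device, but only if that person
    is within the threshold distance. Returns an id or None."""
    facing = [p for p in people if p["facing_camera"]]
    candidates = facing or people
    nearest = min(candidates, key=lambda p: p["distance_m"])
    if nearest["distance_m"] <= max_distance:
        return nearest["id"]
    return None
```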
The controller may receive data from other components of the client device and modify the captured video data based on characteristics of the captured video data and the data received from the other components. For example, an image capture device or client device includes an audio capture device (e.g., a microphone) configured to capture audio data from a local area surrounding the client device. The controller may process the captured audio data along with the captured video data when modifying the captured video data, according to the privacy setting selected by the user. In various embodiments, the controller applies one or more models to the captured audio data to determine a location within the captured video data that includes the source of the audio data. The controller applies one or more models to locations within the captured video data that include the source of the audio data. In response to application of the model determining that the location within the captured video data that includes the audio data source includes a person, the controller modifies the captured video data to more prominently present the location within the captured video data that includes the audio data source, or repositions the image capture device to focus on the audio data source (e.g., increase a field of view of the image capture device occupied by the audio data source, change a location of the audio data source within the field of view of the image capture device). However, in response to determining that the location within the captured video data that includes the audio data source does not include a person, the controller does not modify the captured video data or reposition the image capture device.
As another example, the controller modifies the captured video data or repositions the image capture device to more prominently present the person identified within the captured video data that the controller determined was the source of the captured audio data (e.g., increase the field of view of the image capture device occupied by the audio data source, change the location of the audio data source within the field of view of the image capture device), which allows the captured video data to more prominently display the user determined to be speaking or otherwise providing the audio data captured by the audio capture device of the client device.
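The decision in the two paragraphs above reduces to testing whether the region of the frame containing the audio source also contains a located person. A sketch under the assumption that both are represented as axis-aligned (x, y, w, h) rectangles:

```python
def react_to_audio_source(source_region, person_regions):
    """Return 'refocus_on_source' if the region of the captured video
    containing the audio source overlaps any located person's region,
    else 'no_change' (e.g., the source is a television or ambient noise
    location that should not drive refocusing)."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
    if any(overlaps(source_region, p) for p in person_regions):
        return "refocus_on_source"
    return "no_change"
```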
In a video messaging session, or in other situations where a user has turned on video capture and user identification functionality, the controller may apply one or more models to video data captured after receiving information identifying a user of interest, modifying one or more parameters of the image capture device to follow the user of interest around the local area. For example, the controller applies one or more facial recognition models to a person located in captured video data to identify a face that matches the face of the user of interest (received from an online system or identified from previously captured video data based on information from the online system), and then repositions the focus of the image capture device to the person having the matching face. Alternatively, the controller extracts a color map from the captured video data including the user of interest and repositions the focus of the image capture device such that the extracted color map remains included in the video data captured by the image capture device.
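The color-map alternative can be sketched as a coarse color histogram plus a histogram-intersection similarity. The quantization scheme and function names here are illustrative assumptions, not the patent's method:

```python
from collections import Counter

def color_map(pixels, bins=4):
    """Quantize RGB pixels into a coarse color histogram ('color map').
    pixels: iterable of (r, g, b) tuples with channels in 0..255."""
    step = 256 // bins
    return Counter((r // step, g // step, b // step) for r, g, b in pixels)

def similarity(map_a, map_b):
    """Histogram intersection, normalized to 0..1. A tracker could
    refocus the image capture device on whichever frame region
    maximizes similarity to the user of interest's color map."""
    total = sum(map_a.values()) or 1
    shared = sum(min(map_a[k], map_b[k]) for k in map_a)
    return shared / total
```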
Embodiments according to the invention are specifically disclosed in the accompanying claims directed to methods and computer program products, wherein any feature mentioned in one claim category (e.g. method) may also be claimed in another claim category (e.g. computer program product, system, storage medium). The dependencies or back-references in the appended claims are chosen for formal reasons only. However, any subject matter resulting from an intentional back-reference (especially multiple references) to any preceding claim may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed, irrespective of the dependencies chosen in the appended claims. The subject matter which may be claimed comprises not only the combination of features as set out in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims may be combined with any other feature or combination of other features in the claims. Furthermore, any embodiments and features described or depicted herein may be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or in any combination with any feature of the appended claims.
Brief Description of Drawings
FIG. 1 is a block diagram of a system environment in which an online system operates, according to an embodiment.
Fig. 2 is a block diagram of a client device according to an embodiment.
FIG. 3 is a block diagram of an online system according to an embodiment.
Fig. 4 is a flow diagram of a method for selecting a video data segment for presentation to an interested user of an online system based on a likelihood that the interested user's gaze is directed toward a different location within the video data, according to an embodiment.
The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Detailed Description
System architecture
FIG. 1 is a block diagram of a system environment 100 for an online system 140. The system environment 100 shown in FIG. 1 includes one or more client devices 110, a network 120, one or more third party systems 130, and an online system 140. Additionally, in the system environment 100 shown in FIG. 1, a controller 210 is coupled to the client device 110. In alternative configurations, different and/or additional components may be included in the system environment 100. For example, the online system 140 is a social networking system, a content sharing network, or another system that provides content to users.
Client device 110 is one or more computing devices capable of receiving user input as well as sending and/or receiving data via the network 120. In one embodiment, the client device 110 is a conventional computer system, such as a desktop or laptop computer. Alternatively, the client device 110 may be a computer-enabled device, such as a personal digital assistant (PDA), mobile phone, smart phone, or other suitable device. The client device 110 is configured to communicate via the network 120. In one embodiment, the client device 110 executes an application that allows a user of the client device 110 to interact with the online system 140. For example, the client device 110 executes a browser application to enable interaction between the client device 110 and the online system 140 via the network 120. In another embodiment, the client device 110 interacts with the online system 140 through an application programming interface (API) running on a native operating system of the client device 110, such as IOS® or ANDROID™. As described further below in conjunction with Fig. 2, the client device 110 includes a display device 115 configured to present content and one or more image capture devices configured to capture image or video data of a local area surrounding the client device 110.
Client device 110 is configured to communicate via the network 120 using wired and/or wireless communication systems, and the network 120 may include any combination of local area and/or wide area networks. In one embodiment, the network 120 uses standard communication technologies and/or protocols. For example, the network 120 includes communication links using technologies such as Ethernet, 802.11, Worldwide Interoperability for Microwave Access (WiMAX), 3G, 4G, Code Division Multiple Access (CDMA), Digital Subscriber Line (DSL), and so forth. Examples of network protocols for communicating via the network 120 include Multiprotocol Label Switching (MPLS), Transmission Control Protocol/Internet Protocol (TCP/IP), Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), and File Transfer Protocol (FTP). Data exchanged over the network 120 may be represented using any suitable format, such as Hypertext Markup Language (HTML) or Extensible Markup Language (XML). In some embodiments, all or some of the communication links of the network 120 may be encrypted using any suitable technique or techniques.
One or more third party systems 130 may be coupled to the network 120 to communicate with the online system 140, as will be further described below in conjunction with FIG. 3. In one embodiment, the third party system 130 is an application provider that communicates information describing an application for execution by the client device 110 or communicates data to the client device 110 for use by an application executing on the client device. In other embodiments, the third party system 130 provides content or other information for presentation via the client device 110. The third-party system 130 may also transmit information, such as advertisements, content, or information about applications provided by the third-party system 130, to the online system 140.
Fig. 2 is a block diagram of an embodiment of a client device 110. In the embodiment shown in Fig. 2, the client device 110 includes a display device 115, an image capture device 117, and a controller 210. However, in other embodiments, the client device 110 includes different or additional components than those shown in Fig. 2.
The display device 115 may be integrated into the client device 110 or coupled to the client device 110. For example, the display device 115 integrated into the client device 110 is a display screen included in the client device 110. Alternatively, the display device 115 is a monitor or other display device coupled to the client device 110. The display device 115 presents image data or video data to a user. The image or video data presented by the display device 115 is determined by an application executing on the client device 110. Different applications may be included on client device 110 such that execution of the different applications changes the content presented to the user by display device 115.
Image capture device 117 captures video data or images of a local area around the client device 110 and within the field of view of the image capture device 117. In some embodiments, the image capture device 117 includes one or more cameras, one or more video cameras, or any other device capable of capturing image data or video data. Additionally, the image capture device 117 may include one or more filters (e.g., to increase signal-to-noise ratio). Various parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture) configure the capture of video or image data by the image capture device 117. Thus, modifying one or more parameters of the image capture device 117 modifies the video data or image data captured after the modification. Although Fig. 2 shows a single image capture device 117 included in the client device 110, in other embodiments the client device 110 includes any suitable number of image capture devices 117. In various embodiments, the user has the option to prevent any recordings (video, voice, etc.) from being stored locally on the client device and/or in the cloud, and also has the option to delete any recordings that are saved.
The controller 210 is coupled to the image capture device 117 and includes a memory device coupled to a processor. In various embodiments, the controller 210 is also coupled to the display device 115. The controller 210 includes instructions that, when executed by the processor, apply one or more models to video data captured by the image capture device 117. In various embodiments, one or more models are applied to any combination of video data, audio data, image data, or data captured by image capture device 117 or any other device included in or coupled to client device 110. As described further below in conjunction with fig. 4, the model applied to the captured video data by the controller 210 applies one or more rules to characteristics of the captured video data to identify objects, people, movement, or any other suitable content in the captured video data. Based on the application of the model and in accordance with one or more privacy settings, controller 210 modifies the captured video data or modifies one or more parameters of image capture device 117 such that subsequently captured video data is modified. For example, the user may authorize controller 210 to apply a model that locates the user in the captured video data based on characteristics of the captured video data and modify the captured video data to more prominently include the located user, or modify one or more parameters of image capture device 117 (e.g., focus, magnification or zoom, crop of the captured video data) such that the additional video data more prominently includes the located user. 
The additional video data more prominently includes the located person by presenting the located person at at least a threshold size (e.g., with at least a threshold height or a threshold width), presenting the located person within at least a threshold amount of the field of view of the image capture device 117 or within at least a threshold number of frames of the captured video data, or presenting the located person at one or more particular locations within the captured video data. However, the model applied by controller 210 may identify any suitable component of the video data captured by image capture device 117 and modify parameters of image capture device 117 or modify the captured video data accordingly.
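As a concrete illustration of the thresholds described above, the following sketch checks whether a located person is presented prominently enough in a frame and, if not, computes the magnification that would raise the person to a target height. The threshold values and function names are hypothetical, not taken from this disclosure.

```python
# Illustrative sketch only: deciding whether a located person is presented
# "prominently" enough, using assumed threshold values for relative height,
# relative width, and fraction of the field of view covered.

def is_prominent(person_box, frame_size,
                 min_height=0.5, min_width=0.2, min_coverage=0.15):
    """person_box = (x, y, w, h) in pixels; frame_size = (W, H) in pixels."""
    x, y, w, h = person_box
    W, H = frame_size
    coverage = (w * h) / (W * H)          # fraction of the frame occupied
    return (h / H >= min_height or
            w / W >= min_width or
            coverage >= min_coverage)

def zoom_factor_to_satisfy(person_box, frame_size, target_height=0.5):
    """Magnification that would raise the person's relative height to the target."""
    _, _, _, h = person_box
    _, H = frame_size
    current = h / H
    return 1.0 if current >= target_height else target_height / current
```

A controller could use such a predicate to decide when to adjust focus, zoom, or crop so that subsequently captured video more prominently includes the located person.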
In various embodiments, the client device 110 includes one or more audio capture devices (e.g., microphones). For example, the client device 110 includes a microphone array configured for two-dimensional or three-dimensional beamforming. The audio capture devices capture audio signals from different zones within a local area around the client device 110. In various embodiments, one or more audio capture devices are coupled to the controller 210, which maintains information identifying different zones in the local area around the client device 110; for example, controller 210 identifies twenty-four 15-degree zones radiating from a point within client device 110, thereby covering the full 360-degree local area around client device 110.
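The zone bookkeeping described above can be sketched as follows; the function names and the angle convention are illustrative assumptions:

```python
# Sketch of dividing the 360-degree local area around the device into
# 24 zones of 15 degrees each, as in the example above.

ZONE_WIDTH_DEG = 15
NUM_ZONES = 360 // ZONE_WIDTH_DEG   # 24 zones

def zone_for_angle(angle_deg):
    """Map an angle in degrees (measured from the device) to a zone index 0-23."""
    return int(angle_deg % 360) // ZONE_WIDTH_DEG

def zone_bounds(zone_index):
    """Return the (start, end) angles in degrees covered by a zone."""
    start = zone_index * ZONE_WIDTH_DEG
    return start, start + ZONE_WIDTH_DEG
```

A controller maintaining such a mapping can associate each captured audio signal with the zone it came from before applying any recognition models.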
One or more audio capture devices are coupled to the controller 210. In accordance with the user-selected privacy settings, the controller 210 applies one or more models (e.g., machine learning models or other sound recognition models) to audio data captured from zones in the local area surrounding the client device 110. The controller 210 maintains information identifying users or objects (e.g., a television, a mobile device) and applies one or more models to audio captured from a zone in the local area around the client device 110 to determine whether the captured audio data includes audio data from a user or object identified by the controller 210 or merely ambient noise. In some embodiments, one or more models applied by the controller 210 determine the particular user or particular object, from among those identified by the controller 210, from which the one or more audio capture devices captured audio in the zone. In other embodiments, the client device 110 communicates audio data captured by the one or more audio capture devices to the online system 140, and the online system 140 applies one or more models to determine whether the audio data includes audio data captured from a recognized object or user, or to determine the particular recognized user or object from which the audio data was captured. The online system 140 provides an indication to the client device 110 of whether the captured audio data includes audio data captured from a recognized object or user, or provides information specifying the particular recognized object or user from which the audio data was captured. The controller 210 or the online system 140 similarly determines whether audio was captured from other zones in the local area around the client device 110. Based on the determination of the identified objects or users from which audio data was captured in the different zones, the controller 210 modifies the positioning of the one or more audio capture devices to improve the quality of the audio captured from one or more zones.
For example, the controller 210 repositions one or more audio capture devices to improve the quality of audio captured from a zone of the local area from which audio data was captured from a particular user or a particular object. Similarly, controller 210 may reposition image capture device 117 or otherwise modify one or more parameters of image capture device 117 based on a zone in the local area around client device 110 from which audio data was captured from different users or objects. In various embodiments, the one or more audio capture devices and image capture device 117 may be directed toward different portions of the local area around client device 110. For example, image capture device 117 is oriented toward an object described by the user, while controller 210 directs one or more audio capture devices toward a zone in the local area around client device 110 from which audio data from a particular user was captured.
In various embodiments, the online system 140 and the controller 210 of the client device 110 cooperatively and/or separately maintain and enforce one or more privacy settings of a user or person identified from captured video data or other data. The privacy settings of a user or person determine how particular information associated with the user or person may be shared and may be stored in association with information identifying the user or person. In some embodiments, the controller 210 retrieves privacy settings for one or more users maintained by the online system 140. In one embodiment, the privacy settings specify particular information associated with the user and identify other entities with which the specified information may be shared. Examples of entities with which information may be shared may include other users, applications, third party systems 130, or any entity that may potentially access information. Examples of information that may be shared by users include: image data containing a user or person, audio data containing audio captured from a user or person, video data containing a user or person, and the like.
For example, in particular embodiments, the privacy settings may allow the first user to specify (e.g., by opting in or opting out) whether the online system 140 may receive, collect, record, or store particular objects or information associated with the user for any purpose. In particular embodiments, the privacy settings may allow the first user to specify whether a particular video capture device, audio capture device, application, or process may access, store, or use particular objects or information associated with the user. The privacy settings may allow the first user to opt in or opt out of having a particular device, application, or process access, store, or use the objects or information. The online system 140 may access such information to provide a particular function or service to the first user, while the online system 140 may not access the information for any other purpose. Before accessing, storing, or using such objects or information, the online system may prompt the user to provide privacy settings specifying which applications or processes (if any) may access, store, or use the objects or information before allowing any such actions. By way of example and not limitation, the first user may send a message to a second user via an application (e.g., a messaging app) related to the online social network, and may specify privacy settings stating that such messages should not be stored by online system 140.
The privacy settings maintained and enforced by the online system 140 and/or the controller 210 may be associated with default settings. In various embodiments, controller 210 does not identify a user within captured video data, audio data, image data, or other data unless controller 210 obtains authorization from the user via the user's privacy settings. For example, the privacy settings associated with the user have a default setting that prevents the controller 210 from identifying the user, so the controller 210 does not identify the user unless the user manually changes the privacy settings to allow the controller 210 to do so. Further, in various embodiments, a separate privacy setting manages the transmission of information identifying the user from the client device 110 to another entity (e.g., another client device 110, the online system 140, a third-party system 130). In various embodiments, this separate privacy setting has a default setting that prevents transmission of information identifying the user, which prevents the controller 210 from transmitting such information to other entities unless the user manually modifies the setting to authorize the transmission. The controller 210 maintains one or more privacy settings for each user identified from captured video data or other data, allowing user-specific control over the identification of each user and the transmission of information about each user. In some embodiments, when controller 210 initially identifies a person from the captured data, controller 210 prompts the person to provide privacy settings and stores the provided privacy settings in association with information identifying the person.
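A minimal sketch of the default-deny privacy bookkeeping described above, assuming two illustrative setting names (the actual settings and their storage are not specified by this disclosure):

```python
# Hedged sketch: identification and transmission are both disabled by default,
# and remain disabled until the user explicitly opts in. Setting names are
# illustrative assumptions.

DEFAULTS = {"allow_identification": False, "allow_transmission": False}

class PrivacySettings:
    def __init__(self):
        self._by_user = {}          # user id -> settings dict

    def settings_for(self, user_id):
        # Users without stored settings fall back to the restrictive defaults.
        return self._by_user.get(user_id, dict(DEFAULTS))

    def opt_in(self, user_id, setting):
        s = self._by_user.setdefault(user_id, dict(DEFAULTS))
        s[setting] = True

    def may_identify(self, user_id):
        return self.settings_for(user_id)["allow_identification"]

    def may_transmit(self, user_id):
        return self.settings_for(user_id)["allow_transmission"]
```

Under this scheme a controller would check `may_identify` before applying recognition models and `may_transmit` before sending identifying information to any other entity.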
In various embodiments, for components of the online system 140 and/or client device 110 whose functionality may use the user's personal information or biometric information as input for user authentication or experience personalization, the user may choose to make use of these functions to enhance their experience with the device and online system. By way of example and not limitation, a user may voluntarily provide personal information or biometric information to the online system 140. The user's privacy settings may specify that such information may only be used for a particular process (e.g., authentication), and further specify that such information may not be shared with any third party or used for other processes or applications associated with the online system 140. As another example and not by way of limitation, online system 140 may provide functionality for a user to provide voiceprint records to an online social network. By way of example and not limitation, if a user wishes to take advantage of this functionality of the online social network, the user may provide a voice recording of his or her own voice for providing status updates on the online social network. The recording of the voice input may be compared to the user's voiceprint to determine what the user said. The user's privacy settings may specify that such voice recordings may only be used for voice input purposes (e.g., authenticating the user, sending voice messages, improving voice recognition to use voice-operated functions of the online social network), and further specify that such voice recordings may not be shared with any third-party system or used by other processes or applications associated with the online system 140. As another example and not by way of limitation, online system 140 may provide functionality for a user to provide a reference image (e.g., a facial contour) to an online social network.
The online social network may compare the reference image to later-received image input (e.g., to authenticate the user or tag the user in a photo). The user's privacy settings may specify that such reference images may only be used for limited purposes (e.g., authentication, tagging the user in a photo), and further specify that such reference images may not be shared with any third-party system or used by other processes or applications associated with the online system 140. Any such restrictions on capturing biometric data and/or other personal data may also apply to client device 110.
A user may authorize capturing data, identifying the user, and/or sharing and using user-related data across applications in one or more ways. For example, the user may pre-select various privacy settings before using the functionality of the client device 110 and/or taking actions in the online system 140. In another case, a selection dialog may be presented when the user first performs an action or uses a function of the client device 110 and/or the online system 140, and/or when the user has not performed an action or used a function for a predetermined period of time. In yet another example, when certain functions that require user data begin to operate or are disabled due to a user's selections, the client device 110 and the online system 140 may also provide a notification to the user so that the user can make further selections through the notification. Other suitable ways of obtaining the user's authorization are also possible.
In some embodiments, the controller 210 obtains information maintained by the online system 140 or from one or more third-party systems 130 for a user identified from captured video data, according to the user's privacy settings. Based on the obtained information and on video data, audio data, image data, or other data of the user previously captured by the client device 110, the controller 210 may generate content for presentation to the user via the client device 110. For example, the controller 210 overlays content items from the online system 140 that are associated with one or more objects identified by the controller 210 from video data or image data captured by the client device 110. Alternatively, the online system 140 generates content for the user based on video data, image data, audio data, or other data received from the client device 110 that includes the user, together with information maintained by the online system 140 for the user (or information obtained by the online system 140 from one or more third-party systems 130), and provides the generated content to the client device 110 for presentation to the user.
FIG. 3 is a block diagram of the architecture of the online system 140. The online system 140 shown in FIG. 3 includes a user profile store 305, a content store 310, an action logger 315, an action log 320, an edge store 325, a content selection module 330, and a web server 335. In other embodiments, the online system 140 may include additional, fewer, or different components for various applications. Conventional components (e.g., network interfaces, security functions, load balancers, failover servers, management and network operations consoles, etc.) are not shown so as not to obscure the details of the system architecture.
Each user of the online system 140 is associated with a user profile that is stored in the user profile store 305. The user profile includes declarative information about the user that is explicitly shared by the user, and may also include profile information inferred by the online system 140. In one embodiment, the user profile includes a plurality of data fields, each describing one or more attributes of the corresponding online system user. Examples of information stored in a user profile include biographical information, demographic information, and other types of descriptive information (e.g., work experience, educational history, gender, hobbies or preferences, location, etc.). The user profile may also store other information provided by the user, such as images or videos. In some embodiments, a user's images may be tagged with information identifying the online system users displayed in the images, with tagged images of a user stored in that user's profile. The user profiles in the user profile store 305 may also maintain references to actions performed by the corresponding users on content items in the content store 310 and stored in the action log 320.
Further, the user profile maintained for a user includes characteristics of one or more client devices 110 associated with the user, allowing the online system 140 to subsequently identify the user from characteristics provided by the client devices 110. For example, an application associated with the online system 140 and executing on the client device 110 provides a device identifier or other information uniquely identifying the client device 110 to the online system 140 in association with a user identifier. The online system 140 stores the device identifier or other information uniquely identifying the client device 110 in the user profile maintained for the user, which allows subsequent identification of the user if the online system 140 receives the device identifier or other information uniquely identifying the client device 110. Other characteristics of the client devices 110 associated with the user may alternatively or additionally be included in the user profile maintained for the user. For example, the user profile includes a network address used by a client device 110 to access the network 120, an identifier of an application executing on a client device 110 from which the online system 140 receives information, a type of the client device 110 from which the online system 140 receives information (e.g., a manufacturer identifier, a model of the client device 110, etc.), and an operating system executing on a client device 110 from which the online system 140 receives information. However, the online system 140 may store any suitable characteristics of a client device 110 in the user profile, allowing the online system 140 to maintain information about client devices 110 used by the user corresponding to the user profile.
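The device-identifier lookup described above can be sketched as follows; the field names and in-memory storage are assumptions for illustration, not the disclosed implementation:

```python
# Sketch: a profile store that records device identifiers per user, so a
# later-received device identifier can be resolved back to a user.

class UserProfileStore:
    def __init__(self):
        self._profiles = {}         # user id -> profile dict

    def add_profile(self, user_id, device_ids=(), network_address=None):
        self._profiles[user_id] = {
            "device_ids": set(device_ids),
            "network_address": network_address,
        }

    def user_for_device(self, device_id):
        """Identify the user whose profile stores this device identifier."""
        for user_id, profile in self._profiles.items():
            if device_id in profile["device_ids"]:
                return user_id
        return None
```

In practice such a lookup would be indexed rather than a linear scan, but the mapping from device identifier to user profile is the point being illustrated.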
While the user profiles in the user profile store 305 are typically associated with individuals, allowing individuals to interact with each other via the online system 140, the user profiles may also be stored for entities such as businesses or organizations. This allows entities to establish presence on the online system 140 for contacting and exchanging content with other online system users. An entity may use a brand page associated with the entity's user profile to publish information about itself, about its products, or to provide other information to users of the online system 140. Other users of the online system 140 may be affiliated with the brand page to receive information published to or from the brand page. The user profile associated with the brand page may include information about the entity itself to provide the user with background or informational data about the entity.
The content store 310 stores objects that each represent various types of content. Examples of content represented by an object include page posts, status updates, photos, videos, links, shared content items, game application achievements, check-in events at local businesses, brand pages, or any other type of content. Online system users may create objects stored by the content store 310, such as status updates, photos, events, groups, or applications that users tag as being associated with other objects in the online system 140. In some embodiments, objects are received from third-party applications, including third-party applications independent of the online system 140. In one embodiment, objects in the content store 310 represent a single piece of content, or a content "item". Thus, online system users are encouraged to communicate with each other by publishing text and content items of various types of media to the online system 140 via various communication channels. This increases the amount of interaction users have with each other and increases the frequency with which users interact within the online system 140.
One or more content items included in the content store 310 include a creative, which is content for presentation to a user, and a bid amount. The creative is text, an image, audio, video, or any other suitable data presented to a user. In various embodiments, the creative also specifies a page of content. For example, a content item includes a link specifying a network address of a landing page to which a user is directed when the content item is accessed. The bid amount is included in a content item by a user and is used to determine an expected value, such as monetary compensation, provided by an advertiser to the online system 140 if content in the content item is presented to a user, if the content in the content item receives a user interaction while presented, or if any suitable condition is satisfied while content in the content item is presented to a user. For example, the bid amount included in a content item specifies a monetary amount that the online system 140 receives from the user who provided the content item to the online system 140 if content in the content item is displayed. In some embodiments, the expected value to the online system 140 of presenting content from the content item may be determined by multiplying the bid amount by the probability that a user accesses the content of the content item.
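The expected-value rule stated above (bid amount multiplied by the probability that a user accesses the content) can be sketched as a one-line computation; the function name is illustrative:

```python
# Expected value of presenting a content item: bid amount times the
# probability of user interaction, as described above.

def expected_value(bid_amount, p_interaction):
    """Expected compensation to the online system for showing the content."""
    return bid_amount * p_interaction
```

For example, a content item with a bid of 2.0 and a 10% interaction probability has an expected value of 0.2 to the online system.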
Various content items may include an objective identifying an interaction that a user associated with the content item desires other users to perform when presented with content included in the content item. Example objectives include: installing an application associated with the content item, indicating a preference for the content item, sharing the content item with other users, interacting with an object associated with the content item, or performing any other suitable interaction. As content from a content item is presented to online system users, the online system 140 logs interactions between users presented with the content item or with objects associated with the content item. Further, the online system 140 receives compensation from the user associated with a content item when online system users perform interactions with the content item that satisfy the objective included in the content item.
Further, a content item may include one or more targeting criteria specified by the user who provides the content item to the online system 140. Targeting criteria included in a content item request specify one or more characteristics of users eligible to be presented with the content item. For example, targeting criteria are used to identify users having user profile information, edges, or actions satisfying at least one of the targeting criteria. Hence, targeting criteria allow a user to identify users having specific characteristics, simplifying subsequent distribution of content to different users.
In one embodiment, targeting criteria may specify a type of action or connection between a user and another user or object of the online system 140. Targeting criteria may also specify interactions between a user and objects performed external to the online system 140 (e.g., on a third-party system 130). For example, targeting criteria identify users who have taken a particular action (e.g., sent a message to another user, used an application, joined or left a group, joined an event, generated an event description, purchased or reviewed a product or service using an online marketplace, requested information from a third-party system 130, installed an application, or performed any other suitable action). Including actions in targeting criteria allows a user to further refine the users eligible to be presented with the content item. As another example, targeting criteria identify users having a connection to another user or object, or having a particular type of connection to another user or object.
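A hedged sketch of the eligibility rule described above: a user qualifies for a content item when their characteristics satisfy at least a threshold number of its targeting criteria, and items with no targeting criteria are eligible for everyone. Modeling a criterion as a predicate over user attributes, and the threshold default of one, are illustrative assumptions:

```python
# Sketch: eligibility check against targeting criteria, where each criterion
# is a predicate over a dict of user attributes.

def eligible(user_attrs, criteria, threshold=1):
    """criteria: list of functions taking user_attrs and returning a truthy value."""
    if not criteria:                     # no targeting criteria: all users eligible
        return True
    satisfied = sum(1 for c in criteria if c(user_attrs))
    return satisfied >= threshold
```

A content selection process could apply this filter first, then rank the surviving candidates by relevance or expected value.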
Based on the privacy settings, the action logger 315 may be authorized to receive communications regarding user actions internal and/or external to the online system 140, populating the action log 320 with information regarding the user actions. Examples of actions include adding a connection to another user, sending a message to another user, uploading an image, reading a message from another user, viewing content associated with another user, and attending an event published by another user. Further, many actions may involve one object and one or more particular users, and thus these actions are also associated with a particular user and stored in the action log 320.
Based on the privacy settings, the action log 320 may be authorized by the user to be used by the online system 140 to track user actions on the online system 140, as well as actions on third-party systems 130 that communicate information to the online system 140. Users may interact with various objects on the online system 140, and information describing these interactions is stored in the action log 320. Examples of interactions with objects include: commenting on posts, sharing links, checking in at a physical location via the client device 110, accessing content items, and any other suitable interactions. Additional examples of interactions with objects on the online system 140 that are included in the action log 320 include: reviewing photo albums, communicating with users, establishing connections with objects, joining events, joining groups, creating events, authorizing applications, using applications, expressing preferences for objects ("liking" objects), and engaging in transactions. Additionally, the action log 320 may record a user's interactions with advertisements on the online system 140 as well as with other applications operating on the online system 140. In some embodiments, data from the action log 320 is used to infer interests or preferences of a user, augmenting the interests included in the user's profile and allowing a more complete understanding of user preferences.
The action log 320 may also store user actions taken on third-party systems 130 (e.g., external websites) and communicated to the online system 140, depending on the user's privacy settings. For example, an e-commerce website may recognize a user of the online system 140 through a social plug-in that enables the e-commerce website to identify the user of the online system 140. Because users of the online system 140 are uniquely identifiable, the e-commerce website in the preceding example may communicate information about a user's actions outside of the online system 140 for association with the user. Hence, the action log 320 may record information about actions users perform on third-party systems 130, including web browsing histories, advertisements that were engaged with, purchases made, and other patterns from shopping and buying. Additionally, actions a user performs via an application associated with a third-party system 130 and executing on a client device 110 may be communicated by the application to the action logger 315 for recording in the action log 320 and association with the user.
In one embodiment, the edge store 325 stores information describing connections between users and other objects on the online system 140 as edges. Some edges may be defined by users, allowing users to specify their relationships with other users. For example, users may generate edges with other users that parallel the users' real-life relationships (e.g., friends, colleagues, partners, etc.). Other edges are generated when users interact with objects in the online system 140, such as expressing interest in a page on the online system 140, sharing a link with other users of the online system 140, and commenting on posts made by other users of the online system 140.
The edges may include various features, each of which represents a characteristic of an interaction between users, an interaction between a user and an object, or an interaction between objects. For example, features included in edges describe the rate of interaction between two users, how recently two users have interacted with each other, the rate or amount of information one user retrieves about an object, or the number and type of comments a user posts about an object. These features may also represent information describing a particular object or user. For example, the characteristics may represent a user's level of interest in a particular topic, the rate at which the user logs into the online system 140, or information describing demographic information about the user. Each feature may be associated with a source object or user, a target object or user, and a feature value. Features may be specified as expressions based on values describing a source object or user, a target object or user, or interactions between a source object or user and a target object or user; thus, an edge may be represented as one or more feature expressions.
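The edge-with-features structure described above, where each feature pairs a name with a value describing an interaction between a source and a target, can be sketched as a small data type; the names are illustrative:

```python
# Sketch: an edge between a source and a target, carrying named features
# such as an interaction rate or a recency value.

class Edge:
    def __init__(self, source, target):
        self.source = source
        self.target = target
        self.features = {}           # feature name -> value

    def set_feature(self, name, value):
        self.features[name] = value

    def feature(self, name, default=0.0):
        return self.features.get(name, default)
```

An edge represented this way can be summarized as one or more feature expressions, as the paragraph above notes, by combining the stored feature values.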
The edge store 325 also stores information about edges, such as affinity scores for objects, interests, and other users. The online system 140 may calculate affinity scores, or "affinities", over time to approximate a user's interest in another user, object, or topic in the online system 140 based on the actions performed by the user. The calculation of affinity is further described in U.S. patent application No. 12/978,265, filed on 23 December 2010, U.S. patent application No. 13/690,254, filed on 30 November 2012, U.S. patent application No. 13/689,969, filed on 30 November 2012, and U.S. patent application No. 13/690,088, filed on 30 November 2012, each of which is hereby incorporated by reference in its entirety. In one embodiment, multiple interactions between a user and a particular object may be stored as a single edge in the edge store 325. Alternatively, each interaction between a user and a particular object is stored as a separate edge. In some embodiments, connections between users may be stored in the user profile store 305, or the user profile store 305 may access the edge store 325 to determine connections between users.
The content selection module 330 selects one or more content items for transmission to the client device 110 for presentation to a user. Content items eligible for presentation to the user are retrieved from the content store 310, or from another source, by the content selection module 330, which selects one or more of the content items for presentation to the viewing user. A content item eligible for presentation to the user is a content item associated with at least a threshold number of targeting criteria satisfied by characteristics of the user, or a content item that is not associated with targeting criteria. In various embodiments, the content selection module 330 includes content items eligible for presentation to the user in one or more selection processes that identify a set of content items for presentation to the user. For example, the content selection module 330 determines measures of relevance of various content items to the user based on attributes associated with the user by the online system 140 and based on the user's affinity for different content items. A measure of relevance of a content item to the user is based on a measure of quality of the content item for the user, which may be based on the creative included in the content item and the content of a landing page identified by a link in the content item. Based on the measures of relevance, the content selection module 330 selects content items for presentation to the user. For example, the content selection module 330 selects content items having the highest measures of relevance, or having at least a threshold measure of relevance, for presentation to the user. Alternatively, the content selection module 330 ranks content items based on their associated measures of relevance and selects content items having the highest positions in the ranking, or having at least a threshold position in the ranking, for presentation to the user.
Content items eligible for presentation to the user can include content items associated with bid amounts. The content selection module 330 uses the bid amounts associated with content items when selecting content for presentation to the user. In various embodiments, the content selection module 330 determines an expected value associated with various content items based on their bid amounts and selects content items associated with a maximum expected value, or with at least a threshold expected value, for presentation. An expected value associated with a content item represents an expected amount of compensation to the online system 140 for presenting that content item. For example, the expected value associated with a content item is a product of the content item's bid amount and a likelihood of the user interacting with the content item. The content selection module 330 may rank content items based on their associated bid amounts and select content items having at least a threshold position in the ranking for presentation to the user. In some embodiments, the content selection module 330 ranks both content items not associated with bid amounts and content items associated with bid amounts in a unified ranking based on the bid amounts and relevance metrics associated with the content items. Based on the unified ranking, the content selection module 330 selects content for presentation to the user. Selecting content items associated with bid amounts and content items not associated with bid amounts through a unified ranking is further described in U.S. patent application No. 13/545,266, filed on 10 July 2012, which is hereby incorporated by reference in its entirety.
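The unified ranking described above (items with bid amounts scored by expected value, items without bid amounts scored by their relevance metric alone, then ranked together) might be sketched as follows; placing both kinds of score on a single scale is a simplifying assumption of this sketch:

```python
# Sketch of a unified ranking: bid-backed items scored by expected value
# (bid x interaction probability), unbid items by relevance.

def unified_rank(items):
    """items: list of dicts with 'relevance' and optional 'bid' / 'p_click' keys."""
    def score(item):
        if "bid" in item:
            return item["bid"] * item.get("p_click", 0.0)
        return item["relevance"]
    return sorted(items, key=score, reverse=True)
```

For example, an unbid item with relevance 0.3 would rank below a bid-backed item whose expected value is 0.5, and above one whose expected value is 0.1.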
For example, the content selection module 330 receives a request to present an information stream (feed) of content to a user of the online system 140. The information stream includes content items, such as stories describing actions associated with other online system users connected to the user. The content selection module 330 accesses one or more of the user profile store 305, the content store 310, the action log 320, and the edge store 325 to retrieve information about the user. For example, information describing actions associated with other users connected to the user, or other data associated with users connected to the user, is retrieved. The content selection module 330 retrieves and analyzes content items from the content store 310 to identify candidate content items that are eligible for presentation to the user. For example, content items associated with users not connected to the user, or stories associated with users having less than a threshold affinity for the user, are discarded as candidate content items. Based on various criteria, the content selection module 330 selects one or more of the content items identified as candidate content items for presentation to the identified user. The selected content items are included in an information stream of content presented to the user. For example, the information stream of content includes at least a threshold number of content items describing actions associated with users connected to the user via the online system 140.
In various embodiments, the content selection module 330 presents content to the user by way of an information stream that includes a plurality of content items selected for presentation to the user. The content selection module 330 may also determine the order in which the selected content items are presented via the information stream. For example, the content selection module 330 orders content items in the information stream based on the likelihood of user interaction with various content items.
Based on the user's actions or permissions, the content selection module 330 receives video data captured by the image capture device 117 included in a client device 110 associated with a user of the online system and sends the video data to the receiving client device 110 for presentation to the viewing user via the display device 115. The online system 140 may receive a request from the client device 110 identifying a viewing user, and then provide video data from the client device 110 to the receiving client device 110 in response to receiving authorization from the viewing user. Alternatively, the online system 140 receives a request from a viewing user via the receiving client device 110, and then provides the video data received from the client device 110 to the receiving client device 110 in response to receiving authorization from the user. This allows different users of the online system 140 to exchange, via the online system 140, video data captured by the client devices 110 associated with the users.
Further, the content selection module 330 may receive instructions from the viewing user via the receiving client device 110 and send one or more instructions to the client device 110. Based on the received instruction, the client device 110 modifies video data captured after receiving the instruction, or modifies one or more parameters of the image capture device 117 based on the instruction. Thus, client device 110 modifies the captured video data based on one or more instructions from receiving client device 110 and sends the modified video data or video data captured by image capture device 117 using the modified parameters to content selection module 330, which content selection module 330 sends the video data to receiving client device 110, as further described below in conjunction with fig. 4. This allows the viewing user to modify or adjust the video data captured by the client device 110 and provided to the viewing user via the receiving client device 110.
In various embodiments, the content selection module 330 implements one or more privacy settings of the user of the online system 140. The privacy settings of the user determine how particular information associated with the user may be shared and may be stored in the user profile of the user in the user profile store 305. In one embodiment, the privacy settings specify particular information associated with the user and identify other entities with which the specified information may be shared. Examples of entities with which information may be shared may include other users, applications, third party systems 130, or any entity that may potentially access information. Examples of information that may be shared by a user include user profile information (e.g., profile photos), phone numbers associated with the user, affiliations of the user, video data including the user, actions taken by the user (e.g., adding affiliations), changing user profile information, and so forth. In various embodiments, the online system 140 maintains privacy settings associated with the user that have default settings that prevent other entities from accessing or receiving content associated with the user, and allows the user to modify different privacy settings to allow other entities specified by the user to access or retrieve content corresponding to the modified privacy settings.
The privacy settings specifications may be provided at different levels of granularity. In one embodiment, the privacy settings may identify particular information to be shared with other users. For example, the privacy settings identify a work phone number or a particular set of related information (e.g., personal information including profile photos, home phone numbers, and status). Alternatively, the privacy settings may be applied to all information associated with the user. The specification of the set of entities that may access particular information may also be specified at different levels of granularity. The various sets of entities with which information may be shared may include, for example, all users associated with the user, a group of users associated with the user, additional users associated with the user, all applications, all third-party systems 130, a particular third-party system 130, or all external systems.
One embodiment uses enumeration of entities to specify entities that are allowed to access the identified information or to identify the type of information that is presented to different entities. For example, a user may specify the type of action to transmit to other users or to a specified group of users. Alternatively, the user may specify the type of action or the type of other information not published or presented to other users.
The content selection module 330 includes logic that determines whether certain information associated with a user may be accessed by other users, third party systems 130, and/or other applications and entities affiliated with the user via the online system 140. Based on the user's privacy settings, the content selection module 330 determines whether another user, a third-party system 130, an application, or another entity is allowed to access information associated with the user (including information about actions taken by the user). For example, the content selection module 330 uses the privacy settings of the user to determine whether video data including the user may be presented to another user. This enables the user's privacy settings to specify which other users or other entities are allowed to receive data about the user's actions or other data associated with the user.
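A default-deny privacy check of the kind described above can be sketched as follows; this is an illustrative assumption about how such logic might be structured, not the system's actual code, and all names are hypothetical.

```python
def may_access(requester, info_type, privacy_settings):
    """Return True if requester may access info_type under the user's settings.
    Default settings prevent access unless the user has opted in."""
    rule = privacy_settings.get(info_type)
    if rule is None:
        # No setting for this information type: deny by default.
        return False
    return requester in rule["allowed_entities"]

# Example settings: profile photo shared only with two specified entities.
settings = {"profile_photo": {"allowed_entities": {"friend_a", "friend_b"}}}
```

The granularity discussed above maps to the keys of `privacy_settings`: a per-item key such as `"profile_photo"` for specific information, or a broader key covering a set of related information.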
The web server 335 links the online system 140 via the network 120 to the one or more client devices 110 and to the one or more third-party systems 130. The web server 335 serves web pages and other content, such as XML and the like. The web server 335 may receive and route messages (e.g., instant messages, queued messages (e.g., email), text messages, Short Message Service (SMS) messages, or messages sent using any other suitable messaging technique) between the online system 140 and the client device 110. The user may send a request to the web server 335 to upload information (e.g., images or videos) stored in the content store 310. In addition, the web server 335 may provide Application Programming Interface (API) functionality to send data directly to a native client device operating system, such as ANDROID™ or BlackberryOS.
Modifying video data capture based on characteristics of previously captured video data
FIG. 4 is an interaction diagram of one embodiment of a method for modifying the capture of video data by image capture device 117 based on characteristics of video previously captured by image capture device 117. In various embodiments, the steps described in conjunction with fig. 4 may be performed in a different order. Further, in some embodiments, the method may include different and/or additional steps than those shown in fig. 4.
As further described above in connection with fig. 1 and 2, an image capture device 117 is included in sending client device 110A and captures 405 video data of a local area around sending client device 110A. The image captured by image capture device 117 is communicated to controller 210 included in sending client device 110A (or coupled to client device 110 in other embodiments). In various embodiments, the user may authorize (e.g., through selection of pre-selected privacy settings and/or prompts) controller 210 to apply one or more machine learning models to characteristics of video captured 405 by image capture device 117 to locate persons included in the captured video data. In various embodiments, the controller 210 modifies the video data to more prominently present the located user and sends 410 the modified video data to the online system 140. The located user is presented more prominently by being presented in the video data in at least one set of threshold sizes (e.g., in at least a threshold height or a threshold width), in at least a threshold amount of the field of view of the image capture device 117 or in at least a threshold amount of frames of the captured video data, or in one or more particular locations within the captured video data. For example, the controller 210 clips the captured video data to remove a portion of the video data that does not include at least one person. As another example, the controller 210 increases the zoom (also referred to as magnification) of a portion of the video data including a person. To modify the captured video data, controller 210 may modify the video data after it is captured by image capture device 117, or may modify one or more parameters of image capture device 117 to modify how image capture device 117 captures 405 the video data.
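The cropping modification described above (removing portions of the video data that do not include the located person) can be sketched as follows. This is a simplified illustration assuming a frame represented as rows of pixels and a bounding box from a person-locating model; all names are hypothetical.

```python
def crop_to_person(frame, bbox, margin=0):
    """Crop a frame (a list of pixel rows) to the bounding box of a
    located person, optionally keeping a margin around the box."""
    top, left, bottom, right = bbox
    top = max(0, top - margin)
    left = max(0, left - margin)
    # Rows outside [top, bottom) and columns outside [left, right) are removed.
    return [row[left:right + margin] for row in frame[top:bottom + margin]]

# A 4x6 frame of labeled pixels; the "person" occupies rows 1-2, columns 2-4.
frame = [[f"{r},{c}" for c in range(6)] for r in range(4)]
cropped = crop_to_person(frame, bbox=(1, 2, 3, 5))
```

Increasing zoom over the person's region would follow the same pattern: select the region, then scale it up to the output resolution instead of returning it directly.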
In various embodiments, controller 210 applies one or more methods to locate a person within the captured video data. However, the controller 210 may similarly locate objects (e.g., appliances, furniture, products) by applying one or more models to the captured video data. Although the following examples refer to applying models to video data, one or more models may be applied to video data, audio data, image data, any other data captured by client device 110, and any combination thereof. Controller 210 may use any suitable model or combination of models to locate a person within video data captured 405 by image capture device 117. The model applied to the captured video data by the controller 210 may perform face tracking (two-dimensional or three-dimensional), two-dimensional pose tracking, three-dimensional pose tracking, or any other suitable method to identify portions of a person's face or portions of a person's body. Similarly, the model applied by the controller 210 may identify objects from the captured video data. In some embodiments, based on the user's authorization, the controller 210 communicates with the online system 140 to more specifically identify objects or people based on information obtained from the online system 140, while in other embodiments, the controller 210 maintains a model locally to identify different objects or people from the captured video data. Based on the application of the one or more models, the controller 210 may modify the cropping or scaling of the captured video data including the portions of the body of the located user to more prominently display the portions of the body of the located user. For example, when one or more models identify a person's face, the controller modifies the captured video data to remove portions of the video data that do not include the person's face. 
If the application of one or more models locates multiple people in the captured video data, the controller 210 modifies the captured video data such that different portions of the video show different people. For example, the controller 210 divides the captured video data into a grid, each region of the grid displaying one or more portions of a different person. In other embodiments, controller 210 increases the magnification (i.e., zooms) of image capture device 117 over a portion of video data that includes a portion of a person. Accordingly, the controller 210 may crop portions of the captured video data or increase the magnification (i.e., zoom) of portions of the captured video data to modify the video data to more prominently present portions of one or more individuals located within the captured video data. Further, when modifying the captured video data based on application of one or more models, the controller 210 may apply the one or more models to stabilize the modified video data to present portions of the one or more located persons with higher quality.
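Dividing the captured video into a grid with one region per located person requires choosing grid dimensions; a minimal sketch of one plausible rule (a near-square grid, which is an assumption, not the patent's stated method) follows.

```python
import math

def grid_layout(num_people):
    """Choose a near-square (rows, cols) grid with at least one
    region per located person."""
    cols = math.ceil(math.sqrt(num_people))
    rows = math.ceil(num_people / cols)
    return rows, cols
```

For example, four located people yield a 2x2 grid, and five people yield a 2x3 grid with one region unused.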
Based on the privacy settings, controller 210 may also apply one or more models to locate the part of the identified person's body and modify one or more parameters of image capture device 117 or the video data captured by image capture device 117 to modify the part of the located person's body included in the captured video data. For example, the controller 210 locates different joints of the identified person's body and modifies the captured video data or parameters of the image capture device 117 to include joints corresponding to different parts of the located person's body in the captured video data. Thus, the controller 210 may modify whether the captured video data includes a head of a person, a head and a torso of a person, or a whole body of a person. The controller 210 may include various rules that modify portions of a person's body included in captured video data based on content included in previously captured video data, movement identified in previously captured video data, or any other suitable characteristic identified from video data.
In various embodiments, one or more models applied by controller 210 apply one or more rules to modify video data captured 405 by image capture device 117 of sending client device 110A. For example, if the controller 210 also determines that the face of a person located from the video data is facing the camera, the controller 210 modifies the captured video data to more prominently display the person located from the video data by modifying the scaling of the captured data over a portion of the video data where the person is located, or by modifying the cropping of a portion of the video data where the person is located to remove objects other than the person. As an example, if one or more models applied by controller 210 determine that a person's face is oriented toward image capture device 117 (e.g., if one or more particular features of the person's face are captured by image capture device 117), controller 210 modifies the captured video data to more prominently display the user's face. In another example, controller 210 determines a distance between a person located within the video data and image capture device 117 and modifies the captured video data to prominently display the person having the smallest determined distance from image capture device 117 or to prominently display the person having a determined distance from the image capture device that is less than a threshold distance. In another example, as a person or object moves, controller 210 applies one or more models to reposition image capture device 117, allowing video data captured by image capture device 117 to track the movement of the person or object.
The one or more models applied by the controller 210 may modify the captured video data based on rules that take into account the locations of the plurality of persons identified in the captured video data. In various embodiments, the user may authorize the controller 210 to locate the user in the captured video, the controller 210 applying a model to the captured video that determines the location within the captured video data toward which the identified user's gaze is directed. In response to determining that at least a threshold number or quantity of positioned users have gaze toward a location within the captured video data that includes a particular person, the controller 210 modifies the captured video data to more prominently display the particular person (e.g., cropping the captured video data to remove content other than the particular person, increasing the magnification or zoom of the particular person). As another example, controller 210 determines the distance between different people located within the video data and modifies the captured video data to more prominently display people within a threshold distance of each other; this allows the controller 210 to modify the captured video data by cropping or scaling a portion of the video data where a group of people are located. Further, the controller 210 may remove one or more frames from the captured video data based on objects or persons identified within the captured video data; for example, if less than a threshold number of objects or people are identified within the captured video data, or if less than a threshold amount of movement of the objects or people identified within the captured video data is determined, the controller 210 removes frames from the captured video data before sending 410 the captured video data to the online system 140. 
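The gaze-based rule above (prominently display a person toward whom at least a threshold number of located users are looking) can be sketched as a simple count over per-user gaze targets. The representation of gaze output as a person identifier is an assumption for illustration.

```python
from collections import Counter

def gaze_target(located_people, gaze_targets, threshold):
    """Return the person receiving at least `threshold` gazes from
    located users, or None if no person meets the threshold."""
    counts = Counter(gaze_targets)              # person_id -> number of gazes
    target, n = counts.most_common(1)[0]        # most-gazed-at person
    if n >= threshold and target in located_people:
        return target
    return None

# Three of four located users are looking at person "p2".
target = gaze_target({"p1", "p2", "p3"}, ["p2", "p2", "p2", "p1"], threshold=3)
```

When a target is returned, the controller would then crop or zoom toward that person as described above.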
In other embodiments, the online system 140 removes frames from the video data received from the sending client device 110A using the above criteria before sending 415 the captured video data to the receiving client device 110B, as described further below.
The controller 210 may receive data from other components of the sending client device 110A and modify the captured video data based on characteristics of the received video data and the data from the other components of the sending client device 110A. For example, image capture device 117 or client device 110 includes an audio capture device (e.g., a microphone) configured to capture audio data from a local area surrounding client device 110. When modifying the captured video data, the user may authorize controller 210 to process the captured audio data along with the captured video data. In various embodiments, controller 210 applies one or more models to the captured audio data to determine a location within the captured video data that includes the source of the audio data. Controller 210 applies one or more models to locations within the captured video data that include the source of the audio data. In response to application of the model determining that the location within the captured video data that includes the source of audio data includes a person, the controller 210 modifies the captured video data to more prominently present the location within the captured video data that includes the source of audio data (i.e., increasing the zoom of the location of the source that includes the captured video data, or cropping the location of the source that includes the captured video data to remove objects other than the source of the captured video data), or repositions the image capture device 117 to focus on the source of audio data. However, in response to determining that the location within the captured video data that includes the audio data source does not include a person, the controller 210 does not modify the captured video data or reposition the image capture device 117. 
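The audio-driven rule above reduces to a containment test: zoom toward the localized audio source only if a located person overlaps that location. The following sketch assumes bounding boxes of the form (top, left, bottom, right); the names are illustrative.

```python
def frame_adjustment(audio_source_bbox, person_bboxes):
    """Decide whether to zoom toward the audio source: only do so when a
    located person's bounding box overlaps the source's location."""
    def overlaps(a, b):
        # Boxes are (top, left, bottom, right); disjoint if separated on
        # either axis.
        return not (a[2] <= b[0] or b[2] <= a[0] or
                    a[3] <= b[1] or b[3] <= a[1])
    if any(overlaps(audio_source_bbox, p) for p in person_bboxes):
        return "zoom_to_source"
    return "no_change"   # source is not a person; leave capture unchanged
```

A source overlapping a person triggers the modification; a source in an empty part of the frame leaves the video data and the image capture device 117 unchanged.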
As another example, controller 210 modifies the captured video data or repositions image capture device 117 to more prominently present persons located within the captured video data that controller 210 determines are the source of the captured audio data, allowing the captured video data to more prominently display persons determined to be speaking or otherwise providing audio data captured by the audio capture device of sending client device 110A (i.e., increasing the zoom of the location including persons determined to be providing audio data, or cropping the location of the source including the captured video data to remove objects other than the persons determined to be providing audio data).
In some embodiments, the user may also authorize controller 210 to apply one or more models to modify captured video data or parameters of image capture device 117 of client device 110 based on video data previously captured 405 by image capture device 117. For example, if controller 210 locates multiple people in captured video data, controller 210 modifies the captured video data or one or more parameters of imaging device 117 such that each located person is prominently presented (e.g., presented in the video data in at least one set of threshold sizes, presented in at least a threshold amount of the field of view of image capture device 117 or in at least a threshold amount of the frames of the captured video data, or presented at one or more particular locations within the captured video data) for a minimum amount of time in the captured video. As another example, according to privacy settings, controller 210 stores information identifying people located in captured video data that have been prominently presented in the captured video data for at least a threshold amount of time. When controller 210 locates another person in the captured video data, controller 210 compares the other person to stored information identifying the person that has been prominently presented in the captured video data. In response to determining that the stored information does not identify the other person, controller 210 modifies the captured video data or modifies one or more parameters of image capture device 117 to prominently display the other person for at least a threshold amount of time. This allows controller 210 to modify the video data such that everyone that controller 210 locates in the video data is prominently displayed for at least a threshold amount of time.
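The minimum-prominence rule above can be sketched by tracking, per located person, how long they have already been prominently presented, and selecting someone who has not yet reached the threshold. The time bookkeeping shown is an assumption made for illustration.

```python
def next_to_feature(located_ids, prominent_seconds, min_seconds):
    """Pick a located person who has not yet been prominently shown
    for at least min_seconds; return None if everyone has."""
    for person in located_ids:
        if prominent_seconds.get(person, 0) < min_seconds:
            return person
    return None   # everyone has already had the minimum prominent time

# "alice" has been featured for 12 s, "bob" for only 2.5 s,
# and "carol" has not been seen before.
history = {"alice": 12.0, "bob": 2.5}
```

A newly located person absent from the stored history is treated as having zero prominent time, so they are selected next, matching the behavior described above.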
Further, controller 210 may modify the captured video data or parameters of image capture device 117 in response to identifying movement of a person located in the captured video data. For example, if one or more models applied by the controller 210 to the captured video determine that the located person is gesturing towards the object, the controller 210 modifies the captured video such that the located person and the object are prominently presented. As one example, if the captured video prominently displays the located person (e.g., displays the located person in at least a set of threshold sizes, or displays the located person occupying at least a threshold amount of one or more frames of the video data), and the controller determines that the located person is gesturing toward the object, the controller 210 modifies the captured video such that the located person and the object are presented in the video data. For example, the controller 210 reduces the magnification of the captured video data so that both the located person and the object are included in the captured video. In another example, if controller 210 applies one or more models that determine that the person being located is holding the object, controller 210 modifies one or more parameters of the captured video data or image capture device 117 such that the object is prominently rendered (e.g., changes the focus of imaging device 117 to the object and increases the zoom of image capture device 117). In some embodiments, controller 210 may prominently present the object held by the located person in the captured video data for a particular amount of time and then modify the captured video data or parameters of image capture device 117 so that the video data again captured by the located person is prominently presented.
The sending client device 110A sends 410 video data (which may be modified by the controller 210 as further described above) from the sending client device 110A to the online system 140, and the online system 140 sends 415 the captured video data to the receiving client device 110B. Using the display device 115, the receiving client device 110B presents 420 the video data from the online system 140 to a viewing user of the online system 140. In various embodiments, the viewing user communicates a request to communicate with the sending client device 110A from the receiving client device 110B to the online system 140. The online system 140 communicates the request to the sending client device 110A, and the sending client device 110A provides a response to the online system 140. If the sending client device 110A provides authorization to the online system 140 in response to the request, the online system 140 transmits video data captured 405 by the image capture device 117 of the sending client device 110A and provided to the online system 140 to the receiving client device 110B for presentation on the display device 115, and vice versa.
In various embodiments, the user may authorize (e.g., based on predetermined privacy settings and by user action) the sending client device 110A to provide the online system 140 with information identifying one or more users corresponding to the people the controller 210 locates within the captured video, with the information identifying the one or more users included when the video data is sent 420 to the receiving client device 110B. Alternatively, the sending client device 110A identifies to the online system 140 a portion of the video data that includes a person located by the controller 210, and the online system 140 compares the portion of the video data in which the person located by the controller 210 appears with stored images identifying online system users. Based on the privacy settings, the online system 140 retrieves information identifying users identified by stored images that the online system 140 determines to have at least a threshold similarity to the portion of the video data from the sending client device 110A in which the person located by the controller 210 appears. This allows the online system 140 to identify users of the online system 140 who are included in the video data received from the sending client device 110A, if those users choose to opt in to such an identification feature. The online system 140 may apply one or more facial recognition processes or other recognition processes to the portion of the received video data in which the person located by the controller 210 appears and to images stored by the online system 140 that identify users (e.g., profile pictures in users' user profiles, images including a user's face tagged with identifying information) to determine whether the person located in the portion of the received video data is an online system user.
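The threshold-similarity matching above can be sketched with face embeddings and cosine similarity. This is one common way such matching is done and is an assumption here; the patent does not specify the similarity measure, and all names and the two-dimensional embeddings are illustrative.

```python
def identify_user(person_embedding, stored_embeddings, threshold):
    """Match a face embedding from the video against stored user images;
    return the best-matching user with at least `threshold` similarity."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        def norm(v):
            return sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))

    best_user, best_sim = None, threshold
    for user_id, emb in stored_embeddings.items():
        sim = cosine(person_embedding, emb)
        if sim >= best_sim:               # keep the highest match at/above threshold
            best_user, best_sim = user_id, sim
    return best_user

# Toy stored embeddings for two users (illustrative 2-D vectors).
stored = {"u1": [1.0, 0.0], "u2": [0.0, 1.0]}
match = identify_user([0.9, 0.1], stored, threshold=0.8)
```

A person whose embedding matches no stored image at the threshold yields no identification, consistent with the opt-in, threshold-similarity behavior described above.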
For example, the online system 140 augments the video data from the sending client device 110A such that information identifying an online system user (e.g., first and last name, email address) is superimposed on a portion of the video data from the sending client device 110A that includes a person identified by the online system 140 as an online system user. The online system 140 sends 420 the enhanced video data to the receiving client device 110B. Alternatively, the online system 140 generates information identifying the online system users corresponding to the located people in the video data received from the sending client device 110A and sends the information identifying the online system users corresponding to the people in the video, along with the video data, to the receiving client device 110B. For example, the online system 140 generates a list of first and last names or a list of user names of online system users corresponding to people in the video data received from the sending client device 110A for presentation to the viewing user by the receiving client device 110B in conjunction with the video data.
In some embodiments where the user authorizes various user-related data to be used to improve the user experience of the online system 140, the online system 140 may consider the affinity between the viewing user associated with the receiving client device 110B and a user identified in the video data received from the sending client device 110A when generating information identifying the online system users corresponding to the people located in the video data received from the sending client device 110A. For example, the online system 140 generates identifying information for online system users identified in the video data received from the sending client device 110A who have connections to the viewing user maintained by the online system 140; in some embodiments, the online system 140 may not generate identifying information for online system users identified in the video data received from the sending client device 110A who are not connected to the viewing user. Alternatively, the online system 140 visually distinguishes the identifying information of online system users identified from the received video data who are connected to the viewing user from the identifying information of online system users who are not connected to the viewing user. As another example, the online system 140 determines the viewing user's affinity for each of the online system users identified in the video data received from the sending client device 110A and modifies the presentation of the identifying information of the online system users identified in the video data. For example, the online system 140 generates information identifying the online system users identified from the received video data that visually distinguishes the identified online system users for whom the online system 140 determines the viewing user has at least a threshold affinity.
As another example, the online system 140 ranks the information identifying online system users identified from the received video data based on the viewing user's affinity for the online system users identified from the received video data; the online system 140 generates information that visually distinguishes the information identifying online system users identified from the received video data who have at least a threshold position in the ranking from the information identifying other online system users identified from the received video data. In another embodiment, the online system 140 generates the information identifying the online system users identified from the received video data based on the viewing user's affinity for those users.
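The affinity-based ranking and threshold-position highlighting described above might be sketched as follows; the data layout and names are illustrative assumptions.

```python
def label_identifications(identified_users, affinities, top_k):
    """Order identifying information by the viewing user's affinity and
    visually distinguish users at or above a threshold ranking position."""
    ranked = sorted(identified_users,
                    key=lambda u: affinities.get(u, 0.0),
                    reverse=True)
    # Users in the top_k positions are flagged for visual distinction.
    return [{"user": u, "highlight": i < top_k} for i, u in enumerate(ranked)]

result = label_identifications(
    ["u1", "u2", "u3"],
    {"u1": 0.2, "u2": 0.9, "u3": 0.5},
    top_k=2,
)
```

Users absent from the affinity map default to zero affinity and sink to the bottom of the ranking.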
The receiving client device 110B renders 420 the video data from the online system 140 via the display device 115, allowing the sending client device 110A to provide the video data for rendering to the receiving client device 110B. Information from the online system 140 identifying the user of the online system 140 is presented by the receiving client device 110B in conjunction with the video data. Based on the presented video data, the receiving client device 110B receives 425 a selection of a user of interest from the viewing user and sends 430 information identifying the user of interest to the online system 140. In various embodiments, the viewing user may select an object of interest. For example, the viewing user selects information identifying a user of interest from information identifying users of the online system 140 presented in connection with the video data and the receiving client device 110B, and the receiving client device 110B sends 430 the information identifying the user of interest to the online system 140. The viewing user may select information identifying the user of interest from information describing the user included in the video data and presented by the receiving client device 110B along with the video data. Alternatively, the viewing user selects the portion of the presented video data that includes the person, and the receiving client device 110B identifies the selected portion of the presented video data to the online system 140, the online system 140 compares the content of the selected portion of the video data to stored images of the face or body associated with the user (e.g., images included in the user profile, images maintained by the online system 140 in which various users are identified), and identifies the user associated with one or more stored images that match the content of the selected portion of the video as the user of interest.
The online system 140 sends 435 the information identifying the user of interest to the sending client device 110A. Based on the information identifying the user of interest and characteristics of objects included in the video data captured 405 by the image capture device 117, the controller 210 modifies 440 the captured video data or parameters of the image capture device 117 of the sending client device 110A to more prominently present the user of interest. Such modification may occur after a notification describing how the captured video may be changed is displayed on the sending client device 110A for review and authorization by the captured user, and/or based on pre-authorization of such changes with reference to a list of pre-authorized persons controlling the receiving client device 110B (e.g., through privacy settings). The modification may include changing parameters of the image capture device 117 to increase the magnification of portions of the video data that include the user of interest, or removing portions of the video data that do not include the user of interest. In various embodiments, the controller 210 of the sending client device 110A applies one or more models, described further above, to the captured video data to modify 440 the captured video data to more prominently present the user of interest. The modified video data prominently presenting the user of interest is transmitted 445 from the sending client device 110A to the online system 140, the online system 140 transmits 450 the video data prominently presenting the user of interest to the receiving client device 110B, and the receiving client device 110B presents 455 the modified video data to the viewing user.
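Cropping the captured video to more prominently present the user of interest can be sketched as a padded crop around the user's bounding box, clamped to the frame. The coordinate convention and the `margin` parameter are illustrative assumptions, not details from the patent.

```python
def crop_to_user(frame_size, user_box, margin=0.25):
    """Compute a crop rectangle around the user-of-interest bounding box,
    padded by `margin` of the box size and clamped to the frame.
    Boxes are (x0, y0, x1, y1); frame_size is (width, height)."""
    fw, fh = frame_size
    x0, y0, x1, y1 = user_box
    pad_x = (x1 - x0) * margin
    pad_y = (y1 - y0) * margin
    return (max(0, x0 - pad_x), max(0, y0 - pad_y),
            min(fw, x1 + pad_x), min(fh, y1 + pad_y))
```

Rendering only this rectangle both magnifies the portion including the user of interest and removes portions that do not include that user.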
The controller 210 may apply one or more models to video data captured after receiving the information identifying the user of interest to modify one or more parameters of the image capture device 117 to follow the user of interest around the local area. For example, the controller 210 applies one or more facial recognition models to persons located in the captured video data to identify a face that matches the face of the user of interest (received from the online system 140 or identified from previously captured 405 video data based on information from the online system), and then repositions the focal point of the image capture device 117 to the person having the matching face. Alternatively, the controller 210 extracts a color map from the captured video data that includes the user of interest and repositions the focal point of the image capture device 117 so that the extracted color map remains included in the video data captured by the image capture device 117. In some embodiments, the controller 210 receives information identifying a user of interest and then modifies the image capture device 117 to track the identified person or object, allowing video data captured by the image capture device 117 to follow the movement of the identified person or object.
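A minimal sketch of color-based tracking, assuming a coarse RGB histogram as the extracted color signature and histogram intersection as the similarity measure — both hypothetical choices the patent does not specify:

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram used as a simple color signature.
    `pixels` is an iterable of (r, g, b) tuples in 0..255."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(pixels) or 1
    return [c / total for c in hist]

def histogram_similarity(h1, h2):
    """Histogram intersection in [0, 1]; higher means more similar."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_matching_region(reference_hist, regions):
    """Pick the (name, pixels) region whose color signature best matches the
    reference, i.e. where to reposition the focal point next frame."""
    return max(regions, key=lambda item: histogram_similarity(
        reference_hist, color_histogram(item[1])))
```

Each new frame, the controller would recompute candidate region signatures and move the focal point toward the best match, keeping the extracted color signature in view.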
In various embodiments, the viewing user provides additional instructions to the online system 140 that modify the video data presented by the display device 115 of the receiving client device 110B, and the online system 140 sends these instructions to the sending client device 110A. Based on the instructions from the online system 140, the controller 210 of the sending client device 110A modifies one or more parameters of the image capture device 117 or modifies video data captured 405 by the image capture device 117. For example, instructions provided by the viewing user via the online system 140 cause the controller 210 to configure the image capture device 117 of the sending client device 110A so that the image capture device 117 is repositioned as the user of interest moves within its field of view. As another example, via instructions provided through the online system 140, the viewing user identifies an object within video data captured 405 by the image capture device 117 of the sending client device 110A and presented via the receiving client device 110B. When the sending client device 110A receives the instructions from the online system 140, the controller 210 modifies the captured video data or modifies one or more parameters of the image capture device 117 so that the captured video data subsequently provided to the additional client device 110 includes the identified object. This allows the viewing user to provide instructions to the sending client device 110A to modify the video data captured by the image capture device 117 of the sending client device 110A and presented to the viewing user via the receiving client device 110B.
In various embodiments, the viewing user may provide instructions to the online system 140 for communication to the sending client device 110A to modify video data captured by the sending client device 110A without identifying a user of interest from the video data captured by the sending client device 110A and presented 420 by the receiving client device 110B. This allows the viewing user to change the video data captured 405 by the sending client device 110A.
Conclusion
The foregoing description of embodiments of the present disclosure has been presented for purposes of illustration; it is not intended to be exhaustive or to limit the patent claims to the precise form disclosed. One skilled in the relevant art will recognize that many modifications and variations are possible in light of the above disclosure.
Some portions of the present description describe embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Moreover, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.
Any of the steps, operations, or processes described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented with a computer program product comprising a computer readable medium embodying computer program code, which may be executed by a computer processor, for performing any or all of the steps, operations, or processes described.
Embodiments may also relate to an apparatus for performing the operations herein. The apparatus may be specially constructed for the required purposes, and/or it may comprise a general purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of medium suitable for storing electronic instructions, which may be coupled to a computer system bus. Moreover, any computing system referred to in the specification may include a single processor, or may be an architecture that employs a multi-processor design to increase computing power.
Embodiments may also relate to products produced by the computing processes described herein. Such products may include information produced by a computing process, where the information is stored on a non-transitory, tangible computer-readable medium and may include any embodiment of a computer program product or other combination of data described herein.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the patent rights. It is therefore intended that the scope of the patent claims be limited not by this detailed description, but rather by any claims issued on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent, which is set forth in the following claims.
Embodiments according to the invention are specifically disclosed in the accompanying claims directed to methods and computer program products, wherein any feature mentioned in one claim category (e.g. method) may also be claimed in another claim category (e.g. computer program product, system, storage medium). The dependencies or back-references in the appended claims are chosen for formal reasons only. However, any subject matter resulting from an intentional back-reference (especially multiple references) to any preceding claim may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed, irrespective of the dependencies chosen in the appended claims. The subject matter which can be claimed comprises not only the combination of features as set forth in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any embodiments and features described or depicted herein may be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or in any combination with any feature of the appended claims.
In an embodiment according to the invention, a computer-implemented method may include:
providing captured video data of a local area within a field of view of an image capture device included in the client device, the video data including one or more users of the online system;
applying one or more models maintained by the client device to characteristics of the captured video data;
locating one or more users of the online system included in the captured video data;
transmitting the video data to an online system;
receiving information from an online system identifying a user of interest within captured video data;
modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest; and
video data subsequently captured by the image capture device using the modified one or more parameters is transmitted to the online system.
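The client-side steps above might be organized as in the following sketch. The class, its parameter names, and the zoom/crop representation are hypothetical; real model and device interfaces would differ.

```python
class ClientCapturePipeline:
    """Illustrative sketch of the client-side flow: locate users in each
    frame, and once the online system identifies a user of interest,
    adjust image-capture parameters to present that user prominently."""

    def __init__(self, locate_users_model, image_params):
        self.locate_users_model = locate_users_model  # callable: frame -> [(user, box)]
        self.image_params = dict(image_params)        # e.g. {"zoom": 1.0, "crop": None}
        self.user_of_interest = None

    def set_user_of_interest(self, user):
        """Handle the online system's message identifying the user of interest."""
        self.user_of_interest = user

    def process_frame(self, frame):
        """Locate users; if the user of interest is present, crop/zoom to them."""
        located = self.locate_users_model(frame)
        for user, box in located:
            if user == self.user_of_interest:
                self.image_params["crop"] = box
                self.image_params["zoom"] = 2.0  # hypothetical fixed magnification
        return located
```

Subsequent frames captured with the modified `image_params` would then be transmitted to the online system.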
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
increasing a magnification of a portion of video data subsequently captured by the image capture device that includes the user of interest.
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
video data subsequently captured by the image capture device is modified to remove one or more portions of the video data subsequently captured by the image capture device that do not include the user of interest.
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
identifying a user of interest in video data captured by an image capture device; and
the focus of the image capture device is repositioned to the user of interest.
Identifying a user of interest in video data captured by an image capture device may include:
one or more facial recognition models are applied to identify a user of interest in video data captured by an image capture device.
Identifying a user of interest in video data captured by an image capture device may include:
a color map is extracted from a portion of the captured video data that includes the user of interest.
In an embodiment according to the invention, a computer-implemented method may include:
receiving instructions from the online system to modify additional video data captured by the image capture device; and
one or more parameters of the image capture device are modified in response to the received instructions.
Modifying one or more parameters of the image capture device in response to the received instructions may include:
repositioning the image capture device as the user of interest moves within the field of view of the image capture device.
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
applying one or more models to video data subsequently captured by the image capture device, the one or more models identifying movement of a user of interest; and
modifying a field of view of the image capture device based on the identified movement of the user of interest.
In an embodiment according to the invention, a computer-implemented method may include:
receiving, at an online system, video data of a local area within a field of view of an image capture device included in a client device, the video data including one or more users of the online system;
retrieving information identifying one or more users of an online system included in the video data;
identifying a viewing user of the online system;
transmitting the video data and information identifying one or more users of the online system included in the video data to an additional client device associated with the viewing user;
receiving, from an additional client device, information identifying an interested user of the one or more users of the online system included in the video data;
sending information identifying a user of interest to a client device;
receiving additional video data captured by an image capture device of a client device, the additional video data including a portion including a user of interest that has been cropped or scaled to more prominently display the user of interest; and
additional video data is sent to additional client devices.
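The online system's relay role in the steps above can be sketched with an outbox standing in for the network. All names and message shapes here are illustrative assumptions.

```python
class OnlineSystemRelay:
    """Sketch of the online system relaying video, identifying information,
    and user-of-interest selections between the two client devices."""

    def __init__(self, identify_users):
        self.identify_users = identify_users  # callable: video -> list of user ids
        self.outbox = []                      # (recipient, payload) pairs

    def handle_video(self, video, viewing_user, additional_client):
        # In a real system, viewing_user would determine additional_client;
        # both are passed explicitly to keep the sketch self-contained.
        users = self.identify_users(video)
        self.outbox.append((additional_client, {"video": video, "users": users}))
        return users

    def handle_interest(self, user_of_interest, sending_client):
        self.outbox.append((sending_client, {"user_of_interest": user_of_interest}))

    def handle_modified_video(self, video, additional_client):
        self.outbox.append((additional_client, {"video": video}))
```

The three handlers mirror the receive/send pairs in the method: video plus identifying information out to the viewing user's device, the selected user of interest back to the capturing device, and the modified video out again.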
Retrieving information identifying one or more users of an online system included in the video data may include:
identifying users included in the video data who are connected to the viewing user via the online system; and
information identifying the users included in the video data who are connected to the viewing user via the online system is retrieved.
Retrieving information identifying one or more users of an online system included in the video data may include:
identifying users included in the video data who are connected to the viewing user via the online system; and
information identifying users of the online system included in the video data is generated that visually distinguishes users included in the video data who are connected to the viewing user via the online system from users included in the video data who are not.
Retrieving information identifying one or more users of an online system included in the video data may include:
determining an affinity of the viewing user for each user of the online system included in the video data; and
information identifying each user of the online system included in the video data is generated based on the affinity of the viewing user for each user of the online system included in the video data.
Generating information identifying each user of the online system included in the video data based on the affinity of the viewing user for each user of the online system included in the video data may include:
information is generated that identifies each user of the online system for which the viewing user has at least a threshold affinity, included in the video data.
Generating information identifying each user of the online system included in the video data based on the affinity of the viewing user for each user of the online system included in the video data may include:
ranking users of the online system included in the video data based on the affinity of the viewing user for the users of the online system included in the video data; and
information is generated identifying each user of the online system included in the video data having at least a threshold position in the ranking.
In an embodiment according to the invention, a computer-implemented method may include:
receiving, at the online system, an instruction from an additional client device to modify video data captured by an image capture device of the client device;
sending an instruction to the client device;
receiving, from the client device, additional video data captured by the image capture device based on the instruction; and
additional video data is transmitted from the online system to the additional client device.
In an embodiment according to the invention, a computer program product comprising a computer readable storage medium having instructions encoded thereon, which when executed by a processor, cause the processor to:
providing captured video data of a local area within a field of view of an image capture device included in the client device, the video data including one or more users of the online system;
applying one or more models maintained by the client device to characteristics of the captured video data;
locating one or more users of the online system included in the captured video data;
transmitting the video data to an online system;
receiving information from the online system identifying a user of interest within the captured video data;
modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest; and
video data subsequently captured by the image capture device using the modified one or more parameters is transmitted to the online system.
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
video data subsequently captured by the image capture device is modified to remove one or more portions of the video data subsequently captured by the image capture device that do not include the user of interest.
Modifying one or more parameters of the image capture device to modify video data subsequently captured by the image capture device, cropping or scaling portions of the video data that include the user of interest to more prominently include the user of interest, may include:
identifying a user of interest in video data captured by an image capture device; and
the focus of the image capture device is repositioned to the user of interest.
In embodiments according to the invention, one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform a method according to the invention or any of the above-mentioned embodiments.
In an embodiment according to the invention, a system may comprise: one or more processors; and at least one memory coupled to the processor and comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to the invention or any of the above mentioned embodiments.
In an embodiment according to the invention, a computer program product, preferably comprising a computer-readable non-transitory storage medium, may be operable when executed on a data processing system to perform a method according to the invention or any of the above-mentioned embodiments.
In an embodiment according to the invention, a method, in particular a computer-implemented method, may comprise:
maintaining, at a client device, one or more models, each model applying one or more rules to determine which part of a body of a user of an online system is located in video data captured by an image capture device of the client device based on characteristics of the video data captured by the image capture device;
providing captured video data of a local area within a field of view of an image capture device included in a client device;
locating a user included in the captured video data;
applying the one or more maintained models to the captured video data;
modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models; and
the modified video data is sent to an online system.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
identifying a joint of a body of a user; and
video data captured by the image capture device is modified to include joints corresponding to different portions of the user's body based on one or more rules included in the one or more models.
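Joint-based cropping might be sketched as computing a bounding region over the joints of the requested body parts. The joint names follow common pose-estimation conventions and are assumptions for illustration, not terms from the claims.

```python
# Hypothetical joint groupings for body parts a rule might select.
HEAD_JOINTS = {"nose", "left_eye", "right_eye"}
TORSO_JOINTS = {"left_shoulder", "right_shoulder", "left_hip", "right_hip"}

def region_for_parts(joints, part_joints):
    """Bounding box covering the joints of the requested body parts.
    `joints` maps joint name -> (x, y); `part_joints` is a set of names."""
    pts = [xy for name, xy in joints.items() if name in part_joints]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```

A rule selecting "head and torso" would pass the union of the two joint sets and crop the video to the resulting region.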
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
based on application of one or more rules included in the one or more models, modifying the video data to include one or more selected from the group consisting of: a user's head, a user's torso, a user's entire body, and any combination thereof.
The one or more rules may be based on one or more selected from the group consisting of: content of video data previously captured by the image capture device, movement of a user in video data previously captured by the image capture device, and any combination thereof.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
determining from application of one or more models that a user is gesturing towards an object in a local area; and
the magnification of the image capture device is reduced such that the additional video data captured by the image capture device includes the user and the object toward which the user is gesturing.
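Reducing magnification until both the user and the pointed-at object fit the field of view can be sketched geometrically. The assumption that the effective field of view scales inversely with zoom is a simplification for illustration.

```python
def zoom_to_include(user_box, object_box, frame_size, current_zoom):
    """Reduce magnification until the user box and the object box both fit
    the effective field of view. Boxes are (x0, y0, x1, y1)."""
    fw, fh = frame_size
    x0 = min(user_box[0], object_box[0]); y0 = min(user_box[1], object_box[1])
    x1 = max(user_box[2], object_box[2]); y1 = max(user_box[3], object_box[3])
    needed = max((x1 - x0) / fw, (y1 - y0) / fh, 1e-9)
    # Field of view scales inversely with zoom, so cap zoom at 1/needed;
    # never increase magnification here, only reduce it.
    return min(current_zoom, 1.0 / needed)
```

If the user and object already fit, the current magnification is kept; otherwise it is lowered just enough to include both.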
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
determining from application of one or more models that a user is holding an object in a local area; and
changing the focal point of the image capture device to the object and increasing the magnification of the image capture device.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
one or more parameters of the image capture device that affect a field of view of the image capture device are modified based on application of the one or more models.
The parameter of the image capturing device may comprise a magnification of the image capturing device.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
video data captured by an image capture device is modified to modify a magnification of a portion of the captured video data that includes the identified user.
Locating the user included in the captured video data may include:
receiving information identifying a user from an online system; and
based on the received information, a portion of the video data captured by the image capture device that includes the identified user is identified.
In an embodiment according to the invention, a computer program product comprising a computer readable storage medium having instructions encoded thereon, which when executed by a processor, cause the processor to:
maintaining, at the client device, one or more models, each model applying one or more rules to determine which part of the user's body is located in video data captured by an image capture device of the client device based on characteristics of the video data captured by the image capture device;
providing captured video data of a local area within a field of view of an image capture device included in the client device;
locating a user included in the captured video data;
applying the one or more maintained models to the captured video data;
based on application of the one or more models, modifying video data captured by the image capture device to crop or scale portions of the video data that include one or more portions of the user; and
the modified video data is sent to an online system.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
identifying a joint of a body of a user; and
the video data captured by the image capture device is modified to include joints corresponding to different portions of the user's body based on one or more rules included in the one or more models.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
based on application of one or more rules included in the one or more models, modifying the video data to include one or more selected from the group consisting of: a user's head, a user's torso, a user's entire body, and any combination thereof.
The one or more rules may be based on one or more selected from the group consisting of: the content of the video data previously captured by the image capture device, the movement of the user in the video data previously captured by the image capture device, and any combination thereof.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
determining from application of one or more models that a user is gesturing towards an object in a local area; and
the magnification of the image capture device is reduced such that the additional video data captured by the image capture device includes the user and the object toward which the user is gesturing.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
determining from application of one or more models that a user is holding an object in a local area; and
changing the focal point of the image capture device to the object and increasing the magnification of the image capture device.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
one or more parameters of the image capture device that affect a field of view of the image capture device are modified based on application of the one or more models.
The parameter of the image capturing device may comprise a magnification of the image capturing device.
Modifying video data captured by an image capture device to crop or scale portions of the video data that include one or more portions of a user based on application of one or more models may include:
video data captured by an image capture device is modified to modify a magnification of a portion of the captured video data that includes a user.
Locating the user included in the captured video data may include:
receiving information identifying a user from an online system; and
based on the received information, a portion of the video data captured by the image capture device that includes the identified user is identified.
In embodiments according to the invention, one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform a method according to the invention or any of the above-mentioned embodiments.
In an embodiment according to the invention, a system may comprise: one or more processors; and at least one memory coupled to the processor and comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to the invention or any of the above mentioned embodiments.
In an embodiment according to the invention, a computer program product, preferably comprising a computer-readable non-transitory storage medium, may be operable when executed on a data processing system to perform a method according to the invention or any of the above-mentioned embodiments.
In an embodiment according to the invention, a method, in particular a computer-implemented method, may comprise:
maintaining one or more models at the client device, each model applying one or more rules to locate users in video data captured by an image capture device of the client device and to determine one or more of the located users to be displayed in the captured video data;
providing captured video data of a local area within a field of view of an image capture device included in the client device;
locating a plurality of users in the captured video data;
applying the one or more maintained models to the captured video data;
modifying the captured video data to display the one or more located users based on application of the one or more maintained models; and
the modified video data is sent from the client device to the online system.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
modifying one or more parameters of the image capture device based on application of the one or more maintained models; and
additional video data of the local area is captured using the modified one or more parameters of the image capture device.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
determining, from application of the one or more maintained models, directions of gazes of located users within the captured video data; and
the focus of the image capture device is modified to a located user toward whom the gazes of at least a threshold number of located users are directed.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
determining distances between different users located within the captured video data; and
the focus of the image capture device is modified such that the additionally captured video data includes users within a threshold distance of each other.
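Grouping located users by pairwise distance, per the steps above, might look like the following sketch; the position representation and the threshold value are illustrative assumptions.

```python
import math

def users_within_threshold(positions, threshold):
    """Return the users within `threshold` distance of at least one other
    located user, so the focus can be modified to cover that group.
    `positions` maps user id -> (x, y) in the captured frame."""
    grouped = set()
    names = list(positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(positions[a], positions[b]) <= threshold:
                grouped.add(a)
                grouped.add(b)
    return grouped
```

Users outside the returned set (isolated in the local area) would fall outside the refocused capture.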
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
capturing audio data via an audio capture device included in a client device;
determining a source of audio data within the local area by applying the one or more maintained models to the captured audio data and the captured video data;
determining that the source of the audio data within the local area is a user; and
in response to determining that the source of the audio data within the local area is a user, modifying a focus of the image capture device to the source of the audio data.
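A minimal sketch of matching the audio source to a located user: compare the estimated bearing of the audio source against the bearing of each user located in the video, and refocus on the closest match within a tolerance. The bearing representation and all names here are hypothetical assumptions, not taken from the specification.

```python
def focus_on_speaker(user_bearings, audio_bearing, tolerance):
    # user_bearings: user id -> bearing (degrees) of a located user
    # relative to the camera; audio_bearing: estimated bearing of the
    # audio source. Returns the matching user, or None if no located
    # user is within `tolerance` degrees of the source.
    best, best_err = None, tolerance
    for user, bearing in user_bearings.items():
        # Smallest angular difference, handling wrap-around at 360.
        err = abs((bearing - audio_bearing + 180) % 360 - 180)
        if err <= best_err:
            best, best_err = user, err
    return best
```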
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
identifying movement of the located user from application of the one or more maintained models; and
modifying a field of view of the image capture device based on the identified movement of the located user.
Identifying movement of the located user from the application of the one or more maintained models may include:
recognizing that the located user is gesturing towards an object included in the local area.
Modifying the field of view of the image capture device may include:
reducing a magnification of the image capture device such that the additional video data captured by the image capture device includes the located user and the object toward which the located user is gesturing.
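Reducing magnification so both the gesturing user and the object fit can be modeled as widening the active crop rectangle. A non-limiting sketch (function name, margin, and box layout are hypothetical):

```python
def widen_to_include(crop, user_box, object_box, frame_size, margin=10):
    # Expand the current crop (equivalent to reducing magnification) so
    # it covers the gesturing user, the object gestured toward, and a
    # small margin, clamped to the full captured frame.
    boxes = [crop, user_box, object_box]
    x0 = max(0, min(b[0] for b in boxes) - margin)
    y0 = max(0, min(b[1] for b in boxes) - margin)
    x1 = min(frame_size[0], max(b[2] for b in boxes) + margin)
    y1 = min(frame_size[1], max(b[3] for b in boxes) + margin)
    return (x0, y0, x1, y1)
```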
Identifying movement of the located user from the application of the one or more maintained models may include:
determining, from application of the one or more maintained models, that the located user is holding an object in the local area.
Modifying the field of view of the image capture device may include:
changing a focus of the image capture device to the object and increasing a magnification of the image capture device.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
modifying the captured video data such that each located user is presented within the captured video data with at least a threshold size for at least a threshold amount of time.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
modifying the captured video data such that each located user is presented within the captured video data for at least a threshold amount of time.
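The threshold-time rule above implies a scheduling decision: track how long each located user has been presented and feature the user furthest below the threshold next. A non-limiting sketch (names are hypothetical):

```python
def next_featured_user(on_screen_seconds, threshold):
    # on_screen_seconds: user id -> accumulated presentation time.
    # Feature the most-behind user; None once every located user has
    # met the threshold presentation time.
    behind = {u: t for u, t in on_screen_seconds.items() if t < threshold}
    if not behind:
        return None
    return min(behind, key=behind.get)
```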
Modifying the captured video data to display the one or more located users based on application of the one or more maintained models may include:
identifying a face of the user within the captured video data from application of the one or more maintained models; and
modifying the captured video data to remove portions of the captured video data that do not include the user's face.
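Removing portions that do not include a face can be sketched as cropping each frame to the span of the detected face regions. The frame representation below (a list of pixel rows) and the function name are illustrative assumptions:

```python
def crop_to_faces(frame, face_boxes):
    # frame: list of pixel rows; face_boxes: (x0, y0, x1, y1) regions
    # (exclusive ends) reported by the maintained models. Rows and
    # columns outside the span of detected faces are removed.
    if not face_boxes:
        return frame
    x0 = min(b[0] for b in face_boxes)
    y0 = min(b[1] for b in face_boxes)
    x1 = max(b[2] for b in face_boxes)
    y1 = max(b[3] for b in face_boxes)
    return [row[x0:x1] for row in frame[y0:y1]]
```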
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
modifying the captured video data to include each located user.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
stabilizing the captured video data by applying one or more of the maintained models.
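One simple stabilization strategy consistent with this step: smooth the per-frame crop centers with a moving average so that small detection noise does not make the modified video jitter. This is a sketch of one plausible approach, not the specification's method; names are hypothetical.

```python
def stabilize_centers(centers, window=3):
    # centers: per-frame (x, y) crop centers. Returns centers smoothed
    # with a trailing moving average over up to `window` frames.
    smoothed = []
    for i in range(len(centers)):
        recent = centers[max(0, i - window + 1): i + 1]
        smoothed.append((sum(x for x, _ in recent) / len(recent),
                         sum(y for _, y in recent) / len(recent)))
    return smoothed
```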
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
comparing a located user within the captured video data against stored data describing users previously presented within the captured video data with at least a threshold size; and
in response to the comparison determining that the located user does not match a user previously presented within the captured video data with at least the threshold size, modifying the captured video data to prominently present the located user.
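The stored-data comparison amounts to a novelty check: keep a log of the largest size at which each user has been presented, and feature a located user prominently only if they have not yet appeared at the threshold size. A non-limiting sketch (names and the log structure are hypothetical):

```python
def should_feature(user_id, featured_log, threshold_size):
    # featured_log: user id -> largest size at which the user has been
    # presented so far. Feature the user only if below the threshold.
    return featured_log.get(user_id, 0) < threshold_size

def record_appearance(featured_log, user_id, size):
    # Remember the largest size at which this user has been presented.
    featured_log[user_id] = max(featured_log.get(user_id, 0), size)
```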
In an embodiment according to the invention, a computer program product comprising a computer readable storage medium having instructions encoded thereon, which when executed by a processor, cause the processor to:
maintaining one or more models at the client device, each model applying one or more rules to locate users in video data captured by an image capture device of the client device and determining from characteristics of the video data that one or more located users are displayed in the captured video data;
capturing video data of a local area within a field of view of an image capture device included in the client device;
locating a plurality of users in the captured video data;
applying the one or more maintained models to the captured video data;
modifying the captured video data to display the one or more located users based on application of the one or more maintained models; and
sending the modified video data from the client device to the online system.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
determining, from application of the one or more maintained models, a location of a gaze orientation of a located user within the captured video data; and
modifying a focus of the image capture device to a person toward whom the gazes of at least a threshold number of the located users are directed.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
determining distances between different users located within the captured video data; and
modifying a focus of the image capture device such that the additionally captured video data includes users within a threshold distance of each other.
Modifying the captured video data to display the one or more located users based on the application of the one or more maintained models may include:
capturing audio data via an audio capture device included in a client device;
determining a source of audio data within the local area by applying the one or more maintained models to the captured audio data and the captured video data;
determining that the source of the audio data within the local area is a located user; and
in response to determining that the source of the audio data within the local area is the located user, modifying a focus of the image capture device to the source of the audio data.
In embodiments according to the invention, one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform a method according to the invention or any of the above-mentioned embodiments.
In an embodiment according to the invention, a system may comprise: one or more processors; and at least one memory coupled to the processor and comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to the invention or any of the above mentioned embodiments.
In an embodiment according to the invention, a computer program product, preferably comprising a computer-readable non-transitory storage medium, may be operable when executed on a data processing system to perform a method according to the invention or any of the above-mentioned embodiments.

Claims (19)

1. A computer-implemented method, comprising:
providing captured first video data of a local area within a field of view of an image capture device included in a client device, the first video data comprising one or more users of an online social-networking system;
locating the one or more users of the online social-networking system included in the captured first video data by applying one or more models maintained by the client device to characteristics of the captured first video data in accordance with one or more privacy settings selected by the one or more users;
transmitting the first video data and the located one or more users from the client device to the online social-networking system;
receiving, via the online social networking system, information identifying a user of interest among the located one or more users within the captured first video data;
modifying one or more parameters of the image capture device in accordance with one or more privacy settings selected by the user of interest to modify second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest; and
sending the second video data subsequently captured by the image capture device using the modified one or more parameters from the client device to the online social-networking system.
2. The computer-implemented method of claim 1, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest comprises:
increasing a magnification of a portion of the second video data subsequently captured by the image capture device that includes the identified user of interest.
3. The computer-implemented method of claim 1 or claim 2, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest comprises:
modifying the second video data subsequently captured by the image capture device to remove one or more portions of the second video data subsequently captured by the image capture device that do not include the identified user of interest.
4. The computer-implemented method of any of claims 1 to 3, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest comprises:
identifying the user of interest in the second video data captured by the image capture device; and
repositioning a focus of the image capture device to the identified user of interest.
5. The computer-implemented method of claim 4, wherein identifying the user of interest in the second video data captured by the image capture device comprises:
applying one or more facial recognition models to identify the user of interest in the second video data captured by the image capture device.
6. The computer-implemented method of claim 4, wherein identifying the user of interest in the second video data captured by the image capture device comprises:
extracting a chromaticity diagram from a portion of the captured second video data including the user of interest.
7. The computer-implemented method of any of claims 1 to 6, further comprising:
receiving instructions from the online social networking system to modify additional video data captured by the image capture device; and
modifying one or more parameters of the image capture device in response to the received instruction.
8. The computer-implemented method of claim 7, wherein modifying one or more parameters of the image capture device in response to the received instructions comprises:
repositioning the image capture device as the identified user of interest moves within the field of view of the image capture device.
9. The computer-implemented method of any of claims 1 to 8, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest comprises:
applying one or more models to the second video data subsequently captured by the image capture device, the one or more models identifying the identified movement of the user of interest; and
modifying a field of view of the image capture device based on the identified movement of the identified user of interest.
10. A computer-implemented method, comprising:
receiving, at an online social-networking system, first video data of a local area within a field of view of an image-capture device included in a client device, the first video data comprising one or more users of the online social-networking system;
retrieving information identifying the one or more users of the online social-networking system included in the first video data according to one or more privacy settings selected by the one or more users;
identifying a viewing user of the online social networking system;
transmitting the first video data and information identifying the one or more users of the online social-networking system included in the first video data to an additional client device associated with the viewing user;
receiving, from the additional client device, information identifying an interested user of the one or more users of the online social-networking system included in the first video data;
transmitting information identifying the user of interest to the client device in accordance with one or more privacy settings selected by the user of interest;
receiving second video data captured by the image capture device of the client device, the second video data comprising a portion including the identified user of interest that has been cropped or scaled to more prominently display the identified user of interest; and
transmitting the second video data to the additional client device.
11. The computer-implemented method of claim 10, wherein retrieving information identifying the one or more users of the online social-networking system included in the first video data comprises:
identifying users of the online social networking system included in the first video data who are connected to the viewing user via the online social networking system; and
retrieving information identifying the users of the online social networking system included in the first video data who are connected to the viewing user via the online social networking system.
12. The computer-implemented method of claim 10 or claim 11, wherein retrieving information identifying the one or more users of the online social-networking system included in the first video data comprises:
identifying users of the online social networking system included in the first video data who are connected to the viewing user via the online social networking system; and
generating information identifying users of the online social networking system included in the first video data that visually distinguishes users included in the first video data who are connected to the viewing user via the online social networking system from users included in the first video data who are not connected to the viewing user via the online social networking system.
13. The computer-implemented method of any of claims 10 to 12, wherein retrieving information identifying the one or more users of the online social-networking system included in the first video data comprises:
determining an affinity of the viewing user for each user of the online social-networking system included in the first video data; and
generating information identifying each user of the online social-networking system included in the first video data based on the affinity of the viewing user for each user of the online social-networking system included in the first video data.
14. The computer-implemented method of claim 13, wherein generating information identifying each user of the online social-networking system included in the first video data based on the affinity of the viewing user for each user of the online social-networking system included in the first video data comprises:
generating information identifying each user of the online social-networking system for which the viewing user has at least a threshold affinity that is included in the first video data.
15. The computer-implemented method of claim 13, wherein generating information identifying each user of the online social-networking system included in the first video data based on the affinity of the viewing user for each user of the online social-networking system included in the first video data comprises:
ranking users of the online social-networking system included in the first video data based on the affinity of the viewing user for users of the online social-networking system included in the first video data; and
generating information identifying each user of the online social-networking system included in the first video data having at least a threshold position in the ranking.
16. The computer-implemented method of any of claims 10 to 15, further comprising:
receiving, at the online social networking system from the additional client device, instructions to modify video data captured by the image capture device of the client device;
sending the instruction to the client device;
receiving, from the client device, second video data captured by the image capture device based on the instruction; and
transmitting the second video data from the online social-networking system to the additional client device.
17. A computer readable storage medium having instructions encoded thereon, which when executed by a processor, cause the processor to:
providing captured first video data of a local area within a field of view of an image capture device included in a client device, the first video data comprising one or more users of an online social-networking system;
locating the one or more users of the online social-networking system included in the captured first video data by applying one or more models maintained by the client device to characteristics of the captured first video data in accordance with one or more privacy settings selected by the one or more users;
transmitting the first video data and the located one or more users from the client device to the online social-networking system;
receiving, via the online social networking system, information identifying a user of interest among the located one or more users within the captured first video data;
modifying one or more parameters of the image capture device in accordance with one or more privacy settings selected by the user of interest to modify second video data subsequently captured by the image capture device, cropping or scaling portions of the second video data that include the identified user of interest to more prominently include the identified user of interest; and
sending the second video data subsequently captured by the image capture device using the modified one or more parameters from the client device to the online social-networking system.
18. The computer-readable storage medium of claim 17, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device to crop or scale portions of the second video data that include the identified user of interest to more prominently include the identified user of interest comprises:
modifying the second video data subsequently captured by the image capture device to remove one or more portions of the second video data subsequently captured by the image capture device that do not include the identified user of interest.
19. The computer-readable storage medium of claim 17 or claim 18, wherein modifying one or more parameters of the image capture device to modify the second video data subsequently captured by the image capture device, cropping or scaling the portion of the second video data that includes the identified user of interest to more prominently include the identified user of interest comprises:
identifying the user of interest in the second video data captured by the image capture device; and
repositioning a focus of the image capture device to the identified user of interest.
CN201880071766.5A 2017-09-05 2018-09-05 Computer-implemented method and storage medium Active CN111316656B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310329426.0A CN116193175A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium
CN202310329439.8A CN116208791A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US201762554564P 2017-09-05 2017-09-05
US62/554,564 2017-09-05
US201715856108A 2017-12-28 2017-12-28
US201715856105A 2017-12-28 2017-12-28
US201715856109A 2017-12-28 2017-12-28
US15/856,105 2017-12-28
US15/856,109 2017-12-28
US15/856,108 2017-12-28
US16/121,060 2018-09-04
US16/121,060 US10805521B2 (en) 2017-09-05 2018-09-04 Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
US16/121,087 2018-09-04
US16/121,087 US10666857B2 (en) 2017-09-05 2018-09-04 Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
US16/121,081 US10868955B2 (en) 2017-09-05 2018-09-04 Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
US16/121,081 2018-09-04
PCT/US2018/049532 WO2019050938A1 (en) 2017-09-05 2018-09-05 Modifying capture of video data by an image capture device based on video data previously captured by the image capture device

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202310329439.8A Division CN116208791A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium
CN202310329426.0A Division CN116193175A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium

Publications (2)

Publication Number Publication Date
CN111316656A CN111316656A (en) 2020-06-19
CN111316656B true CN111316656B (en) 2023-03-28

Family

ID=65635185

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202310329426.0A Pending CN116193175A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium
CN201880071766.5A Active CN111316656B (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium
CN202310329439.8A Pending CN116208791A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310329426.0A Pending CN116193175A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310329439.8A Pending CN116208791A (en) 2017-09-05 2018-09-05 Computer-implemented method and storage medium

Country Status (5)

Country Link
EP (1) EP3679722A4 (en)
JP (3) JP7258857B2 (en)
KR (1) KR20200039814A (en)
CN (3) CN116193175A (en)
WO (1) WO2019050938A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2884438B1 (en) * 2005-04-19 2007-08-03 Commissariat Energie Atomique PROCESS FOR EXTRACTING AT LEAST ONE COMPOUND OF A LIQUID PHASE COMPRISING A FUNCTIONALIZED IONIC LIQUID, AND A MICROFLUIDIC SYSTEM FOR CARRYING OUT SAID METHOD
KR20220107860A (en) * 2021-01-26 2022-08-02 삼성전자주식회사 Electronic device performing screen captures and method thereof
WO2023087215A1 (en) * 2021-11-18 2023-05-25 Citrix Systems, Inc. Online meeting non-participant detection and remediation
KR20230083101A (en) * 2021-12-02 2023-06-09 삼성전자주식회사 Electronic device and method for editing content being played on display device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101606039B (en) * 2007-01-08 2012-05-30 微软公司 Dynamic map rendering as function of user parameter

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510283B2 (en) * 2006-07-31 2013-08-13 Ricoh Co., Ltd. Automatic adaption of an image recognition system to image capture devices
KR100703699B1 (en) 2005-02-05 2007-04-05 삼성전자주식회사 Apparatus and method for providing multilateral video communication
US8085302B2 (en) 2005-11-21 2011-12-27 Microsoft Corporation Combined digital and mechanical tracking of a person or object using a single video camera
US20070198632A1 (en) * 2006-02-03 2007-08-23 Microsoft Corporation Transferring multimedia from a connected capture device
US8284990B2 (en) * 2008-05-21 2012-10-09 Honeywell International Inc. Social network construction based on data association
JP4569670B2 (en) * 2008-06-11 2010-10-27 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5495855B2 (en) 2010-03-01 2014-05-21 キヤノン株式会社 Video processing apparatus and video processing method
CN102196087A (en) * 2010-03-12 2011-09-21 中兴通讯股份有限公司 Lens control method and terminals
US8626847B2 (en) * 2010-04-30 2014-01-07 American Teleconferencing Services, Ltd. Transferring a conference session between client devices
US20120204225A1 (en) * 2011-02-08 2012-08-09 Activepath Ltd. Online authentication using audio, image and/or video
US20120233076A1 (en) * 2011-03-08 2012-09-13 Microsoft Corporation Redeeming offers of digital content items
KR20140099111A (en) * 2013-02-01 2014-08-11 삼성전자주식회사 Method for control a camera apparatus and the camera apparatus
US9558555B2 (en) * 2013-02-22 2017-01-31 Leap Motion, Inc. Adjusting motion capture based on the distance between tracked objects
US9923979B2 (en) * 2013-06-27 2018-03-20 Google Llc Systems and methods of determining a geographic location based conversion
JP6429454B2 (en) * 2013-11-28 2018-11-28 キヤノン株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND IMAGING DEVICE CONTROL PROGRAM
JP2015115741A (en) 2013-12-11 2015-06-22 キヤノンマーケティングジャパン株式会社 Image management device, image management method, and program
US10235587B2 (en) * 2014-03-04 2019-03-19 Samsung Electronics Co., Ltd. Method and system for optimizing an image capturing boundary in a proposed image
US9576343B2 (en) * 2014-11-10 2017-02-21 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for a content-adaptive photo-enhancement recommender
JP6486656B2 (en) * 2014-11-11 2019-03-20 オリンパス株式会社 Imaging device
US10244175B2 (en) * 2015-03-09 2019-03-26 Apple Inc. Automatic cropping of video content
US11750674B2 (en) * 2015-05-05 2023-09-05 Penguin Computing, Inc. Ultra-low latency remote application access
US9860451B2 (en) * 2015-06-07 2018-01-02 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US20160364103A1 (en) * 2015-06-11 2016-12-15 Yaron Galant Method and apparatus for using gestures during video playback
US9621795B1 (en) * 2016-01-08 2017-04-11 Microsoft Technology Licensing, Llc Active speaker location detection
JP6241802B1 (en) 2017-01-20 2017-12-06 パナソニックIpマネジメント株式会社 Video distribution system, user terminal device, and video distribution method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A location recommendation algorithm based on location-based social networks; Yuan Shizhi et al.; Application Research of Computers (《计算机应用研究》); 2015-09-29; Vol. 33, No. 07; pp. 2003-2006 *

Also Published As

Publication number Publication date
CN111316656A (en) 2020-06-19
JP7258857B2 (en) 2023-04-17
KR20200039814A (en) 2020-04-16
EP3679722A1 (en) 2020-07-15
JP2023098931A (en) 2023-07-11
WO2019050938A1 (en) 2019-03-14
CN116193175A (en) 2023-05-30
JP2020532903A (en) 2020-11-12
JP2023098930A (en) 2023-07-11
EP3679722A4 (en) 2020-07-15
CN116208791A (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US10971158B1 (en) Designating assistants in multi-assistant environment based on identified wake word received from a user
US11558543B2 (en) Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
US10805521B2 (en) Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
CN111316656B (en) Computer-implemented method and storage medium
US11166065B1 (en) Synchronizing presentation of content presented by multiple client devices
US10873697B1 (en) Identifying regions of interest in captured video data objects by detecting movement within higher resolution frames of the regions
US11418827B2 (en) Generating a feed of content for presentation by a client device to users identified in video data captured by the client device
CN109792557B (en) Method for processing video data and storage medium
US10666857B2 (en) Modifying capture of video data by an image capture device based on video data previously captured by the image capture device
US10757347B1 (en) Modifying display of an overlay on video data based on locations of regions of interest within the video data
CN112806021B (en) Modifying presentation of video data by a receiving client device based on analysis of the video data by another client device that captured the video data
US20150150032A1 (en) Computer ecosystem with automatic "like" tagging
US10721394B1 (en) Gesture activation for an image capture device
CN112806020A (en) Modifying capture of video data by an image capture device based on identifying an object of interest in the captured video data to the image capture device
US11381533B1 (en) Intelligent determination of whether to initiate a communication session for a user based on proximity to client device
US10812616B2 (en) Transferring an exchange of content to a receiving client device from a client device authorized to transfer a content exchange to the receiving client device
US11444943B1 (en) Exchange content between client devices when a client device determines a user is within a field of view of an image capture device of the client device and authorized to exchange content
US11100330B1 (en) Presenting messages to a user when a client device determines the user is within a field of view of an image capture device of the client device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: California, USA

Applicant after: Meta Platforms, Inc.

Address before: California, USA

Applicant before: Facebook, Inc.

GR01 Patent grant