Detailed Description
Please refer to FIG. 1, which is a diagram illustrating an embodiment of utilizing a social networking service in a network system. FIG. 1 illustrates that media-based content sharing may be performed in accordance with various embodiments through a social networking service. As shown in FIG. 1, the network system includes a social network server 10 and terminal devices (21 to 23) that communicate with the social network server 10. For example, the social network server 10 includes one or more computing devices and is used to provide a social networking service. Each terminal device is, for example, a computing device such as a smart phone, a tablet computer, a notebook computer, or a desktop computer, so that users can perform media-based content sharing. For example, a user may communicate with the social network server 10 via the terminal device 21 in a wired or wireless manner to share media-based content (e.g., content including an image CM) to a chat room of the social networking service, where the terminal devices 22 and 23 of two other users may view the media-based content. The social networking service is, for example, LINE, Instagram, YouTube, Facebook, or WeChat, wherein a community, a group, or a service with members for exchanging information can be regarded as a chat room.
Before enumerating embodiments for facilitating media-based content sharing, reference is made to FIG. 2, which is a schematic block diagram of an embodiment of a computing device. The architecture of the computing device of FIG. 2 may be used to implement the aforementioned terminal devices or server. As shown in FIG. 2, the computing device 200 includes a storage unit 210, a processing unit 220, and a communication unit 230. The storage unit 210 is electrically coupled to or communicatively connected to the processing unit 220. The storage unit 210 may be, for example, a storage device, a database, network storage, or a combination thereof, may be one or more in number, and may be used to store data or program instructions. The communication unit 230 is electrically coupled to the processing unit 220 and may be in signal connection with a network or other computing devices in a wired or wireless manner. The processing unit 220, such as a microprocessor, a single-chip device, or a microcontroller, is used to process data or execute program instructions.
In one embodiment, a server (e.g., the social network server 10) may be implemented based on the computing device 200. For example, the social network server 10 may be implemented with one or more computing devices 200 to provide a social networking service. The processing unit 220 may be operative to execute program instructions of the social networking service to provide the service. The storage unit 210 may be used to store program instructions or data, such as data of users in the social networking service, as well as a database established by the social network server 10 to store data related to the users' media-based content sharing behavior. The communication unit 230 may be configured to establish a communication link with at least one of a broadband network, an optical fiber network, a wireless local area network, and a mobile communication network, whereby the processing unit 220 may communicate with one or more terminal devices through the communication unit 230 to provide the social networking service. However, the implementation of the present invention is not limited by the above examples.
In another embodiment, a terminal device (e.g., any of 21 to 23) may be implemented based on the computing device 200. For example, the computing device 200 may further include a display unit 240, a sound unit (e.g., including a microphone or a sound processor), or other units or circuits to implement a terminal device. The terminal devices (21 to 23) may install a client program of the social networking service or use the social networking service with a browser. The processing unit 220 may be used to execute program instructions of the client program or browser to utilize the social networking service. The storage unit 210 may be used to store program instructions or data, such as data generated by the user when using the social networking service; for example, a database may also be created in the terminal device to store data related to the user's media-based content sharing behavior. The communication unit 230 may be configured to establish a communication connection with at least one of a broadband network, an optical fiber network, a wireless local area network, and a mobile communication network. Thus, the processing unit 220 may communicate with the social network server 10 through the communication unit 230 to use the social networking service. The display unit 240, such as a flat panel display or a touch screen, is used to provide an interactive display interface.
Referring to FIG. 3A, a flowchart of an embodiment of a method for facilitating media-based content sharing is shown. As shown in FIG. 3A, the method includes the following steps. In step S10, for user media content, sharing behavior data regarding media-based content sharing behavior in a social networking service is obtained from a database, wherein the database indicates relationships between a plurality of attributes, including target chat room information and content features, involved in the media-based content sharing behavior. In step S20, association information for associating the user media content with one or more corresponding chat rooms in the social networking service is generated based on the sharing behavior data. In step S30, according to the association information, a specific chat room among the one or more corresponding chat rooms is recommended as a target chat room for the user media content to be shared, or specific media content in a media content set is recommended as media content to be shared in one of the one or more corresponding chat rooms, wherein the media content set includes the user media content.
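The three steps above can be sketched as a minimal Python example. This is an illustrative sketch only, not part of the claimed embodiments: the function names, the toy database, and the feature labels are all hypothetical, and the "association information" here is simply a count-based ranking.

```python
# Hypothetical toy database of sharing behavior:
# content features -> {target chat room ID: number of previous shares}
SHARING_DB = {("child_face", "smile"): {"AA": 10, "BB": 7, "CC": 3}}

def get_sharing_behavior_data(features):
    """Step S10: query the database for target chat room info tied to the content features."""
    return SHARING_DB.get(tuple(features), {})

def generate_association_info(behavior_data):
    """Step S20: associate the content with chat rooms, ranked by share count (descending)."""
    return sorted(behavior_data.items(), key=lambda kv: kv[1], reverse=True)

def recommend_chat_rooms(association_info, top_n=3):
    """Step S30: recommend the top-N chat rooms as targets for the content to be shared."""
    return [room for room, _count in association_info[:top_n]]

data = get_sharing_behavior_data(["child_face", "smile"])
print(recommend_chat_rooms(generate_association_info(data)))  # ['AA', 'BB', 'CC']
```

In practice the recommendation could of course use a richer model than raw counts; the sketch only shows how the three steps chain together.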
The embodiment of FIG. 3A may be implemented in a terminal device or a server. For example, to facilitate media-based content sharing and reduce the burden on the user, the terminal device or server may automatically generate association information between the user's media content and chat rooms in the social networking service from a database regarding the user's sharing behavior, through steps S10 and S20 of FIG. 3A. The association information may be used to predict the target chat room that the user may designate when the user wants to share designated user media content, or to predict the user media content that the user may share when the user wants to designate a target chat room. When the user performs a media-based content sharing action on the terminal device, the terminal device or server performs sharing behavior prediction according to the association information and outputs recommended items using the prediction result, through step S30 of FIG. 3A, so that the user interface of the terminal device presents the recommended items. The recommended items may be user media content or chat rooms, and may further include text or other elements.
Some or all of the steps in the method of FIG. 3A may be implemented using either the social network server 10 or a terminal device. The following lists some embodiments illustrating the manner in which the method of FIG. 3A may be implemented in different application scenarios. The embodiments herein may be combined in any suitable manner.
In the following embodiment, the terminal device 21 utilizes the method of FIG. 3A to facilitate media-based content sharing. Please refer to FIG. 3B, which is a schematic timing diagram of an embodiment of the method of FIG. 3A. Assume that a user takes an image (a still image such as a photograph, or a moving image such as a video) on the terminal device 21, generates a new image from an existing image, or downloads or stores an image, and accordingly prepares to perform media-based content sharing. In FIG. 3B, as indicated by arrow a110, the user makes a request for content sharing on the terminal device 21, for example, by selecting an interface component representing sharing on the user interface of the terminal device 21, such as selecting a sharing item or pressing a sharing button. As indicated by arrow a200, the terminal device 21 transmits a query request concerning the user's sharing behavior to the social network server 10 in response to the user's request. As indicated by arrow a210, the terminal device 21 acquires the requested sharing behavior data from the database of the social network server 10 regarding the user's sharing behavior, according to step S10 of FIG. 3A. As indicated by arrow a220, the terminal device 21 generates association information for associating user media content with one or more corresponding chat rooms in the social networking service based on the sharing behavior data, according to step S20 of FIG. 3A. As indicated by arrow a230, the terminal device 21 outputs recommended items based on the association information according to step S30 of FIG. 3A, thereby causing the user interface of the terminal device 21 to present the recommended items. As indicated by arrow a120, the user selects one or more of the recommended items.
As indicated by arrow a240, the user interface of the terminal device 21 displays a user interface component, such as a confirm button, for confirming whether to transmit. As indicated by arrow a130, the user makes a determination request on the terminal device 21, such as by pressing the confirm button on the user interface. As indicated by arrow a250, the terminal device 21 outputs a sharing request to the social network server 10 according to the selected recommended item, in response to the determination request. As indicated by arrow a310, the social network server 10 transmits the user media content to a terminal device corresponding to a member in the target chat room, such as the terminal device 22, in accordance with the sharing request.
In another embodiment based on FIG. 3B, the database about the user's sharing behavior may also be implemented on the terminal device 21. In this embodiment, the terminal device 21 can acquire the requested sharing behavior data from its own database regarding the user's sharing behavior according to step S10 of FIG. 3A, so it is not necessary to send a query request regarding the user's sharing behavior to the social network server 10.
In the following embodiment, the social network server 10 utilizes the method of FIG. 3A to facilitate media-based content sharing. Please refer to FIG. 3C, which is a schematic timing diagram of an embodiment of the method of FIG. 3A. Suppose the user is likewise preparing for media-based content sharing. In FIG. 3C, as indicated by arrow a110, the user makes a request for content sharing on the terminal device 21, for example, by selecting an interface component representing sharing on the user interface of the terminal device 21, such as selecting a sharing item or pressing a sharing button. As indicated by arrow a201, the terminal device 21 transmits a query request concerning the user's sharing behavior to the social network server 10 in response to the user's request. As indicated by arrow a410, the social network server 10 retrieves the requested sharing behavior data from its database regarding the user's sharing behavior, according to step S10 of FIG. 3A. As indicated by arrow a420, the social network server 10 generates association information for associating user media content with one or more corresponding chat rooms in the social networking service based on the sharing behavior data, according to step S20 of FIG. 3A. As indicated by arrows a430 and a440, the social network server 10 outputs recommended items to the terminal device 21 according to the association information to perform the recommendation action of step S30 of FIG. 3A, thereby causing the user interface of the terminal device 21 to present the recommended items, as indicated by arrow a510. As indicated by arrow a120, the user selects one or more of the recommended items on the user interface of the terminal device 21. As indicated by arrow a520, the user interface of the terminal device 21 displays a user interface component, such as a confirm button, for confirming whether to transmit.
As indicated by arrow a130, the user performs a determination operation on the terminal device 21 to generate a determination request. As indicated by arrow a530, the terminal device 21 outputs a sharing request to the social network server 10 according to the selected recommended item, in response to the determination request. As indicated by arrow a450, the social network server 10 transmits the user media content to a terminal device corresponding to a member in the target chat room, such as the terminal device 22, in accordance with the sharing request.
As shown in the embodiments based on FIG. 3A, because the recommended items are presented, the user can quickly select the target chat room for the user media content to be shared, or the user media content to be shared, while performing a media-based content sharing action on the terminal device, thereby improving content sharing efficiency and reducing the user's burden. Thus, the method of FIG. 3A may be implemented at a terminal device or server to efficiently utilize the network and computing resources of a social networking service to facilitate media-based content sharing performed in a computing device.
Please refer to FIG. 4, which is a flowchart of another embodiment of the method of FIG. 3A. The method of FIG. 3A may further comprise the steps shown in FIG. 4. As shown in step S40, a subsequent content sharing action is confirmed after step S30. As shown in step S50, the database is caused to be updated in accordance with the subsequent content sharing action.
An example of step S40 is as follows. For example, in the embodiment of FIG. 3B, the terminal device 21 may be implemented such that, after the terminal device 21 receives a selection operation by the user (as indicated by arrow a120) or a determination operation by the user (as indicated by arrow a130), the terminal device 21 confirms that the user has performed a subsequent content sharing action. For example, in the embodiment of FIG. 3C, the social network server 10 may be implemented such that, after the social network server 10 receives the sharing request output by the terminal device 21 (as indicated by arrow a530), the social network server 10 confirms that the user has performed a subsequent content sharing action.
An example of step S50 is as follows. For example, in the embodiment of FIG. 3B, the terminal device 21 outputs a sharing request to the social network server 10 according to the selected recommended item in response to the determination request (as indicated by arrow a250), and the social network server 10 updates the database after receiving the sharing request, so that the database is updated according to the subsequent content sharing behavior. For example, in the embodiment of FIG. 3C, the social network server 10 may be implemented such that, after the social network server 10 receives the sharing request output by the terminal device 21 (as indicated by arrow a530), the social network server 10 confirms that the user has performed a subsequent content sharing action and updates the database accordingly.
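The update of step S50 can be sketched as follows. This is a hypothetical illustration only: the dictionary layout and the helper name `record_share` are assumptions, chosen to match the count-per-chat-room examples used elsewhere in this description.

```python
def record_share(db, features, target_room):
    """Step S50 (sketch): after a confirmed share, increment the share count
    for each content feature / target chat room pair in the database."""
    for feature in features:
        rooms = db.setdefault(feature, {})
        rooms[target_room] = rooms.get(target_room, 0) + 1

# Toy database keyed by content feature; values map chat room ID -> share count.
db = {"child_smile": {"AA": 10}}
record_share(db, ["child_smile"], "AA")
record_share(db, ["child_smile"], "BB")
print(db)  # {'child_smile': {'AA': 11, 'BB': 1}}
```

Each confirmed sharing action thus nudges the counts that later drive the ranking in step S20, which is how the database comes to reflect the user's latest behavior.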
The embodiment of FIG. 4 enables the database about the user's sharing behavior to reflect the user's latest content sharing behavior, so that recommended items subsequently predicted from the association information and output using the prediction result reflect the tendency of the user's latest content sharing behavior. Thus, when the user performs another media-based content sharing action on the terminal device, the terminal device or server can again facilitate media-based content sharing by effectively utilizing the network and computing resources of the social networking service through steps S10 to S30 of FIG. 3A.
Some embodiments based on the steps of the method of FIG. 3A are described below. The embodiments herein may be combined in any suitable manner.
Step S10 of FIG. 3A obtains, for user media content, data from a database regarding media-based content sharing behavior in a social networking service. The database is a database of the user's sharing behavior, indicating relationships between a plurality of attributes, including target chat room information and content features, involved in the media-based content sharing behavior. For example, a user A shares photos of a newborn son in a chat room among the user's family members in the social networking service (e.g., a chat room named "A's home"). The target chat room information of this media-based content sharing behavior (hereinafter referred to as content sharing behavior) is "A's home", an identification code (ID) of the chat room, or any data that can represent the chat room. The content features of this content sharing behavior may be one or more facial features, such as a child's face or smile, and may be assigned values to characterize the degree of difference in feature points or smiles of facial features across different user media content, for subsequent processing or calculation. The content features may include, for example, data representing the age of a face, data representing the degree of a smile, or data representing the size or proportion of a face's length and width. For example, a value may represent the degree of a smile, with 0 representing no smile, 100 representing laughter, and values between 0 and 100 representing intermediate degrees, such as 20 representing a slight smile. However, implementation of the present invention is not limited by these examples; the kinds and values of the content features may be set as needed.
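The content features described above can be encoded as a small record, sketched below. The field names, the dataclass, and the smile threshold are illustrative assumptions; only the 0-to-100 smile scale comes from the example in the text.

```python
from dataclasses import dataclass

@dataclass
class FaceFeature:
    """Hypothetical encoding of one face's content features."""
    estimated_age: int   # data representing the age of the face
    smile_degree: int    # 0 = no smile, 100 = laughter, e.g. 20 = slight smile
    face_ratio: float    # proportion of the face's length and width

def is_smiling(feature, threshold=10):
    """Treat any smile_degree above a (hypothetical) threshold as a smile."""
    return feature.smile_degree > threshold

child = FaceFeature(estimated_age=3, smile_degree=20, face_ratio=0.8)
print(is_smiling(child))  # True
```

Numeric encodings like this make the features directly comparable across different user media content, which is what the "subsequent processing or calculation" above relies on.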
For example, in the embodiment of FIG. 3B or 3C, either the social network server 10 or the terminal device 21 may be used to build the database about the user's sharing behavior. The social network server 10 or the terminal device 21 may be configured to, after the user performs a content sharing action, perform image analysis on the user media content related to the content sharing action to obtain content features, automatically associate the target chat room information and the content features of the user's content sharing action with the user, and record them in the database. For example, the social network server 10 or the terminal device 21 performs image analysis on the user media content, such as one or more of face recognition, smile recognition, gesture recognition, blur recognition, eye-closure recognition, and good photo recognition. Face recognition is used to recognize whether a face exists in the image and the number of faces, and to analyze or recognize the feature parts of faces in the image, so as to obtain a recognition result. Smile recognition is used to recognize whether a smile exists in the image, and to analyze or recognize the feature parts of smiles in the image, so as to obtain a recognition result. Gesture recognition is used to recognize whether a human or animal gesture exists in the image, and to analyze or recognize the feature parts of gestures in the image, so as to obtain a recognition result. Blur recognition is used to recognize the degree of blur in the image and to judge whether the image is blurred or clear accordingly, so as to obtain a recognition result, and can be used to distinguish image quality.
Eye-closure recognition is used to recognize whether the eyes of a person in the image are closed, or the degree of closure, so as to obtain a recognition result, and can be used to distinguish image quality. Good photo recognition is used to recognize the brightness, color saturation, or other parameters of the image, so as to obtain a recognition result, and can be used to distinguish image quality. The image analysis may also include at least one of content-based image retrieval, face recognition, expression recognition, optical character recognition, handwriting recognition, and video object extraction. The image analysis may further include at least one of animal recognition, object recognition, and scene recognition. However, the implementation of the present invention is not limited by these examples, nor by the way the database is built or the way the content features are obtained.
Please refer to FIG. 5, which is a flowchart of an embodiment of step S10 of FIG. 3A. In this embodiment, step S10 includes the steps shown in FIG. 5. As shown in step S110, content-based image analysis is performed on the user media content. As shown in step S115, it is determined, according to the result of the image analysis, whether the user media content contains content features, such as content features recorded in the database about the user's sharing behavior. As shown in step S120, if it is determined that the user media content includes the content features, the sharing behavior data is obtained from the database, wherein the sharing behavior data includes target chat room information to which one or more previously shared contents including the content features of the user media content were sent. As shown in step S130, if it is determined that the user media content does not include the content features, the sharing behavior data is obtained from the database, wherein the sharing behavior data includes target chat room information to which one or more previously shared contents were sent.
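The branch of FIG. 5 can be sketched as follows. This is a hypothetical illustration: `analyze_image` stands in for the image analysis of step S110, and the two-part database layout (`by_feature` and `all_rooms`) is an assumption made to show steps S115, S120, and S130 concretely.

```python
# Toy database: per-feature share counts, plus overall per-room counts.
DB = {
    "by_feature": {"child_smile": {"AA": 10, "BB": 7, "CC": 3}},
    "all_rooms": {"AA": 30, "BB": 19, "CC": 10, "DD": 8},
}

def analyze_image(media):
    """Step S110 stand-in: return the content features detected in the media."""
    return media.get("features", [])

def get_sharing_data(db, media):
    features = analyze_image(media)
    matched = [f for f in features if f in db["by_feature"]]  # step S115
    if matched:
        # Step S120: chat rooms that previously received content with these features.
        merged = {}
        for f in matched:
            for room, count in db["by_feature"][f].items():
                merged[room] = merged.get(room, 0) + count
        return merged
    # Step S130: fall back to all previously targeted chat rooms.
    return dict(db["all_rooms"])

print(get_sharing_data(DB, {"features": ["child_smile"]}))  # {'AA': 10, 'BB': 7, 'CC': 3}
print(get_sharing_data(DB, {"features": []}))               # falls back to all rooms
```

The fallback branch mirrors the description: when no known feature is found, the data still covers the user's previous target chat rooms, just without conditioning on content.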
For example, in the embodiment of FIG. 3B or 3C, the social network server 10 or the terminal device 21 may perform image analysis as described above on the user media content (e.g., a child's image) according to steps S110 and S115, and determine whether the user media content includes the content features according to the image analysis. For example, the social network server 10 or the terminal device 21 recognizes the face and smile in the user media content, and learns that the user media content contains, for example, a child's face or smile. The social network server 10 or the terminal device 21 may query the database about the user's sharing behavior to obtain the target chat room information, such as the chat rooms with identification codes AA, BB, and CC, to which previously shared content including a child's face and smile was sent. In detail, for example, the user previously shared images containing a child's smiling face 10 times to the chat room with identification code AA, 7 times to the chat room with identification code BB, and 3 times to the chat room with identification code CC. Thus, according to step S120, the social network server 10 or the terminal device 21 can obtain the sharing behavior data from the database about the user's sharing behavior. For example, the sharing behavior data includes the target chat room information (e.g., identification codes AA, BB, and CC), or the sharing behavior data includes the content features (e.g., a child's face and smile) and the target chat room information (e.g., identification codes AA, BB, and CC). The target chat room information may further include the number of times an image including the child's smiling face was shared to each target chat room; for example, in the above example, the target chat room information may be expressed as {[AA,10], [BB,7], [CC,3]} with the notation "[target chat room, number of times]".
However, the implementation of the present invention is not limited by these examples, nor by the query mode of the database or the content or format of the sharing behavior data.
On the other hand, if it is determined that the user media content does not include the content features, the social network server 10 or the terminal device 21 may also obtain the sharing behavior data from the database, as shown in step S130. For example, without limiting the content features, the user previously shared images 30 times to the chat room with identification code AA, 19 times to the chat room with identification code BB, 10 times to the chat room with identification code CC, and 8 times to the chat room with identification code DD. Thus, according to step S130, the social network server 10 or the terminal device 21 may obtain the sharing behavior data from the database about the user's sharing behavior. For example, the sharing behavior data includes the target chat room information (e.g., identification codes AA, BB, CC, and DD). The target chat room information may further include the number of times images were shared to each target chat room; for example, the target chat room information may be expressed as {[AA,30], [BB,19], [CC,10], [DD,8]} with the notation "[target chat room, number of times]". However, the implementation of the present invention is not limited by these examples, nor by the query mode of the database or the content or format of the sharing behavior data.
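The "[target chat room, number of times]" notation used in the examples above can be produced from a plain count dictionary; the helper below is a hypothetical sketch (the function name and the descending-count ordering are assumptions, though the examples in the text do list rooms from most to least shared).

```python
def format_target_info(counts):
    """Render {room: count} as the "[target chat room, number of times]" notation,
    ordered by count, most-shared first."""
    pairs = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return "{" + ", ".join(f"[{room},{n}]" for room, n in pairs) + "}"

print(format_target_info({"AA": 30, "BB": 19, "CC": 10, "DD": 8}))
# {[AA,30], [BB,19], [CC,10], [DD,8]}
```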
Step S20 of FIG. 3A automatically generates the association information from the sharing behavior data obtained in step S10. The association information may be used to predict the target chat room that the user may designate when the user wants to share designated user media content, or to predict the user media content that the user may share when the user wants to designate a target chat room.
In some embodiments of step S20, the sharing behavior data obtained in step S10 may be directly set as the association information. For example, if the sharing behavior data obtained in step S10 is already sorted, at least by the degree of correlation of the content features and the number of sharing times (e.g., in descending or ascending order of the number of sharing times), the sharing behavior data obtained in step S10 may be directly set as the association information in step S20.
In some embodiments of step S20, the sharing behavior data obtained in step S10 may be processed by a ranking, searching, probability, statistics, or other algorithm to generate the association information. Please refer to FIG. 6, which is a flowchart of an embodiment of step S20 of FIG. 3A. In this embodiment, step S20 includes the steps shown in FIG. 6. As shown in step S210, a ranking score regarding the likelihood of sharing the user media content with the one or more corresponding chat rooms is determined based on the sharing behavior data obtained from the database. As shown in step S220, the association information is determined based on the ranking score. For example, the sharing behavior data contains content features (e.g., a child's face and smile) and target chat room information, which is expressed as {[AA,15], [BB,5], [CC,30]} with the notation "[target chat room, number of times]". Since these counts can be regarded as ranking scores of the likelihood that the chat rooms AA, BB, and CC become the target chat room for user media content containing the content features, the social network server 10 or the terminal device 21 can sort or search all the counts in the target chat room information, thereby deriving the sequence {[CC,30], [AA,15], [BB,5]} in descending order of count, and generating the association information accordingly. The association information may indicate that, for an image whose content features include a child's face and smile, the predicted target chat rooms ranked from most to least likely are CC, AA, and BB, respectively. In another embodiment, the social network server 10 or the terminal device 21 may calculate the sharing probabilities of the chat rooms AA, BB, and CC and derive the ranking scores of the likelihoods. For example, if the sharing probabilities of the chat rooms AA, BB, and CC are 0.3, 0.1, and 0.6, respectively, the ranking scores of the likelihoods may be set to 2, 3, and 1, respectively, to represent that an image whose content features include a child's face and smile is most likely to be shared to the chat room CC, followed by the chat rooms AA and BB.
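The conversion from sharing probabilities to ranking scores described above can be sketched as follows; the function name is hypothetical, and rank 1 denotes the most likely chat room, matching the 0.3/0.1/0.6 → 2/3/1 example in the text.

```python
def probabilities_to_ranks(probs):
    """Convert {room: sharing probability} to {room: ranking score},
    where rank 1 is the most likely target chat room."""
    ordered = sorted(probs, key=probs.get, reverse=True)  # most probable first
    return {room: rank for rank, room in enumerate(ordered, start=1)}

print(probabilities_to_ranks({"AA": 0.3, "BB": 0.1, "CC": 0.6}))
# {'CC': 1, 'AA': 2, 'BB': 3}
```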
In some embodiments, the user's media-based sharing behavior may be predicted by artificial intelligence, such as machine learning. For example, image analysis such as face recognition, smile recognition, or gesture recognition is performed on images in a folder of the user's terminal device 21, or on images in the user's album in the social network server 10, so as to find content features in the images; blur recognition, eye-closure recognition, or good photo recognition may also be performed to find images that the user is likely not to share due to problems with their content. Thus, images that are blurred, show closed eyes, or have insufficient brightness can be automatically filtered out, and the remaining images with content features can be used to estimate the user's content sharing behavior. In this way, for the images having content features, the terminal device 21 or the social network server 10 queries the database about the user's sharing behavior according to step S10 to obtain sharing behavior data including a plurality of content features and the target chat room information to which previously shared content including those content features was sent. The terminal device 21 or the social network server 10 then learns from the sharing behavior data through artificial intelligence, such as machine learning, to obtain the association information according to step S20, thereby learning the user's media-based sharing behavior.
This association information may characterize the user's media-based sharing behavior. For example, the association information contains the following information: the user media content is a good image; the image contains identifiable persons (such as the face of a child, friend, family member, or other identifiable person); the user has taken 5 photos in roughly the last 10 minutes; the user has a 95% chance of sharing photos containing children to the target chat rooms AA and BB; and the user has a 90% chance of sharing photos containing friends to the target chat rooms CC and DD. For another example, the association information contains the following information: the user took 10 photos of a daughter within 5 minutes, and the user has a 90% chance of sharing 2 to 5 of those photos to the chat rooms FG1 and FG2.
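Association information of this kind can be sketched as a small set of rules. Everything here is a hypothetical illustration: the rule list, field names, and threshold are assumptions, with only the person/probability/chat-room pattern taken from the examples above.

```python
# Hypothetical rules: person detected in the image -> likely target chat rooms,
# with the estimated sharing probability.
ASSOCIATION_RULES = [
    {"person": "child",  "chance": 0.95, "rooms": ["AA", "BB"]},
    {"person": "friend", "chance": 0.90, "rooms": ["CC", "DD"]},
]

def predict_rooms(person, min_chance=0.5):
    """Return the likely target chat rooms for a detected person,
    ignoring rules below a (hypothetical) probability threshold."""
    for rule in ASSOCIATION_RULES:
        if rule["person"] == person and rule["chance"] >= min_chance:
            return rule["rooms"]
    return []

print(predict_rooms("child"))  # ['AA', 'BB']
```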
Step S30 of the method of FIG. 3A performs a recommendation action, such as recommending a chat room or recommending user media content, according to the association information generated in step S20. Implementations in various possible use scenarios are illustrated below.
In some embodiments of the method based on FIG. 3A, step S30 may recommend a particular chat room of the one or more corresponding chat rooms as the target chat room for the user media content to be shared, based on the association information generated in step S20. These embodiments may be applicable to application scenarios where a user wants to share one or more pieces of user media content (e.g., images) on a terminal device, and the terminal device displays a recommended target chat room (which may be regarded as the particular chat room) based on the method of FIG. 3A, thereby helping the user quickly complete the sharing. As shown in FIG. 7, the user interface 700 illustrates a user interface of an application program executable on the terminal device 21, such as a client program of the social networking service, an image browser, or a web browser. The user selects one of the images, image 701, in the user interface 700 and performs a sharing operation with respect to the image 701, for example, by selecting an interface component representing sharing in the user interface 700, such as the sharing item 702 shown in FIG. 7.
Step S30 of fig. 3A in the present embodiment may be implemented by the terminal device 21, as indicated by arrow a230 in fig. 3B. According to step S30 of fig. 3A, the terminal device 21 outputs recommended items using the association information generated in step S20 and causes the user interface 700 of the terminal device 21 to present the recommendation dialog 703. It is assumed in this embodiment that the content of the image 701 is characterized by containing a child's face, and the association information indicates that, for an image whose content feature is a child's face, the estimated target chat rooms, arranged from most likely to least likely, are CC, AA, BB, where CC, AA, BB are chat room identification codes corresponding to the chat rooms named "my family", "friends", and "colleagues", respectively. As shown in fig. 7, the recommendation dialog 703 displayed by the user interface 700 contains the chat room names "my family", "friends", and "colleagues", which may be arranged from most likely to least likely to emphasize to the user that "my family" is the most likely target chat room when the user shares the same kind of image. The user may select one of the items, for example "my family", from the recommendation dialog 703, thereby indicating to the terminal device 21 that the target chat room for the image 701 to be shared is "my family". The terminal device 21 may transmit the image 701 to the social network server 10 according to the target chat room selected by the user, thereby sharing the image 701 to the target chat room.
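As an illustrative, non-limiting sketch, the ranking behind the recommendation dialog described above can be expressed as sorting chat-room identification codes by estimated likelihood; the numeric likelihood values below are assumed for illustration (only the ordering CC > AA > BB is taken from the embodiment):

```python
def recommend_chat_rooms(likelihoods, names):
    """Rank chat-room IDs by descending estimated likelihood and map them
    to their display names, as shown in the recommendation dialog."""
    ranked = sorted(likelihoods, key=likelihoods.get, reverse=True)
    return [names[room] for room in ranked]

# Assumed likelihoods for an image whose content feature is a child's face.
likelihoods = {"CC": 0.6, "AA": 0.3, "BB": 0.1}
names = {"CC": "my family", "AA": "friends", "BB": "colleagues"}
# recommend_chat_rooms(likelihoods, names)
# → ["my family", "friends", "colleagues"], listed top to bottom in the dialog
```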
In this embodiment, step S30 of fig. 3A may also be implemented by the social network server 10, as indicated by arrow a430 in fig. 3C: the social network server 10 may generate recommended items (such as the chat room identification codes CC, AA, BB) according to the association information and output them to the terminal device 21, so as to complete step S30. The terminal device 21 may then present the recommended items output by the social network server 10 in the recommendation dialog 703.
In some embodiments of the method based on fig. 3A, step S30 may recommend particular media content in a media content collection as the media content to be shared in one of the one or more corresponding chat rooms, according to the association information generated in step S20, wherein the media content collection contains the user media content. These embodiments may be applicable to application scenarios in which there are multiple items of user media content and the user, while in one of the one or more corresponding chat rooms, wants to share user media content; the terminal device then displays recommended user media content. As shown in fig. 8, the user interface 800 illustrates a user interface of an application program executable on the terminal device 21, such as a client program of a social network service or a web browser; for example, the user interface 800 displays a conversation among different users in the "my family" chat room. The user of the terminal device 21 performs a sharing operation in the user interface 800, for example by selecting an interface component representing sharing in the user interface 800, such as the sharing item 802 shown in fig. 8. Further, the media content collection may contain media content stored by the user in the terminal device 21 or media content in a social network service. However, the implementation of the present invention is not limited by the above examples.
Step S30 of fig. 3A in the present embodiment may be implemented by the terminal device 21, as indicated by arrow a230 in fig. 3B. According to step S30 of fig. 3A, the terminal device 21 outputs recommended items using the association information generated in step S20 and causes the user interface 800 of the terminal device 21 to present the recommendation dialog 803. As shown in fig. 8, the user interface 800 displays a plurality of thumbnails (e.g., 804, 805) representing user media content (e.g., images) in the recommendation dialog 803. The user may select one or more of the thumbnails, e.g., thumbnail 804, from the recommendation dialog 803 to indicate to the terminal device 21 that the user media content (e.g., image) represented by thumbnail 804 is to be shared in the "my family" chat room. The terminal device 21 may transmit the user media content represented by the thumbnail 804 to the social networking server 10 according to the thumbnail 804 selected by the user, thereby sharing that user media content in the "my family" chat room.
In this embodiment, step S30 of fig. 3A may also be implemented by the social network server 10, as indicated by arrow a430 in fig. 3C: the social network server 10 may generate a recommended item (e.g., an identification code of the user media content, such as an image) according to the association information and output it to the terminal device 21, so as to complete step S30. The terminal device 21 may then present the recommended item output by the social network server 10 in the recommendation dialog 803.
How recommended items are output according to step S30 is further illustrated below. As in the previous embodiments regarding steps S10 and S20, the social network server 10 and the terminal device 21 may obtain sharing behavior data from a database of user sharing behaviors and generate the association information accordingly. The sharing behavior data contains target chat room information for a certain content feature (e.g., a child's face or a smile), represented for example in the form "[target chat room, times]" as {[AA,15], [BB,5], [CC,30]}, from which the sharing probabilities of chat rooms AA, BB, CC are 0.3, 0.1, and 0.6, respectively (each count divided by the total of 50). Please refer to Table 1, which shows the sharing probabilities of the target chat rooms AA, BB, CC, DD corresponding to the various content features AT1, AT2, AT3. For example, the content features AT1, AT2, AT3 in Table 1 represent the faces of a child, a pet, and a colleague, respectively, and the target chat rooms AA, BB, CC, DD (identification codes) represent the chat rooms named "friends", "colleagues", "my family", and "company", respectively. Table 1 may be generated from the sharing behavior data, or may be generated through calculation using the association information or a set of association information. Table 1 may be implemented using any suitable data structure or database. The implementation of the present invention is not limited by the above examples, nor by the form or content of Table 1.
TABLE 1
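Purely as an illustrative sketch, the conversion from "[target chat room, times]" counts into sharing probabilities described above (15, 5, and 30 shares yielding 0.3, 0.1, and 0.6) can be expressed as a simple normalization; the function name is an assumption for illustration:

```python
def sharing_probabilities(counts):
    """Convert per-chat-room share counts into sharing probabilities
    by dividing each count by the total number of shares."""
    total = sum(counts.values())
    return {room: times / total for room, times in counts.items()}

# The {[AA,15], [BB,5], [CC,30]} example from the sharing behavior data above.
counts = {"AA": 15, "BB": 5, "CC": 30}
# sharing_probabilities(counts) → {"AA": 0.3, "BB": 0.1, "CC": 0.6}
```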
How the recommended items are output according to step S30 is further described below using Table 1.
In the related embodiment of fig. 7, the social network server 10 or the terminal device 21 recommends a particular chat room from the one or more corresponding chat rooms as the target chat room for the user media content to be shared. Regarding the content feature (e.g., a child) of the user media content (e.g., image 701) selected by the user, the social network server 10 or the terminal device 21 may obtain the sharing probabilities of the corresponding chat rooms from the fields of Table 1 for the content feature AT1 (a child's face), and may determine that the order of sharing probabilities among the corresponding chat rooms, from largest to smallest, is CC, AA, BB, where the chat rooms CC, AA, BB correspond to the chat rooms named "my family", "friends", and "colleagues", respectively. Therefore, in the recommendation dialog 703 of fig. 7, the names of the 3 chat rooms are listed from top to bottom in that order.
In a related embodiment of fig. 8, the social networking server 10 or the terminal device 21 recommends particular media content from the media content collection as the media content to be shared in one of the one or more corresponding chat rooms. For the chat room in which the user is currently located (the chat room named "my family" with identification code CC), the social network server 10 or the terminal device 21 may query the fields of Table 1 for the corresponding chat room CC for content features whose sharing probability is greater than 0, and thereby determine that user media content containing the content features AT1 or AT2 is media content likely to be shared. The social networking server 10 or the terminal device 21 may accordingly find, in the media content collection, user media content containing the content feature AT1 or AT2, or containing both AT1 and AT2, and further select particular media content therefrom as recommended items. Therefore, in the recommendation dialog 803 of fig. 8, a plurality of thumbnails, each containing an image of a child or a pet, are listed from left to right.
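As a non-limiting sketch of the filtering step just described, the candidate media can be selected by intersecting each item's content features with the features whose sharing probability for the current chat room exceeds 0; the probability values, media identifiers, and dictionary layout below are assumptions for illustration:

```python
def shareable_media(media_collection, feature_probs, room):
    """Select media whose content features have sharing probability > 0
    for the given chat room."""
    allowed = {f for f, rooms in feature_probs.items() if rooms.get(room, 0) > 0}
    return [m for m in media_collection if set(m["features"]) & allowed]

# Assumed per-feature probabilities: only AT1 and AT2 are > 0 for chat room CC.
probs = {"AT1": {"CC": 0.6}, "AT2": {"CC": 0.3}, "AT3": {"DD": 0.8}}
media = [
    {"id": "img1", "features": ["AT1"]},          # child
    {"id": "img2", "features": ["AT2"]},          # pet
    {"id": "img3", "features": ["AT3"]},          # colleague
    {"id": "img4", "features": ["AT1", "AT2"]},   # child and pet
]
# shareable_media(media, probs, "CC") keeps img1, img2, and img4,
# i.e., the child and pet images shown as thumbnails in the dialog.
```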
In one embodiment, the social networking server 10 or the terminal device 21 may further select particular media content from the media content collection according to the number of shares. For example, in the related embodiment of fig. 8, since the rows for the content features AT1 and AT2 in the column of the corresponding chat room CC in Table 1 have sharing probabilities greater than 0, both are media content that may be shared; it can further be queried that user media content having the content features AT1 and AT2 has been shared into the corresponding chat room CC 180 and 85 times, respectively. That is, user media content with the content feature AT1 has been shared more often than user media content with the content feature AT2. Accordingly, for the event of sharing user media content into the chat room CC, the social networking server 10 or the terminal device 21 can determine that user media content having the content feature AT1 is more likely to be shared than user media content having the content feature AT2, and can therefore place the user media content (or its identification code or thumbnail) having the content feature AT1 earlier in the order when outputting the recommendation (such as the particular media content).
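The ordering step above (AT1 before AT2 because 180 shares exceed 85) can be sketched as a sort over the candidates' feature share counts; the identifiers below are hypothetical, and taking the maximum count over an item's features is one assumed tie-breaking choice among several possible ones:

```python
def order_by_share_count(candidates, counts):
    """Order candidate media so that items whose content features have been
    shared more often into the target chat room come first."""
    return sorted(candidates,
                  key=lambda m: max(counts.get(f, 0) for f in m["features"]),
                  reverse=True)

# Assumed share counts into chat room CC: AT1 shared 180 times, AT2 85 times.
counts_cc = {"AT1": 180, "AT2": 85}
candidates = [{"id": "pet_pic", "features": ["AT2"]},
              {"id": "child_pic", "features": ["AT1"]}]
# → child_pic is placed first, since AT1's count (180) exceeds AT2's (85)
```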
In one embodiment, the social networking server 10 or the terminal device 21 may further use a time interval condition to select particular media content from the media content collection. For example, in the related embodiment of fig. 8, the social network server 10 or the terminal device 21 may obtain, from a database of user sharing behaviors, a record of the user taking images (or generating, downloading, or similarly obtaining images) within a recent time interval (e.g., 10 minutes, 30 minutes, or 1 hour), and select particular media content from the media content collection according to the embodiment regarding Table 1. For example, suppose the user took 3 photos (e.g., photos containing a child) within the last 10 minutes or so. For the chat room in which the user is currently located (the chat room named "my family" with identification code CC), the social network server 10 or the terminal device 21 can learn from the fields of Table 1 for the corresponding chat room CC that user media content containing the content features AT1 or AT2 is media content likely to be shared, and therefore selects these 3 photos containing a child, taken within the last 10 minutes, as the particular media content. As another example, if the chat room in which the user is located is the chat room named "company" (with identification code DD), the social network server 10 or the terminal device 21 learns from the fields of Table 1 for the corresponding chat room DD that user media content containing the content feature AT3 is media content likely to be shared.
In that case, the social networking server 10 or the terminal device 21 excludes these 3 photos taken within the last 10 minutes and instead searches the media content collection for user media content that contains the content feature AT3 and was taken (or generated, downloaded, etc.) within the recent time interval, in order to select particular media content. In the above embodiments, if no user media content meeting the conditions is found within the time interval, the social network server 10 or the terminal device 21 may further increase the value of the time interval and search again, so that user media content meeting the conditions can be found. Conversely, if a large number of items of user media content meeting the conditions are found within the time interval, the social network server 10 or the terminal device 21 may further decrease the value of the time interval and search again, so as to narrow down the user media content meeting the conditions.
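The widen-or-narrow time interval search described above can be sketched as follows; the doubling/halving step sizes, the result bounds, and the 60-minute cap are assumptions for illustration, not values taken from the disclosure:

```python
def select_recent_media(media, now, window_minutes=10,
                        min_results=1, max_results=5, max_window=60):
    """Pick media captured within a recent window (timestamps in seconds),
    widening the window when too few items are found and narrowing it when
    too many are found, as described in the embodiment above."""
    tried = set()
    hits = []
    while window_minutes not in tried:   # stop if a window size repeats
        tried.add(window_minutes)
        cutoff = now - window_minutes * 60
        hits = [m for m in media if m["captured_at"] >= cutoff]
        if len(hits) < min_results and window_minutes < max_window:
            window_minutes = min(window_minutes * 2, max_window)   # widen
        elif len(hits) > max_results and window_minutes > 1:
            window_minutes = max(window_minutes // 2, 1)           # narrow
        else:
            break
    return hits

# A photo taken 5 minutes ago is found with the default 10-minute window;
# a photo taken 40 minutes ago is found only after the window widens.
```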
TABLE 2
Please refer to Table 2, which shows the number of times user media content with the various content features AT1, AT2, AT3 has been shared to the target chat rooms AA, BB, CC, DD. Table 2 may be generated by querying the sharing behavior data of a database of user sharing behaviors, or may be obtained using the association information. Table 2 may also be considered part of such a database, or one implementation of recording the user's sharing behavior.
The foregoing embodiments using Table 1 may instead be implemented using Table 2. The social network server 10 or the terminal device 21 may output recommended items, such as a recommended target chat room or recommended media content, using Table 2 and according to step S30.
For example, in the related embodiment of fig. 7, regarding the content feature (e.g., a child) of the user media content (e.g., image 701) selected by the user, the social network server 10 or the terminal device 21 may obtain the sharing counts of the corresponding chat rooms from the fields of Table 2 for the content feature AT1 (a child's face), and may determine that the order of sharing counts (or probabilities) among the corresponding chat rooms, from largest to smallest, is CC, AA, BB, where the chat rooms CC, AA, BB correspond to the chat rooms named "my family", "friends", and "colleagues", respectively. Therefore, in the recommendation dialog 703 of fig. 7, the names of the 3 chat rooms are listed from top to bottom in that order.
For example, in the related embodiment of fig. 8, for the chat room in which the user is currently located (the chat room named "my family" with identification code CC), the social network server 10 or the terminal device 21 may query the fields of Table 2 for the corresponding chat room CC for content features whose sharing count is greater than 0, and thereby determine that user media content containing the content features AT1 or AT2 is media content likely to be shared. Using Table 2, the social network server 10 or the terminal device 21 may further determine that user media content containing the content features AT1 and AT2 has been shared to the corresponding chat room CC 180 and 85 times, respectively. Accordingly, for the event of sharing user media content into the chat room CC, the social networking server 10 or the terminal device 21 may determine that user media content having the content feature AT1 has a greater chance (or probability) of being shared than user media content having the content feature AT2. Thus, the social network server 10 or the terminal device 21 can find, in the media content collection, user media content containing the content feature AT1 or AT2, or containing both AT1 and AT2, and further select particular media content as recommended items. For example, in the recommendation dialog 803 of fig. 8, a plurality of thumbnails, each containing an image of a child or a pet, are listed from left to right.
Other embodiments described above using table 1 may similarly be implemented instead using table 2. The implementation of the present invention is not limited by the above examples, nor by the form or content of table 1 or table 2.
As shown in the embodiments based on fig. 3A, because recommended items are output, the user can quickly select the target chat room for the user media content to be shared, or the user media content itself, while performing a media-based content sharing action on the terminal device, thereby improving the efficiency of content sharing and reducing the user's burden. Thus, the method of fig. 3A may be implemented in a terminal device or a server to efficiently utilize the network and computing resources of a social networking service and to facilitate media-based content sharing performed in a computing device.
In some embodiments of the method based on fig. 3A, the recommendation of step S30 may also include text or other forms of content. Please refer to fig. 9, which is a diagram illustrating another embodiment of step S30 of fig. 3A, used to recommend media content and provide additional suggestions. In this embodiment, the database of user sharing behaviors may further include text information associated with the content features, and step S30 further includes: providing text as an additional suggestion based at least on the text information. As shown in fig. 9, when the user selects a sharing item in the user interface 800, the recommendation dialog 803 displays thumbnails of the recommended user media content. In comparison with the embodiment of fig. 8, corresponding text 901, 902 is also displayed in the recommendation dialog 803 of fig. 9 near the thumbnails 804, 805 as additional suggestions. The user may select (e.g., click on) a thumbnail or the text adjacent to it in the user interface 800 to choose the corresponding user media content, thereby sending a request to the social network server 10 or the terminal device 21 to share the selected user media content in the chat room in which the user is currently located, so that the image and text selected by the user are shared together.
In some embodiments based on the method of fig. 3A, the user's media-based sharing behavior may be predicted by artificial intelligence, such as natural language processing. For example, text transmitted by the user during or after the act of sharing an image may also be incorporated into the analysis. The terminal device 21 or the social network server 10 may analyze the text content using artificial intelligence such as natural language processing and learn the text content features of the text used in the user's sharing behavior, for example the nouns, verbs, and adjectives the user habitually uses, or the structure or pattern of the user's sentences; it may also analyze the association between the content features of the images shared by the user and the text content features, and record the results of these analyses in a database of user sharing behaviors. For example, when sharing images containing children, the user often attaches text messages such as "my son …" or "Brad today …"; from this it can be learned that the child with certain facial features in those images is the user's son, that the user often attaches text messages containing "… son …" when sharing images of the child, and that the English name of the user's son is "Brad". The above embodiments may be applied to the related embodiment of fig. 9, so that the social network server 10 or the terminal device 21 generates the text 901, 902. For example, by image analysis such as face recognition and gesture recognition, the social network server 10 or the terminal device 21 can determine that the user media content (images) corresponding to the thumbnails 804, 805 shows the user's child flying and smiling, respectively. Accordingly, the social network server 10 or the terminal device 21 generates the text 901 and 902 shown in fig. 9 based on this information.
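Purely as a minimal, non-limiting sketch of the final text-suggestion step, a learned vocabulary (e.g., that the user calls the child "Brad") can be combined with image-analysis results through a simple template; a real system would use natural language processing models rather than this fixed template, and all names and the sentence pattern below are assumptions:

```python
def suggest_caption(image_info, learned_phrases):
    """Compose a suggested caption from learned text patterns and
    image-analysis results (hypothetical template-based sketch)."""
    # Substitute the user's habitual name for the recognized person, if known.
    subject = learned_phrases.get(image_info["person"], image_info["person"])
    return f"{subject} is {image_info['action']} today"

# Assumed analysis results and learned vocabulary.
learned = {"child": "Brad"}
info = {"person": "child", "action": "smiling"}
# suggest_caption(info, learned) → "Brad is smiling today"
```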
Furthermore, some embodiments provide a computing-device-readable storage medium storing instructions that, when executed by a processing unit of a computing device (e.g., a terminal device or server as described above), implement a method of facilitating media-based content sharing according to at least one of the embodiments of figs. 3A, 3B, 3C, 4, 5, 6, 7, 8, and 9, or a combination thereof. For example, the program code may comprise one or more programs or program modules, such as program code implementing steps S10 to S30 of fig. 3A, which may be executed in any suitable order.
The exemplary embodiments of the present invention can also be implemented as a computer program executed in a digital computer using a computer readable medium. Examples of the computer readable medium include magnetic storage media (e.g., floppy disks and hard disks), optical recording media (e.g., CD-ROMs and DVDs), and memories such as a memory card, ROM, or RAM. The computer readable medium may also be distributed over network-coupled computer systems, for example via an application store.
In summary, various embodiments may be implemented in a terminal device or a server, efficiently utilizing the network and computing resources of a social network service to facilitate media-based content sharing performed in a computing device, thereby improving the efficiency of content sharing and reducing the usage burden on users.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While the invention has been described in terms of preferred embodiments, it will be understood by those skilled in the art that the foregoing is only illustrative of the invention and is not to be construed as limiting the scope of the invention. It should be noted that all changes and substitutions equivalent to the described embodiments are intended to be included in the scope of the present invention. Accordingly, the scope of the invention is defined by the following claims.