WO2023045951A1 - Video processing method, video processing device and computer-readable storage medium - Google Patents

Video processing method, video processing device and computer-readable storage medium

Info

Publication number
WO2023045951A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
processing method
characters
viewer
Prior art date
Application number
PCT/CN2022/120111
Other languages
English (en)
French (fr)
Inventor
埃里卡·林恩·鲁济奇
段懿
Original Assignee
Beijing ByteDance Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co., Ltd.
Publication of WO2023045951A1 publication Critical patent/WO2023045951A1/zh

Classifications

    • G06V 10/7788: Active pattern-learning, e.g. online learning of image or video features, based on feedback from supervisors, the supervisor being a human, e.g. interactive learning with a human teacher
    • G06F 16/784: Retrieval characterised by using metadata automatically derived from the content, using objects detected or recognised in the video content, the detected or recognised objects being people
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 16/7335: Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • G06F 16/735: Filtering based on additional data, e.g. user or group profiles
    • G06F 16/74: Browsing; visualisation therefor
    • G06F 16/783: Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/7867: Retrieval characterised by using manually generated information, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F 16/9536: Search customisation based on social or collaborative filtering
    • G06Q 50/01: Social networking
    • G06V 20/30: Scenes or scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a video processing method, a video processing device and a computer-readable storage medium.
  • a social network may provide various services, such as photo or video sharing, messaging, etc., based on user input, to facilitate social interaction among users.
  • Digital media may include images, video, audio, text, and more.
  • users can post their own created videos on social networks, and initiate interactions with other users through operations such as reminders.
  • Other users on social networks can interact with video creators by browsing, liking, commenting, etc.
  • a video processing method including:
  • the marking result is displayed outside the video display interface in the form of an information flow.
  • a video processing device including:
  • a display configured to provide the first user with an interactive interface for marking characters in the video
  • a processor configured to receive a marking operation on at least one character in the video input by the first user through an interactive interface
  • the display is further configured to, in response to the first user's marking operation, display the marking result in the form of an information flow outside the video display interface when the video is published on the social network.
  • a video processing device including:
  • a processor coupled to the memory, the processor configured to execute one or more steps in the video processing method of any embodiment described in the present disclosure based on instructions stored in the memory.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the video processing method of any embodiment described in the present disclosure is executed.
  • FIG. 1 shows a flowchart of a video processing method according to some embodiments of the present disclosure
  • Fig. 2 shows a schematic diagram of an interactive interface according to some embodiments of the present disclosure
  • FIG. 2A and FIG. 2B respectively show interactive pages with different ways of presenting the mark introduction floating layer according to some embodiments of the present disclosure
  • Fig. 3A shows a schematic diagram of a "mark people" page in a search state according to some embodiments of the present disclosure
  • Fig. 3B shows a schematic diagram of a "mark people" page in a recommendation state according to some embodiments of the present disclosure
  • FIG. 3C shows a schematic diagram of a "mark people" page displaying a list of marked people according to some embodiments of the present disclosure
  • FIG. 3D and FIG. 3E respectively show the interactive interface before posting a video, in the cases of a single person being tagged and multiple people being tagged, according to some embodiments of the present disclosure
  • Figure 3F illustrates a video preview page prior to posting a video, according to some embodiments of the present disclosure
  • FIG. 4A and FIG. 4B respectively show schematic diagrams of high interest value display pages according to some embodiments of the present disclosure
  • FIG. 4C and FIG. 4D respectively show schematic diagrams of low interest value display pages according to some embodiments of the present disclosure
  • Figures 4E and 4F respectively show, according to some embodiments of the present disclosure, a floating layer with a list of marked characters displayed in the cases where a single person is marked and multiple people are marked
  • FIG. 5A shows a page that sends notifications and pushes to the user accounts of marked characters after a video is published, according to some embodiments of the present disclosure
  • Figure 5B shows the detailed content of the notification and push in Figure 5A
  • FIG. 6 shows a flowchart of a video processing method according to other embodiments of the present disclosure.
  • FIG. 6A shows a display page when a video creator browses a published video according to some embodiments of the present disclosure
  • Fig. 6B shows a display page for displaying marked people in a video provided with an "edit marked people" entry according to some embodiments of the present disclosure
  • Figure 6C illustrates an "Edit Marked People" page according to some embodiments of the present disclosure
  • Figure 6D shows a sharing page according to some embodiments of the present disclosure
  • Fig. 7 shows a flow chart of a video processing method according to some other embodiments of the present disclosure.
  • FIG. 7A shows a display page when a second user browses a published video according to some embodiments of the present disclosure
  • Figure 7B illustrates a "delete mark" page according to some embodiments of the present disclosure
  • Figure 7C illustrates the "Add Back Flags" page according to some embodiments of the present disclosure
  • Fig. 7D shows a sharing page according to other embodiments of the present disclosure.
  • FIG. 8 illustrates a display page for friending from tagged videos according to some embodiments of the present disclosure
  • Figure 9 shows a block diagram of a video processing device according to some embodiments of the present disclosure.
  • Fig. 10 shows a block diagram of a video processing device according to other embodiments of the present disclosure.
  • FIG. 11 shows a block diagram of an electronic device according to some embodiments of the present disclosure.
  • the term "comprising" and its variants used in the present disclosure mean an open term that includes at least the following elements/features but does not exclude other elements/features, i.e., "including but not limited to". Thus, "including" is synonymous with "comprising".
  • the term “based on” means “based at least in part on”.
  • references throughout this specification to "one embodiment," "some embodiments," or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments.”
  • appearances of the phrases "in one embodiment," "in some embodiments," or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment, although they may refer to the same embodiment.
  • the present disclosure does not limit how to obtain the image or video to be applied/processed.
  • it can be obtained from a storage device, such as an internal memory or an external storage device.
  • a camera assembly can be invoked to capture images or videos.
  • the image or video type is not specifically limited.
  • the image or video may be a raw image or video obtained by a camera device, or an image or video that has undergone specific processing on the raw image or video, such as preliminary filtering, anti-aliasing, color adjustment, contrast adjustment, normalization, etc.
  • the preprocessing operation may also include other types of preprocessing operations known in the art, which will not be described in detail here.
  • a person tagging function is introduced for videos on social networks. Once the tagging function is enabled, the creator will be able to tag people in the video, and can continue to edit the tagging results after the video is published, such as adding tags, removing tags, changing tags, etc.
  • Fig. 1 shows a flowchart of a video processing method according to some embodiments of the present disclosure.
  • the video processing method includes: Step S1, providing the first user with an interactive interface for marking characters in the video; Step S3, receiving a marking operation, input by the first user through the interactive interface, on at least one character in the video; Step S5, in response to the first user's marking operation, displaying the marking result outside the video display interface in the form of an information flow when the video is posted on the social network.
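Purely as an illustrative sketch (the disclosure itself contains no code), the three steps above can be modeled as follows; the class and function names are assumptions for illustration, not part of the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    tagged_users: list = field(default_factory=list)

def receive_marking_operation(video: Video, selected_users: list) -> None:
    """Step S3: record the marking operation input through the interactive interface."""
    for user in selected_users:
        if user not in video.tagged_users:
            video.tagged_users.append(user)

def render_feed_entry(video: Video) -> str:
    """Step S5: show the marking result outside the video display interface."""
    if not video.tagged_users:
        return ""
    return "Tagged: " + ", ".join(video.tagged_users)

video = Video("v1")
receive_marking_operation(video, ["C", "B"])
print(render_feed_entry(video))  # Tagged: C, B
```

Step S1 (presenting the interactive interface) is a UI concern and is not modeled here.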
  • the first user is, for example, a video creator.
  • the interactive interface is, for example, a video publishing page.
  • Fig. 2 shows a schematic diagram of an interactive interface according to some embodiments of the present disclosure.
  • a "tag people" entry is set under the title of the video publishing page; after clicking it, the first user can enter the "tag people" page to perform marking operations. After entering the "tag people" page, the first user can select the corresponding person to mark.
  • an overlay introducing the tagging feature can also be provided, for example displayed when the device first enters a version of the social network that supports tagging.
  • Figure 2A and Figure 2B respectively show interactive pages with different ways of presenting the mark introduction floating layer; in both, the content of the floating layer includes the title "Mark people in the video" and descriptions such as "Visible to anyone" and "You can edit the people you tag after the video is posted. People you tag can also delete themselves."
  • Fig. 3A shows a schematic diagram of a "mark people" page in a search state according to some embodiments of the present disclosure.
  • the search box on the page shown in FIG. 3A can be used by the first user to search for persons to be marked.
  • the search range covers all users except those blocked by the first user and those who have blocked the first user. Search results are displayed in real time based on the entered text.
  • the page displays the avatar, nickname, and username of each user found in the search. For example, as shown in FIG. 3A, if the letter "a" is input in the search box, the page displays all usernames whose initial letter is "a" or "A", together with each user's profile picture and nickname. In some embodiments, the page also displays the relationship between each user and the first user, e.g., friends and followers.
  • the first user can click on any user to treat that user as "selected" and add them to the "marked person list". For example, as shown in FIG. 3A, the first user selects user C as a character mark, that is, marks a certain character appearing in the video as "C".
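The search behavior described above can be sketched as a prefix match that excludes blocked accounts. This is a minimal illustration under assumed data structures, not the claimed implementation.

```python
def search_taggable_users(query, all_users, blocked, blocked_by):
    """Return users whose name starts with the entered text (case-insensitive),
    excluding users blocked by the first user and users who blocked them."""
    excluded = set(blocked) | set(blocked_by)
    q = query.lower()
    return [u for u in all_users
            if u not in excluded and u.lower().startswith(q)]

users = ["alice", "Andy", "adam", "bob"]
print(search_taggable_users("a", users, blocked={"adam"}, blocked_by=set()))
# ['alice', 'Andy']
```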
  • the first user may also select a corresponding character from the recommended tag list to tag.
  • Fig. 3B shows a schematic diagram of a "mark people" page in a recommendation state according to some embodiments of the present disclosure. As shown in FIG. 3B , the list of recommended tags can be divided into three areas: recent, friends and following.
  • the "recent" area can include two lists, namely the "recently marked people" list and the "recently messaged people" list, with the "recently marked people" list arranged before the "recently messaged people" list; that is, the "recently marked people" list is shown first, followed by the "recently messaged people" list.
  • both lists can be sorted according to the last interaction time, and the total number of displayed characters can be set according to the needs of the page display, for example, up to 10 people.
  • each recommended user displays an avatar, nickname, and user name.
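As a sketch of the "recent" recommendation ordering described above (recently marked before recently messaged, each sorted by last interaction, capped at 10), under assumed input shapes:

```python
def recent_recommendations(recently_tagged, recently_messaged, limit=10):
    """Inputs: lists of (username, last_interaction_timestamp) pairs.
    Recently tagged people are listed before recently messaged people;
    each group is sorted most-recent-first; duplicates keep first slot."""
    tagged = sorted(recently_tagged, key=lambda x: x[1], reverse=True)
    messaged = sorted(recently_messaged, key=lambda x: x[1], reverse=True)
    seen, result = set(), []
    for name, _ in tagged + messaged:
        if name not in seen:
            seen.add(name)
            result.append(name)
        if len(result) == limit:
            break
    return result

print(recent_recommendations([("B", 10), ("C", 20)], [("D", 30), ("B", 5)]))
# ['C', 'B', 'D']
```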
  • FIG. 3C shows a schematic diagram of a "Flag People” page displaying a list of "Flag People” according to some embodiments of the present disclosure.
  • the "marked person" list is in the area below the recommended tag list; this area is only displayed when there is at least one marked person, and all marked people can be sorted and displayed according to update time from earliest to latest.
  • the first user can click the "X" in the upper right corner of each marked character to cancel its "selected" status. After clicking the "Done (X)" button, the marking status is saved and the page closes, where X is the number of people who have been marked.
  • FIGS. 3D and 3E show the interactive interface before publishing the video.
  • Figures 3D and 3E show the situations where a single person is marked and multiple people are marked, respectively.
  • the avatar of the marked person is displayed next to the "mark person” button.
  • when the number of marked people M exceeds the number of avatars that can be shown, for example M = 4, the avatars of 2 marked people and "+2" are displayed next to the "Mark People" button.
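The avatar summary next to the "Mark People" button can be sketched as below; the cutoff of 3 full avatars is an assumption chosen so that the "+2" example above falls out of the rule.

```python
def tag_summary(tagged, max_avatars=3):
    """With few tagged people, show every avatar; beyond the cutoff,
    show 2 avatars plus a '+N' counter for the remainder."""
    if len(tagged) <= max_avatars:
        return list(tagged)
    return list(tagged[:2]) + ["+" + str(len(tagged) - 2)]

print(tag_summary(["A"]))                 # ['A']
print(tag_summary(["A", "B", "C", "D"]))  # ['A', 'B', '+2']
```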
  • a preview page of the video cover may also be presented, as shown in FIG. 3F.
  • the video preview page will display the marking results, providing the same experience as the page that actually browses the video.
  • the marking result is displayed in the form of an information flow outside the video display interface when the video is posted on the social network.
  • the display page includes not only the tagged results for the video but also other feeds, and how these feeds are displayed depends on the viewer's expected interest value. That is, the video can be displayed in a corresponding information flow according to the viewer's expected interest value in the video.
  • the viewer's expected interest value in the video may be determined according to the relationships between the viewer, the first user, and the marked characters in the video. Different information flow display methods can be chosen according to the expected interest value.
  • when the viewer's expected interest value in the video is greater than or equal to a threshold, the avatar of at least one marked character in the video is displayed; when the viewer's expected interest value in the video is less than the threshold, the username of at least one marked character in the video is displayed.
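The threshold rule above reduces to a single branch; the numeric scale and threshold value here are assumptions for illustration only.

```python
def feed_display_mode(expected_interest, threshold=0.5):
    """Avatars for viewers at or above the interest threshold,
    usernames for viewers below it."""
    return "avatars" if expected_interest >= threshold else "usernames"

print(feed_display_mode(0.8))  # avatars
print(feed_display_mode(0.2))  # usernames
```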
  • the tag of the video displayed to the viewer may also be determined according to the relationships between the viewer and the first user and the tagged characters in the video.
  • when the viewer is one of the marked characters in the video, the tag is determined to be the first tag, that is, the video is displayed with the first tag; when the viewer is not one of the marked characters in the video but has an association relationship with the first user, the tag is determined to be the second tag, that is, the video is displayed with the second tag; when the viewer has an association relationship with a marked character in the video, the tag is determined to be the third tag, that is, the video is displayed with the third tag; when the viewer is associated with neither the first user nor any marked character in the video, the tag is determined to be the fourth tag, that is, the video is displayed with the fourth tag.
  • the first tag is, for example, "You were tagged in the video”.
  • the second label is, for example, "friends/people you follow”.
  • the third label is, for example, "friends/followed people are marked”.
  • the fourth tag is, for example, "no association", "low interest value", or "none". An association relationship includes a friend or follow relationship. When the fourth tag is "none", no special tag is shown when displaying the video.
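The four-way tag decision described above can be sketched as a cascade of relationship checks. The return values stand in for the example tag strings; "associated" means friend or follow, per the text.

```python
def choose_tag(viewer, creator, tagged, associated):
    """tagged: set of marked characters; associated: accounts the viewer
    is friends with or follows."""
    if viewer in tagged:
        return "first"    # "You were tagged in the video"
    if creator in associated:
        return "second"   # "friends / people you follow"
    if any(person in associated for person in tagged):
        return "third"    # "friends / followed people are marked"
    return "fourth"       # no association / low interest value

print(choose_tag("V", "A", {"V"}, set()))   # first
print(choose_tag("V", "A", {"B"}, {"A"}))   # second
print(choose_tag("V", "A", {"B"}, {"B"}))   # third
print(choose_tag("V", "A", {"B"}, set()))   # fourth
```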
  • for the first tag, the second tag and the third tag, that is, when the viewer is expected to have a high interest value in the video, the avatar of at least one tagged person in the video may be displayed; for the fourth tag, the username of at least one tagged person in the video may be displayed.
  • different display pages may be displayed according to the number of marked characters in the video.
  • displaying the video to the viewer with the first tag, the second tag or the third tag reflects that the viewer is expected to have a high interest value in the video; when the viewer has no friend, follower or matched-friend relationship with the video creator or with anyone marked in the video, the video has a low expected interest value for that viewer. In the low interest value case, the tag entry appears on a new row.
  • FIG. 4A and FIG. 4B respectively show schematic diagrams of high interest value display pages according to some embodiments of the present disclosure.
  • FIG. 4A shows a situation where only one person is marked; the avatar and username of the marked person are displayed. If the marked person's username is too long, the marked item and creation time (if any) can be displayed on a new line. If the username is still too long, the avatar is kept and the username is truncated with the symbol "…".
  • FIG. 4A also shows a "friends/followed people are marked" tag (the third tag). When it is clicked on the display page, the marked-people list floating layer can be opened.
  • Figure 4B shows a situation where multiple people are marked. Since many people have been marked, displaying both avatars and usernames would occupy too much of the display page, so only the avatars of some marked people (for example, 2 people) are displayed together with the number of marked people, such as "5 people marked"; the usernames of the marked people are no longer displayed.
  • FIG. 4B also shows a "Friends" tag (the second tag).
  • FIG. 4C and FIG. 4D respectively show schematic diagrams of low interest value display pages according to some embodiments of the present disclosure.
  • FIG. 4C shows a situation where only one person is marked; the username of the marked person is displayed. In FIG. 4C there is no label corresponding to a relationship, that is, the fourth tag is "none".
  • Figure 4D shows a situation where multiple people are marked. Since multiple people are tagged, displaying all usernames would occupy too much of the display page, so only the number of tagged characters is displayed, for example "2 characters tagged". In FIG. 4D there is also no label corresponding to a relationship, that is, the fourth tag is "none".
  • the viewer's expected interest value in the video is not fixed; in addition to depending on the relationships between the viewer and the first user and the marked characters in the video, it may change with the viewer's behavior or other characteristics.
  • the viewer's expected interest value for the video determined according to the relationships can be adjusted according to the viewer's viewing time. For example, when it is detected that the stay time video_staytime of a viewer with a low expected interest value exceeds a threshold, such as the viewer watching the video for 5 seconds, the expected interest value can be adjusted from low to high, thereby adjusting the displayed page.
  • the viewer's expected interest value in the video may also be adjusted according to other features related to video browsing, so as to convert the displayed page from a display with a low interest value to a display with a high interest value.
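The watch-time adjustment above can be sketched as a one-way promotion from low to high interest; the categorical values and 5-second default mirror the example in the text, while the function shape is an assumption.

```python
def adjust_interest(interest, video_staytime, threshold_s=5.0):
    """Promote a low expected interest value to high once the viewer's
    stay time (seconds) on the video reaches the threshold."""
    if interest == "low" and video_staytime >= threshold_s:
        return "high"
    return interest

print(adjust_interest("low", 6.0))   # high
print(adjust_interest("low", 2.0))   # low
print(adjust_interest("high", 0.0))  # high
```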
  • Hotspots can be the same for high and low interest value displays.
  • a list of tagged people in the video is displayed. For example, when a viewer clicks on the avatar of a marked person, a floating layer with a list of marked people is displayed.
  • Figures 4E and 4F show the situations where a single person is marked and multiple people are marked, respectively.
  • the display page shows the avatar, nickname and username of the tagged person in the video, such as B, and displays the relationship between that person and the viewer, such as "friend".
  • buttons for the viewer to operate are also provided, such as the "private message (message)" button shown in FIG. 4E.
  • the display page shows the avatars, nicknames and usernames of multiple marked characters in the video, and displays the relationship between each of them and the viewer, such as "friends", "friends with the marked characters", "People you may know", "From your contacts". Buttons for the viewer to operate, such as "private message", "following" and "follow", are also set on the right side of these marked-people items, as shown in FIG. 4F.
  • the height of the above-mentioned floating layer can be adjusted to a certain proportion of the screen, for example, a maximum of 50%, so as to display more marked persons.
  • multiple tagged users can be sorted according to their relationship with the viewer, such as showing themselves first, then friends, then matching friends, then people they follow, and finally strangers.
  • multiple flagged users can also be displayed according to the flagging order.
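The relationship-based ordering described above (self first, then friends, matched friends, followed people, and finally strangers) can be sketched with a priority map; the relation keys are assumptions.

```python
PRIORITY = {"self": 0, "friend": 1, "matched_friend": 2,
            "following": 3, "stranger": 4}

def sort_tagged_list(tagged_with_relation):
    """Input: (username, relation) pairs. Python's sort is stable, so
    users with the same relation keep their tagging order."""
    ordered = sorted(tagged_with_relation, key=lambda item: PRIORITY[item[1]])
    return [name for name, _ in ordered]

print(sort_tagged_list([("S", "stranger"), ("F", "friend"), ("Me", "self")]))
# ['Me', 'F', 'S']
```

Displaying by tagging order, as the text also allows, simply skips the sort.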
  • the viewer can slide down to close the floating layer, or click the close button to return to the information flow display.
  • Notifications and pushes can also be sent to the user account of the tagged person after the video is published.
  • the tagged user receives the message "AA tagged you in the video”.
  • Figure 5A also shows that the marked time is "1 hour ago”.
  • the page shown in Fig. 5B is entered.
  • Figure 5B shows detailed notification and push content.
  • the first user or the second user may also be provided with an interactive page for editing the marked results.
  • FIG. 6 shows a flowchart of a video processing method according to some other embodiments of the present disclosure.
  • Figure 6A illustrates a display page when a video creator browses a published video, according to some embodiments of the present disclosure.
  • FIG. 6B shows a display page displaying tagged characters in a video provided with an "edit tagged characters" entry according to some embodiments of the present disclosure.
  • Figure 6C illustrates the "Edit Flagged People" page according to some embodiments of the present disclosure.
  • Figure 6D illustrates a sharing page according to some embodiments of the present disclosure.
  • compared with FIG. 1, the method shown in FIG. 6 further includes step S7. Only the differences between FIG. 6 and FIG. 1 are described below; the similarities are not repeated.
  • in step S7, after the video is published, in response to the first user's editing operation on the tagged result, the tags of the characters in the video are modified.
  • the first user may be a video creator.
  • FIG. 6A When the video creator browses the published videos, a display page as shown in FIG. 6A may be presented.
  • the creator of the video can click a position without an icon, as shown in FIG. 6A, to enter the display page of marked people in the video as shown in FIG. 6B.
  • in FIG. 6B, an "Edit Marked People" entry is set under the list of marked people in the video on the display page; after clicking it, the "Edit Marked People" page shown in Figure 6C can be entered for editing operations.
  • the video creator can also click the share button set in FIG. 6A , that is, the entry of “share floating layer”, to enter the sharing page as shown in FIG. 6D .
  • an "Edit Marked People" entry is also set in the "share floating layer", located before "Privacy Settings"; after clicking it, the "Edit Marked People" page shown in Figure 6C can likewise be entered for editing operations.
  • modifying the tags of people in the video may include at least one of the following: adding tags to untagged people in the video; deleting tags of tagged people in the video.
  • the first user can click the "X" in the upper right corner of each marked person as shown in FIG. 6C to cancel their "marked” status.
  • you can save the edited state and close the page.
  • the share button will change to a "Mark People” button, and the sharing button will open after clicking. Floating layer, the entrance of "marked person” will be fronted, for example, at the first place.
  • FIG. 7 shows a flow chart of a video processing method according to still other embodiments of the present disclosure.
  • FIG. 7A illustrates a display page shown when a second user browses a published video, according to some embodiments of the present disclosure.
  • FIG. 7B illustrates a "delete tag" page according to some embodiments of the present disclosure.
  • FIG. 7C illustrates an "add tag back" page according to some embodiments of the present disclosure.
  • FIG. 7D shows a sharing page according to other embodiments of the present disclosure.
  • FIG. 7 differs from FIG. 1 in that step S6 is further included. Only the differences between FIG. 7 and FIG. 1 are described below; the similarities are not repeated.
  • In step S6, after the video is published, the tags of people in the video are modified in response to the second user's editing operation on the tagging result.
  • The second user is different from the first user.
  • The second user may have a different identity.
  • The second user may or may not be a person appearing in the video. In the case where the second user is a person in the video, the second user may or may not have been tagged.
  • Different identities correspond to different editing permissions.
  • In the case where the second user is one of the tagged people in the video, the second user's modification of the tags of people in the video includes at least one of: adding tags to untagged people in the video; deleting the second user's own tag.
  • FIG. 7A shows the label "You have been tagged" and shows the number of tagged people together with some of their avatars.
  • The second user can click a position without an icon, as shown in FIG. 7A, to enter the delete-tag page shown in FIG. 7B.
  • As a tagged person, the second user can see a "Delete Tag" or "Untag" button to the right of his or her own user name in the list of tagged people shown in FIG. 7B; clicking the button deletes the second user's own tag. After this operation, the display page prompts "You have deleted your own tag from the video". When the second user revisits the list of tagged people, he or she is no longer in it.
  • Accordingly, the display page shown in FIG. 7B changes to the display page shown in FIG. 7C, and the "Delete Tag" or "Untag" button becomes an "Add Tag Back" button, which, when clicked, tags the second user again.
  • The second user can also use the "share floating layer" to enter the sharing page shown in FIG. 7D.
  • The "Delete Tag" or "Untag" button is provided in the "share floating layer", located at the first position. Clicking it also opens the display interface shown in FIG. 7B, where the tag can be deleted.
  • In the case where the second user is not a tagged person in the video, the second user's modification of the tags of people in the video includes: adding tags to untagged people in the video.
  • In addition to editing the tagging result, the second user can also add friends from the tagged video, as shown in FIG. 8.
  • FIG. 8 shows friends matched with the video creator. For example, FIG. 8 shows who follows the video creator, and the following information lists the tagged people on a new row, for example, three people are tagged. A viewer can click the corresponding person to add that person as a friend.
  • FIG. 8 may adopt the low-interest-value display page, showing only user names, to avoid avatars overlapping on a page that contains both the add-friend button and the tagged people.
  • FIG. 9 shows a block diagram of a video processing apparatus according to some embodiments of the present disclosure.
  • The video processing apparatus 9 includes: a display 91 configured to provide a first user with an interactive interface for tagging people in a video; and a processor 92 configured to receive a tagging operation, input by the first user through the interactive interface, on at least one person in the video.
  • The display 91 is further configured to, in response to the first user's tagging operation, display the tagging result outside the video display interface in an information stream when the video is published on the social network.
  • Although not shown, the apparatus may also include a memory that can store various information generated in operation by the video processing apparatus and the units it contains, as well as programs and data used for operation.
  • the memory can be volatile memory and/or non-volatile memory.
  • memory may include, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read only memory (ROM), flash memory.
  • The memory may also be located outside the video processing apparatus.
  • FIG. 10 shows a block diagram of a video processing apparatus according to other embodiments of the present disclosure.
  • The video processing apparatus 10 can be various types of equipment, including but not limited to mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and stationary terminals such as digital TVs and desktop computers.
  • As shown in FIG. 10, the video processing apparatus 10 includes: a memory 101 and a processor 102 coupled to the memory 101. It should be noted that the components of the video processing apparatus 10 shown in FIG. 10 are exemplary rather than limiting; according to actual application requirements, the video processing apparatus 10 may also have other components. The processor 102 can control other components in the video processing apparatus 10 to perform desired functions.
  • The memory 101 is used to store one or more computer-readable instructions.
  • The processor 102 is used to run the computer-readable instructions; when run by the processor 102, the computer-readable instructions implement the method according to any of the foregoing embodiments.
  • For the specific implementation and related explanation of each step of the method, reference may be made to the above-mentioned embodiments; repeated descriptions are omitted here.
  • processor 102 and the memory 101 may directly or indirectly communicate with each other.
  • processor 102 and memory 101 may communicate over a network.
  • a network may include a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the processor 102 and the memory 101 may also communicate with each other through a system bus, which is not limited in the present disclosure.
  • The processor 102 can be embodied as various suitable processors or processing devices, such as a central processing unit (CPU), a graphics processing unit (GPU), or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • the central processing unit (CPU) may be an X86 or ARM architecture or the like.
  • memory 101 may include any combination of various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the memory 101 may include, for example, a system memory, and the system memory stores, for example, an operating system, an application program, a boot loader (Boot Loader), a database, and other programs.
  • Various application programs, various data, and the like can also be stored in the storage medium.
  • In FIG. 11, a central processing unit (CPU) 1101 executes various processes according to programs stored in a read-only memory (ROM) 1102 or loaded from a storage section 1108 into a random access memory (RAM) 1103. The RAM 1103 also stores, as needed, data required when the CPU 1101 executes various processes.
  • the central processing unit is only exemplary, and it may also be other types of processors, such as the various processors mentioned above.
  • ROM 1102, RAM 1103, and storage portion 1108 may be various forms of computer-readable storage media, as described below. It should be noted that although the ROM 1102, the RAM 1103, and the storage portion 1108 are shown separately in FIG. 11, one or more of them may be combined or located in the same or different memory or storage modules.
  • the CPU 1101, ROM 1102, and RAM 1103 are connected to each other via a bus 1104.
  • An input/output interface 1105 is also connected to the bus 1104 .
  • The following components are connected to the input/output interface 1105: an input section 1106, such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output section 1107, including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage section 1108, including a hard disk, magnetic tape, and the like; and a communication section 1109, including a network interface card such as a LAN card or a modem.
  • The communication section 1109 allows communication processing to be performed via a network such as the Internet. Although FIG. 11 shows the devices or modules in the electronic device 1100 communicating through the bus 1104, they may also communicate through a network or other means, where the network may include a wireless network, a wired network, and/or any combination of wireless and wired networks.
  • As needed, a drive 1110 is also connected to the input/output interface 1105.
  • A removable medium 1111, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read from it is installed into the storage section 1108 as needed.
  • the programs constituting the software can be installed from a network such as the Internet or a storage medium such as the removable medium 1111.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from the network via the communication means 1109, or from the storage portion 1108, or from the ROM 1102.
  • When the computer program is executed by the CPU 1101, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • In the context of the present disclosure, a computer-readable medium may be a tangible medium that contains or stores a program for use by, or in conjunction with, an instruction execution system, apparatus, or device.
  • a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein.
  • Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • a computer program including: instructions, which when executed by a processor cause the processor to execute the method of any one of the above embodiments.
  • instructions may be embodied as computer program code.
  • The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; these programming languages include but are not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (such as through an Internet connection).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The modules, components, or units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a module, component, or unit does not, under certain circumstances, constitute a limitation on the module, component, or unit itself.
  • Exemplary hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.


Abstract

The present disclosure relates to a video processing method, a video processing apparatus, and a computer-readable storage medium. The video processing method includes: providing a first user with an interactive interface for tagging people in a video; receiving a tagging operation, input by the first user through the interactive interface, on at least one person in the video; and, in response to the first user's tagging operation, displaying the tagging result outside the video display interface in an information stream when the video is published on a social network.

Description

Video processing method, video processing apparatus, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, the Chinese application with application No. 202111139005.9 filed on September 27, 2021 and the Chinese application with application No. 202210142810.5 filed on February 16, 2022, the disclosures of which are incorporated herein by reference in their entirety.
TECHNICAL FIELD
The present disclosure relates to the field of computer technology, and in particular to a video processing method, a video processing apparatus, and a computer-readable storage medium.
BACKGROUND
A social network can provide various services based on user input, such as photo or video sharing and messaging, to facilitate social interaction between users.
Through interaction with the social network, users can upload digital media to the system for others to browse. Digital media may include images, video, audio, text, and so on. For example, a user may publish a self-created video on the social network and initiate interaction with other users through operations such as mentions. Other users on the social network can interact with the video creator by browsing, liking, commenting, and so on.
As users become increasingly reliant on social networks, their expectations for the social network experience also rise.
SUMMARY
According to some embodiments of the present disclosure, a video processing method is provided, including:
providing a first user with an interactive interface for tagging people in a video;
receiving a tagging operation, input by the first user through the interactive interface, on at least one person in the video; and
in response to the first user's tagging operation, displaying the tagging result outside the video display interface in an information stream when the video is published on a social network.
According to other embodiments of the present disclosure, a video processing apparatus is provided, including:
a display configured to provide a first user with an interactive interface for tagging people in a video; and
a processor configured to receive a tagging operation, input by the first user through the interactive interface, on at least one person in the video,
wherein the display is further configured to, in response to the first user's tagging operation, display the tagging result outside the video display interface in an information stream when the video is published on a social network.
According to still other embodiments of the present disclosure, a video processing apparatus is provided, including:
a memory; and
a processor coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, one or more steps of the video processing method of any of the embodiments described in the present disclosure.
According to still further embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program performs the video processing method of any of the embodiments described in the present disclosure.
This summary is provided to introduce concepts in a brief form; these concepts are described in detail in the detailed description that follows. This summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
Other features, aspects, and advantages of the present disclosure will become apparent from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present disclosure are described below with reference to the accompanying drawings. The drawings described here are provided for a further understanding of the present disclosure; together with the following detailed description, they are included in and form part of this specification and serve to explain the present disclosure. It should be understood that the drawings in the following description relate only to some embodiments of the present disclosure and do not limit it. In the drawings:
FIG. 1 shows a flow chart of a video processing method according to some embodiments of the present disclosure;
FIG. 2 shows a schematic diagram of an interactive interface according to some embodiments of the present disclosure;
FIGS. 2A and 2B respectively show interactive pages that present a tagging-introduction floating layer in different ways, according to some embodiments of the present disclosure;
FIG. 3A shows a schematic diagram of the "Tag People" page in a search state according to some embodiments of the present disclosure;
FIG. 3B shows a schematic diagram of the "Tag People" page in a recommendation state according to some embodiments of the present disclosure;
FIG. 3C shows a schematic diagram of the "Tag People" page displaying a "Tagged People" list according to some embodiments of the present disclosure;
FIGS. 3D and 3E respectively show the interactive interface before a video is published, in the cases where a single person is tagged and where multiple people are tagged, according to some embodiments of the present disclosure;
FIG. 3F shows a video preview page before a video is published according to some embodiments of the present disclosure;
FIGS. 4A and 4B respectively show schematic diagrams of high-interest-value display pages according to some embodiments of the present disclosure;
FIGS. 4C and 4D respectively show schematic diagrams of low-interest-value display pages according to some embodiments of the present disclosure;
FIGS. 4E and 4F respectively show, according to some embodiments of the present disclosure, a floating layer with a list of tagged people in the cases where a single person is tagged and where multiple people are tagged;
FIG. 5A shows a page for sending a notification and a push message to the user account of a tagged person after the video is published, according to some embodiments of the present disclosure;
FIG. 5B shows the detailed content of the notification and push message in FIG. 5A;
FIG. 6 shows a flow chart of a video processing method according to other embodiments of the present disclosure;
FIG. 6A shows a display page shown when the video creator browses a published video according to some embodiments of the present disclosure;
FIG. 6B shows a display page that lists the tagged people in a video and provides an "Edit Tagged People" entry, according to some embodiments of the present disclosure;
FIG. 6C shows the "Edit Tagged People" page according to some embodiments of the present disclosure;
FIG. 6D shows a sharing page according to some embodiments of the present disclosure;
FIG. 7 shows a flow chart of a video processing method according to still other embodiments of the present disclosure;
FIG. 7A shows a display page shown when a second user browses a published video according to some embodiments of the present disclosure;
FIG. 7B shows a "delete tag" page according to some embodiments of the present disclosure;
FIG. 7C shows an "add tag back" page according to some embodiments of the present disclosure;
FIG. 7D shows a sharing page according to other embodiments of the present disclosure;
FIG. 8 shows a display page for adding friends from a tagged video according to some embodiments of the present disclosure;
FIG. 9 shows a block diagram of a video processing apparatus according to some embodiments of the present disclosure;
FIG. 10 shows a block diagram of a video processing apparatus according to other embodiments of the present disclosure;
FIG. 11 shows a block diagram of an electronic device according to some embodiments of the present disclosure.
It should be understood that, for ease of description, the dimensions of the parts shown in the drawings are not necessarily drawn to actual scale. The same or similar reference numerals are used in the drawings to denote the same or similar components. Therefore, once an item is defined in one drawing, it may not be discussed further in subsequent drawings.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The following description of the embodiments is in fact merely illustrative and in no way limits the present disclosure or its application or use. It should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect. Unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments should be construed as merely exemplary and do not limit the scope of the present disclosure.
The term "comprising" and its variants as used in the present disclosure are open terms meaning "including at least" the following elements/features without excluding other elements/features, i.e., "including but not limited to". In addition, the term "containing" and its variants as used in the present disclosure are open terms meaning "containing at least" the following elements/features without excluding other elements/features, i.e., "containing but not limited to". Therefore, "comprising" and "containing" are synonymous. The term "based on" means "based at least in part on".
Throughout the specification, references to "one embodiment", "some embodiments", or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. For example, the term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Moreover, occurrences of the phrases "in one embodiment", "in some embodiments", or "in an embodiment" in various places throughout the specification do not necessarily all refer to the same embodiment, although they may.
Note that the concepts "first", "second", and the like mentioned in the present disclosure are used only to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units. Unless otherwise specified, the concepts "first", "second", and the like are not intended to imply that the objects so described must be in a given order in time, space, ranking, or any other manner.
Note that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
The embodiments of the present disclosure are described in detail below with reference to the drawings, but the present disclosure is not limited to these specific embodiments. The specific embodiments below may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. In addition, in one or more embodiments, particular features, structures, or characteristics may be combined in any suitable manner that will be clear to those of ordinary skill in the art from the present disclosure.
It should be understood that the present disclosure likewise places no limitation on how the image or video to be applied/processed is obtained. In one embodiment of the present disclosure, it may be obtained from a storage device, such as an internal memory or an external storage device; in another embodiment of the present disclosure, a camera assembly may be invoked to shoot it. It should be noted that, in the context of this specification, the type of the image or video is not specifically limited. In addition, the image or video may be an original image or video obtained by a camera device, or an image or video on which specific processing has been performed on the original image or video, such as preliminary filtering, anti-aliasing, color adjustment, contrast adjustment, normalization, and so on. It should be noted that the preprocessing operations may also include other types of preprocessing operations known in the art, which are not described in detail here.
As users become increasingly reliant on social networks, their expectations for the social network experience also rise. To further improve the experience on a social network, a people-tagging function is introduced for videos on the social network. Once the tagging function is enabled, a creator will be able to tag the people in a video and continue to edit the tagging result after the video is published, for example, by adding tags, deleting tags, or replacing tags.
FIG. 1 shows a flow chart of a video processing method according to some embodiments of the present disclosure.
As shown in FIG. 1, the video processing method includes: step S1, providing a first user with an interactive interface for tagging people in a video; step S3, receiving a tagging operation, input by the first user through the interactive interface, on at least one person in the video; and step S5, in response to the first user's tagging operation, displaying the tagging result outside the video display interface in an information stream when the video is published on a social network.
In step S1, the first user is, for example, the video creator. The interactive interface is, for example, a video publishing page.
FIG. 2 shows a schematic diagram of an interactive interface according to some embodiments of the present disclosure. As shown in FIG. 2, a "tag people" entry is provided below the title on the video publishing page; clicking it opens the "Tag People" page, where tagging operations can be performed. After entering the "Tag People" page, the first user can select the corresponding people to tag.
For users unfamiliar with the tagging function, for example users using a social network with the tagging function for the first time, a floating layer introducing the tagging function can also be provided. The floating layer introducing the tagging function can also be shown when a device first enters a version of the social network that has the tagging function.
FIGS. 2A and 2B respectively show interactive pages that present the tagging-introduction floating layer in different ways, but the content of the floating layer is the same in both: the title "Tag people in your video"; and the explanations "People you tag are visible to anyone who can watch this video" and "You can edit tagged people after the video is posted. People you tag can also remove themselves."
Clicking the confirmation button "OK", the close button "×", or the top mask area in FIGS. 2A and 2B closes the tagging-introduction floating layer.
FIG. 3A shows a schematic diagram of the "Tag People" page in a search state according to some embodiments of the present disclosure. The first user can use the search box on the page shown in FIG. 3A to search for people to tag. The search scope is all users, except those blocked by the first user and those who have blocked the first user. Search results are displayed in real time according to the input text.
The page displays the avatar, nickname, and user name of each user found. For example, as shown in FIG. 3A, if the letter "a" is entered in the search box, the page displays all user names starting with "a" or "A", together with the avatar and nickname of each corresponding user. In some embodiments, the page also displays the relationship between each user and the first user, for example, friend or following.
The first user can click any user to put that user into the "selected" state and add the user to the "tagged people list". For example, as shown in FIG. 3A, the first user selects user C as the tag for a person, that is, a person appearing in the video is tagged as "C".
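The search behavior described above (all users are searchable except those the first user has blocked and those who have blocked the first user, with results matched against the typed text in real time) can be sketched as follows. This is a minimal illustrative sketch, not the platform's actual implementation; the function name and record fields are assumptions.

```python
def search_taggable_users(query, all_users, blocked_by_me, blocked_me):
    """Return candidate users whose user name starts with the query.

    Users in either block list are excluded, mirroring the search
    scope described above. Matching is case-insensitive, so "a"
    finds both "alice" and "Adam".
    """
    excluded = set(blocked_by_me) | set(blocked_me)
    q = query.lower()
    return [
        u for u in all_users
        if u["username"] not in excluded
        and u["username"].lower().startswith(q)
    ]
```

In practice such a filter would run server-side against an index on each keystroke; the list comprehension here simply makes the exclusion rule explicit.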
In some embodiments, the first user can also select the corresponding person to tag from a recommended tag list.
FIG. 3B shows a schematic diagram of the "Tag People" page in a recommendation state according to some embodiments of the present disclosure. As shown in FIG. 3B, the recommended tag list can be divided into three areas: Recent, Friends, and Following.
In some embodiments, the "Recent" area can in turn include two lists, namely a "recently tagged people" list and a "recently messaged people" list, where the "recently tagged people" list is arranged before the "recently messaged people" list; that is, the "recently tagged people" list is shown first, followed by the "recently messaged people" list. The two lists can each be sorted by the time of last interaction, and the total number of people shown can be set according to page display needs, for example, at most 10 people.
The "Friends" and "Following" lists respectively list the people who are mutual friends with the first user and the people the first user follows, and can also be sorted by initial letter. As shown in FIG. 3B, each recommended user is shown with an avatar, nickname, and user name.
After the first user clicks any user, that user is put into the "selected" state and added to the "tagged people list". When there are tagged people, the page shown in FIG. 3C appears.
FIG. 3C shows a schematic diagram of the "Tag People" page displaying a "Tagged People" list according to some embodiments of the present disclosure. As shown in FIG. 3C, the "Tagged People" list is in the area below the recommended tag list; this area is shown only when there are tagged people, and can show all tagged people sorted by update time from earliest to latest.
In some embodiments, the first user can click the "×" in the upper-right corner of each tagged person to cancel that person's "selected" state. Clicking the "Done (X)" button saves the tagging state and closes the page, where X is the number of tagged people.
After the first user completes the tagging operations on the people in the video through the interactive interface, the interactive interface before the video is published is as shown in FIGS. 3D and 3E. FIGS. 3D and 3E respectively show the cases where a single person is tagged and where multiple people are tagged.
As shown in FIG. 3D, the avatar of the tagged person is displayed next to the "Tag People" button. In the case where the number of tagged people is M, the video publishing page displays the avatars of N tagged people and the difference X between the number of tagged people and the number of displayed avatars, where M is a positive integer greater than 1, N is a positive integer greater than 1, M is greater than N, and X = M - N. As shown in FIG. 3E, M = 4, and the avatars of 2 tagged people and "+2" are displayed next to the "Tag People" button.
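The avatar display rule above (show the avatars of N tagged people plus an overflow count X = M - N) can be sketched as follows, assuming a display limit of N = 2 as in the FIG. 3E example; the function name is illustrative.

```python
def tagged_people_summary(tagged_usernames, max_avatars=2):
    """Return (avatars_to_show, overflow_label) for the publish page.

    With M tagged people and N displayed avatars, the overflow is
    X = M - N, rendered as "+X" (e.g. "+2" when M = 4 and N = 2).
    An empty label means all tagged people's avatars are shown.
    """
    m = len(tagged_usernames)
    shown = tagged_usernames[:max_avatars]
    x = m - len(shown)
    label = f"+{x}" if x > 0 else ""
    return shown, label
```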
Before the video is published, a video preview page with a cover can also be presented, as shown in FIG. 3F. Although there is no person in the cover, there are people in the video to be published; for example, when people have been tagged in the video, say 3 people, the video preview page displays the tagging result, providing an experience consistent with the page on which the video is actually browsed.
After the first user completes the tagging operations on the people in the video through the interactive interface, in response to the first user's operation of publishing the video, the tagging result is displayed outside the video display interface in an information stream when the video is published on the social network. The display page includes not only the tagging result of the video but may also include other information streams, and how these information streams are displayed depends on the viewer's expected interest value. That is, the video can be displayed in a corresponding information stream according to the viewer's expected interest value for the video.
In some embodiments, the viewer's expected interest value for the video can be determined according to the relationships between the viewer and the first user and the tagged people in the video. Different information-stream display modes can be selected according to the expected interest value.
For example, when the viewer's expected interest value for the video is greater than or equal to a threshold, the avatar of at least one tagged person in the video is displayed; when the viewer's expected interest value for the video is less than the threshold, the user name of at least one tagged person in the video is displayed.
In addition, the label of the video displayed to the viewer can also be determined according to the relationships between the viewer and the first user and the tagged people in the video.
In some embodiments, when the viewer is one of the tagged people in the video, the label is determined to be a first label, i.e., the video can be displayed with the first label; when the viewer is not one of the tagged people in the video but has an association relationship with the first user and with the tagged people in the video, the label is determined to be a second label, i.e., the video is displayed with the second label; when the viewer is not one of the tagged people in the video and has no association relationship with the first user but has an association relationship with the tagged people in the video, the label is determined to be a third label, i.e., the video can be displayed with the third label; when the viewer is not one of the tagged people in the video and has no association relationship with either the first user or the tagged people in the video, the label is determined to be a fourth label, i.e., the video is displayed with the fourth label.
The first label is, for example, "You were tagged in the video". The second label is, for example, "Friend / someone you follow". The third label is, for example, "A friend / someone you follow was tagged". The fourth label is, for example, "no association relationship", or "low interest value", or "none". Association relationships include friend or following. When the fourth label is "none", the video is displayed without any special label.
In the cases of the first, second, and third labels, i.e., when the viewer is expected to have a high interest value for the video, the avatar of at least one tagged person in the video can be displayed. In the case of the fourth label, i.e., when the viewer is expected to have a low interest value for the video, the user name of at least one tagged person in the video can be displayed. In both the expected high-interest-value and low-interest-value cases, different display pages can be used depending on the number of tagged people in the video.
It should be understood that the video is displayed to the viewer with the first, second, or third label because the viewer is expected to have a high interest value for the video, whereas the video is displayed to the viewer with the fourth label because the viewer is expected to be uninterested in the video, i.e., to have a low interest value, meaning that the viewer has no friend/following/matched-friend relationship with the video creator or with anyone in the video. When the video creator and the viewer are matched friends and the video creator is displayed in card form, the label entry is displayed in a new row as a low interest value.
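The four label cases above reduce to a small decision function. The sketch below is illustrative only: the predicate `related` stands in for the "association relationship" (friend or following), the returned strings are stand-ins for the labels described in the text, and combinations the text does not enumerate default to the fourth label here.

```python
def determine_label(viewer, creator, tagged, related):
    """Pick the feed label for a video, following the four cases above.

    `related(a, b)` returns True when a and b have an association
    relationship (friend or following). `tagged` is the list of
    tagged people in the video.
    """
    if viewer in tagged:
        return "first"   # e.g. "You were tagged in the video"
    rel_creator = related(viewer, creator)
    rel_tagged = any(related(viewer, t) for t in tagged)
    if rel_creator and rel_tagged:
        return "second"  # e.g. "Friend / someone you follow"
    if not rel_creator and rel_tagged:
        return "third"   # e.g. "A friend / someone you follow was tagged"
    # Cases not enumerated in the text fall back to the fourth label.
    return "fourth"      # low interest value / no label
```

The first three labels correspond to the expected high-interest-value display (avatars shown); the fourth to the low-interest-value display (user names only).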
FIGS. 4A and 4B respectively show schematic diagrams of high-interest-value display pages according to some embodiments of the present disclosure.
FIG. 4A shows the case where only one person is tagged. As shown in FIG. 4A, when only one person is tagged, the avatar and user name of the tagged person are displayed. If the tagged person's user name is too long, the tag entry and creation time (if any) can be displayed on a new line. In this case, if the tagged user's user name is still too long, the avatar remains displayed and the user name is truncated with the symbol "…".
FIG. 4A also shows the label "A friend was tagged" (the third label). When clicked, the display page can open the tag-list floating layer.
FIG. 4B shows the case where multiple people are tagged. As shown in FIG. 4B, since multiple people are tagged, displaying all of their avatars and user names would take up too much of the display page; therefore, only the avatars of some tagged people (for example, 2 people) are displayed, together with the number of people tagged, for example "5 people" tagged, and the user names of the tagged people are no longer displayed. FIG. 4B also shows the "Friend" label (the second label).
FIGS. 4C and 4D respectively show schematic diagrams of low-interest-value display pages according to some embodiments of the present disclosure.
FIG. 4C shows the case where only one person is tagged. As shown in FIG. 4C, when only one person is tagged, the user name of the tagged person is displayed. In FIG. 4C, there is no label corresponding to a relationship; that is, the fourth label is "none".
FIG. 4D shows the case where multiple people are tagged. As shown in FIG. 4D, since multiple people are tagged, displaying all of their user names would take up too much of the display page; therefore, the page only shows that multiple people are tagged, for example "2 people" tagged. In FIG. 4D, there is likewise no label corresponding to a relationship; that is, the fourth label is also "none".
The viewer's expected interest value for the video is not fixed; besides depending on the relationships between the viewer and the first user and the tagged people in the video, it may also change with the viewer's behavior or other characteristics.
In some embodiments, the viewer's expected interest value for the video, as determined according to the relationships, can be adjusted according to the viewer's browsing duration for the video. For example, when it is detected that the dwell time video_staytime of a viewer expected to have a low interest value exceeds a threshold, such as when the user has watched the video for 5 seconds, the expected low interest value can be adjusted to a high interest value, thereby adjusting the display page.
In other embodiments, the viewer's expected interest value for the video can also be adjusted according to other characteristics related to video browsing, thereby switching the display page from the low-interest-value display to the high-interest-value display.
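The dwell-time adjustment described above (a viewer expected to have a low interest value is upgraded once video_staytime passes a threshold) might look like the following sketch. The 5-second threshold comes from the example in the text; everything else here is an illustrative assumption.

```python
LOW, HIGH = "low", "high"

def adjust_interest(expected_interest, video_staytime, threshold_s=5.0):
    """Upgrade a low expected interest value to high once the viewer's
    dwell time on the video (in seconds) reaches the threshold.
    A value that is already high is left unchanged."""
    if expected_interest == LOW and video_staytime >= threshold_s:
        return HIGH
    return expected_interest
```

After the upgrade, the page would switch from the low-interest-value display (user names only) to the high-interest-value display (avatars).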
When a viewer wants to see who is in the video, the viewer can click a hot zone such as an avatar or person icon to view the people in the video. The hot zone can be the same for the high-interest-value display and the low-interest-value display. In response to the viewer's click on the tagging result in the video, a tag list of the people in the video is displayed. For example, when the viewer clicks a tagged person's avatar, a floating layer with the list of tagged people is displayed. FIGS. 4E and 4F respectively show the cases where a single person is tagged and where multiple people are tagged.
As shown in FIG. 4E, the display page shows the avatar, nickname, and user name of the tagged person in the video, such as B, and displays the relationship with the viewer, such as "Friend". A button for the viewer to operate, such as the "message" button shown in FIG. 4E, is also provided to the right of these items of the tagged person.
As shown in FIG. 4F, the display page shows the avatars, nicknames, and user names of multiple tagged people in the video, and displays their relationships with the viewer, such as "Friend", "Friend of a tagged person", "People you may know", and "From your contacts". Buttons for the viewer to operate, such as the "message", "following", and "follow" buttons shown in FIG. 4F, are also provided to the right of these items of the tagged people.
The height of the above floating layer can be adjusted to a certain proportion of the screen, for example at most 50%, to display more tagged people.
In the floating layer, multiple tagged users can be sorted according to their relationships with the viewer, for example, the viewer himself or herself is displayed first, then friends, next matched friends, then people the viewer follows, and finally strangers. Of course, multiple tagged users can also be displayed in tagging order.
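The ordering of tagged users in the floating layer (self first, then friends, then matched friends, then people the viewer follows, then strangers) is a plain priority sort; a sketch follows, with illustrative relationship strings that are assumptions, not terms from the disclosure.

```python
# Priority of each relationship to the viewer; lower sorts earlier.
_RELATION_RANK = {
    "self": 0,
    "friend": 1,
    "matched_friend": 2,
    "following": 3,
    "stranger": 4,
}

def sort_tagged_users(tagged_users):
    """Sort (username, relation) pairs by the priority above.

    Python's sort is stable, so users with the same relation keep
    their original order, which here is the tagging order.
    """
    return sorted(tagged_users, key=lambda u: _RELATION_RANK[u[1]])
```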
The viewer can swipe down to close the floating layer, or click the close button to return to the information-stream display.
After the video is published, a notification and a push message can also be sent to the user account of a tagged person. As shown in FIG. 5A, the tagged user receives the message "AA tagged you in a video". FIG. 5A also shows the tagging time as "1 hour ago". After the corresponding operation is performed according to the "swipe up to open" indication shown in FIG. 5A, the page shown in FIG. 5B is entered. FIG. 5B shows the detailed notification and push content.
After the video is published, an interactive page for editing the tagging result can also be provided to the first user or the second user.
The first user's editing of the tagging result after the video is published is described below with reference to FIG. 6 and FIGS. 6A-6D. FIG. 6 shows a flow chart of a video processing method according to other embodiments of the present disclosure. FIG. 6A shows a display page shown when the video creator browses a published video according to some embodiments of the present disclosure. FIG. 6B shows a display page that lists the tagged people in a video and provides an "Edit Tagged People" entry, according to some embodiments of the present disclosure. FIG. 6C shows the "Edit Tagged People" page according to some embodiments of the present disclosure. FIG. 6D shows a sharing page according to some embodiments of the present disclosure.
FIG. 6 differs from FIG. 1 in that step S7 is further included. Only the differences between FIG. 6 and FIG. 1 are described below; the similarities are not repeated.
In step S7, after the video is published, the tags of people in the video are modified in response to the first user's editing operation on the tagging result. As mentioned above, the first user can be the video creator.
When the video creator browses a published video, a display page as shown in FIG. 6A can be presented. The video creator can click a position without an icon, as shown in FIG. 6A, to enter the display page of tagged people in the video as shown in FIG. 6B. As shown in FIG. 6B, an "Edit Tagged People" entry is provided below the list of tagged people in the video on the display page; clicking it opens the "Edit Tagged People" page shown in FIG. 6C, where editing operations can be performed.
In some embodiments, the video creator can also click the share button provided in FIG. 6A, that is, the entry of the "share floating layer", to enter the sharing page shown in FIG. 6D. As shown in FIG. 6D, the "Edit Tagged People" entry is provided in the "share floating layer", located before "Privacy Settings"; clicking it also opens the "Edit Tagged People" page shown in FIG. 6C, where editing operations can be performed.
After entering the "Edit Tagged People" page, the first user can edit the list of tagged people. Modifying the tags of people in the video can include at least one of the following: adding tags to untagged people in the video; deleting the tags of tagged people in the video.
For example, the first user can click the "×" in the upper-right corner of each tagged person, as shown in FIG. 6C, to cancel that person's "tagged" state. Clicking the "Done" button saves the edited state and closes the page.
In some embodiments, if a published video is recognized as involving "multiple people", i.e., there are multiple people in the video frame but no one has been tagged yet, the share button changes to a "Tag People" button; clicking it opens the share floating layer, in which the "Tag People" entry is moved to the front, for example to the first position.
The second user's editing of the tagging result after the video is published is described below with reference to FIG. 7 and FIGS. 7A-7D. FIG. 7 shows a flow chart of a video processing method according to still other embodiments of the present disclosure. FIG. 7A shows a display page shown when a second user browses a published video according to some embodiments of the present disclosure. FIG. 7B shows a "delete tag" page according to some embodiments of the present disclosure. FIG. 7C shows an "add tag back" page according to some embodiments of the present disclosure. FIG. 7D shows a sharing page according to other embodiments of the present disclosure.
FIG. 7 differs from FIG. 1 in that step S6 is further included. Only the differences between FIG. 7 and FIG. 1 are described below; the similarities are not repeated. In step S6, after the video is published, the tags of people in the video are modified in response to the second user's editing operation on the tagging result.
The second user is different from the first user. The second user may have a different identity. The second user may or may not be a person in the video. In the case where the second user is a person in the video, the second user may or may not have been tagged. Different identities correspond to different editing permissions.
In the case where the second user is one of the tagged people in the video, the second user's modification of the tags of people in the video includes at least one of: adding tags to untagged people in the video; deleting the second user's own tag.
Unlike for the first user, when the second user browses a published video, the display page shown in FIG. 7A is presented. FIG. 7A shows the label "You have been tagged" and shows the number of tagged people together with some of their avatars. The second user can click a position without an icon, as shown in FIG. 7A, to enter the delete-tag page shown in FIG. 7B.
As a tagged person, the second user can see a "Delete Tag" or "Untag" button provided to the right of his or her own user name in the list of tagged people in the video shown in FIG. 7B; clicking the button deletes the second user's own tag. After this operation, the display page prompts "You have deleted your own tag from the video". When the second user revisits the list of tagged people, he or she is no longer in it.
Accordingly, the display page shown in FIG. 7B changes to the display page shown in FIG. 7C, and the "Delete Tag" or "Untag" button becomes an "Add Tag Back" button, which, when clicked, tags the second user again.
In some embodiments, the second user can also use the "share floating layer" to enter the sharing page shown in FIG. 7D. As shown in FIG. 7D, the "Delete Tag" or "Untag" button is provided in the "share floating layer", located at the first position; clicking it also opens the display interface shown in FIG. 7B, where the tag can be deleted.
In the case where the second user is not a tagged person in the video, the second user's modification of the tags of people in the video includes: adding tags to untagged people in the video.
In addition to editing the tagging result, the second user can also add friends from the tagged video, as shown in FIG. 8. FIG. 8 shows friends matched with the video creator. For example, FIG. 8 shows who follows the video creator, and the following information lists the tagged people on a new row, for example, 3 people are tagged. The viewer can click the corresponding person to add that person as a friend. FIG. 8 can adopt the low-interest-value display page, showing only user names, to avoid avatars overlapping on a page that contains both the add-friend button and the tagged people.
FIG. 9 shows a block diagram of a video processing apparatus according to some embodiments of the present disclosure.
As shown in FIG. 9, the video processing apparatus 9 includes: a display 91 configured to provide a first user with an interactive interface for tagging people in a video; and a processor 92 configured to receive a tagging operation, input by the first user through the interactive interface, on at least one person in the video.
The display 91 is further configured to, in response to the first user's tagging operation, display the tagging result outside the video display interface in an information stream when the video is published on the social network.
In addition, although not shown, the apparatus may also include a memory that can store various information generated in operation by the video processing apparatus and the units it contains, as well as programs and data used for operation. The memory may be a volatile memory and/or a non-volatile memory. For example, the memory may include, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read-only memory (ROM), and flash memory. The memory may, of course, also be located outside the video processing apparatus.
FIG. 10 shows a block diagram of a video processing apparatus according to other embodiments of the present disclosure.
In some embodiments, the video processing apparatus 10 can be various types of equipment, including but not limited to mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and stationary terminals such as digital TVs and desktop computers.
As shown in FIG. 10, the video processing apparatus 10 includes: a memory 101 and a processor 102 coupled to the memory 101. It should be noted that the components of the video processing apparatus 10 shown in FIG. 10 are exemplary rather than limiting; according to actual application requirements, the video processing apparatus 10 may also have other components. The processor 102 can control other components in the video processing apparatus 10 to perform desired functions.
In some embodiments, the memory 101 is used to store one or more computer-readable instructions. The processor 102 is used to run the computer-readable instructions; when run by the processor 102, the computer-readable instructions implement the method according to any of the embodiments described above. For the specific implementation and related explanation of each step of the method, reference may be made to the above embodiments; repeated descriptions are omitted here.
For example, the processor 102 and the memory 101 may communicate with each other directly or indirectly. For example, the processor 102 and the memory 101 may communicate over a network. The network may include a wireless network, a wired network, and/or any combination of wireless and wired networks. The processor 102 and the memory 101 may also communicate with each other through a system bus, which is not limited in the present disclosure.
For example, the processor 102 can be embodied as various suitable processors or processing devices, such as a central processing unit (CPU), a graphics processing unit (GPU), or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The central processing unit (CPU) may be of an X86 or ARM architecture, for example. For example, the memory 101 may include any combination of various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The memory 101 may include, for example, a system memory that stores, for example, an operating system, application programs, a boot loader, a database, and other programs. Various application programs and various data can also be stored in the storage medium.
In addition, according to some embodiments of the present disclosure, when the various operations/processes according to the present disclosure are implemented by software and/or firmware, a program constituting the software can be installed from a storage medium or a network into a computer system with a dedicated hardware structure, for example the computer system of the electronic device 1100 shown in FIG. 11; when various programs are installed, the computer system can perform various functions, including functions such as those described above.
In FIG. 11, a central processing unit (CPU) 1101 executes various processes according to programs stored in a read-only memory (ROM) 1102 or loaded from a storage section 1108 into a random access memory (RAM) 1103. The RAM 1103 also stores, as needed, data required when the CPU 1101 executes various processes. The central processing unit is merely exemplary; it may also be another type of processor, such as the various processors described above. The ROM 1102, the RAM 1103, and the storage section 1108 may be various forms of computer-readable storage media, as described below. It should be noted that although the ROM 1102, the RAM 1103, and the storage section 1108 are shown separately in FIG. 11, one or more of them may be combined or located in the same or different memory or storage modules.
The CPU 1101, the ROM 1102, and the RAM 1103 are connected to one another via a bus 1104. An input/output interface 1105 is also connected to the bus 1104.
The following components are connected to the input/output interface 1105: an input section 1106, such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output section 1107, including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage section 1108, including a hard disk, magnetic tape, and the like; and a communication section 1109, including a network interface card such as a LAN card or a modem. The communication section 1109 allows communication processing to be performed via a network such as the Internet. Although FIG. 11 shows the devices or modules in the electronic device 1100 communicating through the bus 1104, they may also communicate through a network or other means, where the network may include a wireless network, a wired network, and/or any combination of wireless and wired networks.
As needed, a drive 1110 is also connected to the input/output interface 1105. A removable medium 1111, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 1110 as needed, so that a computer program read from it is installed into the storage section 1108 as needed.
When the above series of processes is implemented by software, the program constituting the software can be installed from a network such as the Internet, or from a storage medium such as the removable medium 1111.
According to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program can be downloaded and installed from a network via the communication section 1109, or installed from the storage section 1108, or installed from the ROM 1102. When the computer program is executed by the CPU 1101, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that, in the context of the present disclosure, a computer-readable medium may be a tangible medium that can contain or store a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being incorporated into the electronic device.
In some embodiments, a computer program is also provided, including instructions that, when executed by a processor, cause the processor to perform the method of any of the above embodiments. For example, the instructions may be embodied as computer program code.
In the embodiments of the present disclosure, the computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; these programming languages include but are not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules, components, or units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a module, component, or unit does not, under certain circumstances, constitute a limitation on the module, component, or unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
The above description is only of some embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Many specific details are set forth in the description provided here. However, it is understood that embodiments of the present invention can be practiced without these specific details. In other cases, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Furthermore, although operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments can be modified without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (18)

  1. A video processing method, comprising:
    providing a first user with an interactive interface for tagging persons in a video;
    receiving a tagging operation, input by the first user through the interactive interface, on at least one person in the video; and
    in response to the tagging operation of the first user, displaying a tagging result outside a video display interface in the form of an information feed when the video is published on a social network.
  2. The video processing method according to claim 1, further comprising:
    after the video is published, modifying the tagging of persons in the video in response to an editing operation of the first user on the tagging result.
  3. The video processing method according to claim 1 or 2, wherein the first user is a creator of the video, and modifying the tagging of persons in the video in response to the editing operation of the first user on the tagging result comprises at least one of:
    adding a tag for an untagged person in the video;
    removing a tag of a tagged person in the video.
  4. The video processing method according to any one of claims 1 to 3, further comprising:
    after the video is published, modifying the tagging of persons in the video in response to an editing operation of a second user on the tagging result, wherein the second user is different from the first user.
  5. The video processing method according to claim 4, wherein the second user is one of the tagged persons in the video, and modifying the tagging of persons in the video in response to the editing operation of the second user on the tagging result comprises at least one of:
    adding a tag for an untagged person in the video;
    removing the tag of the second user.
  6. The video processing method according to claim 4, wherein the second user is not a tagged person in the video, and modifying the tagging of persons in the video in response to the editing operation of the second user on the tagging result comprises:
    adding a tag for an untagged person in the video.
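The editing permissions distributed across claims 3, 5, and 6 — the creator may add or remove any tag, a tagged user may add tags or remove only their own, and any other user may only add tags — can be sketched as follows. This is an illustrative sketch only, not part of the claims; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class VideoTags:
    """Hypothetical model of the tag-editing rules in claims 3, 5, and 6."""
    creator: str
    tagged: set[str] = field(default_factory=set)

    def can_add_tag(self, editor: str) -> bool:
        # Claims 3, 5, and 6 all permit adding a tag for an
        # untagged person, regardless of who the editor is.
        return True

    def can_remove_tag(self, editor: str, target: str) -> bool:
        # Claim 3: the creator may remove any existing tag.
        if editor == self.creator:
            return target in self.tagged
        # Claim 5: a tagged user may remove only their own tag.
        if editor in self.tagged:
            return editor == target
        # Claim 6: other users may only add tags, not remove them.
        return False
```

For instance, with `VideoTags(creator="alice", tagged={"bob", "carol"})`, `can_remove_tag("bob", "carol")` is `False` while `can_remove_tag("alice", "carol")` is `True`.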
  7. The video processing method according to any one of claims 1 to 6, wherein displaying the tagging result outside the video display interface in the form of an information feed comprises:
    displaying the video in a corresponding information feed according to a viewer's expected interest value for the video.
  8. The video processing method according to claim 7, wherein displaying the tagging result outside the video display interface in the form of an information feed further comprises:
    determining the viewer's expected interest value for the video according to relationships between the viewer and the first user and the tagged persons in the video; and
    determining a label of the video displayed to the viewer according to the relationships between the viewer and the first user and the tagged persons in the video.
  9. The video processing method according to claim 8, wherein:
    in a case where the viewer is one of the tagged persons in the video, the label is determined to be a first label;
    in a case where the viewer is not one of the tagged persons in the video but has an association relationship with the first user and the tagged persons in the video, the label is determined to be a second label;
    in a case where the viewer is not one of the tagged persons in the video and has no association relationship with the first user, but has an association relationship with the tagged persons in the video, the label is determined to be a third label;
    in a case where the viewer is not one of the tagged persons in the video and has no association relationship with either the first user or the tagged persons in the video, the label is determined to be a fourth label.
  10. The video processing method according to claim 8 or 9, wherein displaying the video in a corresponding information feed according to the viewer's expected interest value for the video comprises:
    in a case where the viewer's expected interest value for the video is greater than or equal to a threshold, displaying an avatar of at least one tagged person in the video;
    in a case where the viewer's expected interest value for the video is less than the threshold, displaying a username of at least one tagged person in the video.
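The label-tier selection of claim 9 and the threshold-based display of claim 10 can be sketched as follows. This is an illustrative sketch only, not part of the claims; the representation of association relationships as a set of pairs, the threshold, and all function names are hypothetical.

```python
def determine_label(viewer: str, first_user: str, tagged: set[str],
                    associations: set[tuple[str, str]]) -> int:
    """Map the viewer's relationships to one of the four labels of claim 9.

    `associations` is a hypothetical set of (a, b) pairs meaning that
    user a has an association relationship with user b.
    """
    if viewer in tagged:
        return 1  # first label: the viewer is a tagged person
    related_to_first = (viewer, first_user) in associations
    related_to_tagged = any((viewer, t) in associations for t in tagged)
    if related_to_first and related_to_tagged:
        return 2  # second label: associated with both
    if related_to_tagged:
        return 3  # third label: associated with tagged persons only
    return 4      # fourth label: associated with neither


def feed_element(expected_interest: float, threshold: float,
                 tagged_person: dict) -> str:
    """Claim 10: show an avatar at or above the threshold, else a username."""
    if expected_interest >= threshold:
        return tagged_person["avatar"]
    return tagged_person["username"]
```

Note that the claims leave open how the expected interest value itself is computed from these relationships, and how the value is later adjusted by viewing duration (claim 11); the sketch covers only the selection logic that the claims state explicitly.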
  11. The video processing method according to claim 8, wherein displaying the tagging result outside the video display interface in the form of an information feed further comprises:
    adjusting the viewer's expected interest value for the video, determined according to the relationships, according to the viewer's viewing duration of the video.
  12. The video processing method according to any one of claims 1 to 9, further comprising:
    displaying a tag list of persons in the video in response to a click operation of the viewer on the tagging result in the video.
  13. The video processing method according to claim 1 or 4, wherein providing the interactive interface for tagging persons in the video comprises at least one of:
    providing a search interface;
    providing a recommended tag list;
    displaying a list of tagged persons in the video.
  14. The video processing method according to any one of claims 1 to 9, wherein, in a case where the number of tagged persons is M, avatars of N tagged persons and a difference X between the number of tagged persons and the number of displayed avatars are displayed on a video publishing page, where M is a positive integer greater than 1, N is a positive integer greater than 1, M is greater than N, and X = M - N.
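The counting rule of claim 14 amounts to simple arithmetic: show N avatars and a remainder X = M - N (typically rendered as, e.g., "+3"). A minimal sketch, illustrative only and not part of the claims; the function name and list representation are hypothetical:

```python
def tagged_summary(tagged_users: list[str], n_avatars: int) -> tuple[list[str], int]:
    """Claim 14: with M tagged persons (M > N > 1), show the avatars of
    N tagged persons together with the remainder X = M - N."""
    m = len(tagged_users)
    if not (m > n_avatars > 1):
        raise ValueError("claim 14 assumes M > N > 1")
    return tagged_users[:n_avatars], m - n_avatars
```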
  15. A video processing apparatus, comprising:
    a display configured to provide a first user with an interactive interface for tagging persons in a video; and
    a processor configured to receive a tagging operation, input by the first user through the interactive interface, on at least one person in the video,
    wherein the display is further configured to, in response to the tagging operation of the first user, display a tagging result outside a video display interface in the form of an information feed when the video is published on a social network.
  16. A video processing apparatus, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, one or more steps of the video processing method according to any one of claims 1 to 14.
  17. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the video processing method according to any one of claims 1 to 14.
  18. A computer program, comprising:
    instructions which, when executed by a processor, cause the processor to perform the video processing method according to any one of claims 1 to 14.
PCT/CN2022/120111 2021-09-27 2022-09-21 Video processing method, video processing apparatus, and computer-readable storage medium WO2023045951A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111139005 2021-09-27
CN202111139005.9 2021-09-27
CN202210142810.5 2022-02-16
CN202210142810.5A CN114253653A (zh) 2021-09-27 2022-02-16 Video processing method, video processing apparatus, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2023045951A1 true WO2023045951A1 (zh) 2023-03-30

Family

ID=80796962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120111 WO2023045951A1 (zh) 2021-09-27 2022-09-21 Video processing method, video processing apparatus, and computer-readable storage medium

Country Status (3)

Country Link
US (1) US11899717B2 (zh)
CN (1) CN114253653A (zh)
WO (1) WO2023045951A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253653A (zh) * 2021-09-27 2022-03-29 北京字节跳动网络技术有限公司 视频处理方法、视频处理装置和计算机可读存储介质
CN115510481A (zh) 2022-09-26 2022-12-23 Beijing Youzhuju Network Technology Co., Ltd. Permission management method and apparatus for video, electronic device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851407A (zh) * 2017-01-24 2017-06-13 Vivo Mobile Communication Co., Ltd. Video playback progress control method and terminal
US20170330598A1 (en) * 2016-05-10 2017-11-16 Naver Corporation Method and system for creating and using video tag
CN111629252A (zh) * 2020-06-10 2020-09-04 Beijing ByteDance Network Technology Co., Ltd. Video processing method and apparatus, electronic device, and computer-readable storage medium
CN114253653A (zh) * 2021-09-27 2022-03-29 Beijing ByteDance Network Technology Co., Ltd. Video processing method, video processing apparatus, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9143573B2 (en) 2008-03-20 2015-09-22 Facebook, Inc. Tag suggestions for images on online social networks
US20170310629A1 (en) * 2012-10-30 2017-10-26 Google Inc. Providing Reverse Preference Designations In a Network
US20180300046A1 (en) * 2017-04-12 2018-10-18 International Business Machines Corporation Image section navigation from multiple images
US10798446B2 (en) * 2018-01-04 2020-10-06 International Business Machines Corporation Content narrowing of a live feed based on cognitive profiling


Also Published As

Publication number Publication date
US11899717B2 (en) 2024-02-13
CN114253653A (zh) 2022-03-29
US20230099444A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
WO2023045951A1 (zh) Video processing method, video processing apparatus, and computer-readable storage medium
WO2021164545A1 (zh) Online document display method and apparatus, device, and medium
US8356077B2 (en) Linking users into live social networking interactions based on the users' actions relative to similar content
EP4333439A1 (en) Video sharing method and apparatus, device, and medium
US20220326823A1 (en) Method and apparatus for operating user interface, electronic device, and storage medium
CN111970571B (zh) Video production method and apparatus, device, and storage medium
WO2023036294A1 (zh) Video publishing method and apparatus, electronic device, and storage medium
WO2023116479A1 (zh) Video publishing method and apparatus, electronic device, storage medium, and program product
US11470371B2 (en) Methods, systems, and media for indicating viewership of a video
WO2021218555A1 (zh) Information display method and apparatus, and electronic device
WO2019237958A1 (zh) Resume information management method, and recruitment information management method and apparatus
WO2023098531A1 (zh) Video processing method, video processing apparatus, and computer-readable storage medium
CN109121000A (zh) Video processing method and client
WO2024002047A1 (zh) Session message display method and apparatus, device, and storage medium
WO2022242439A1 (zh) Information processing method and apparatus, terminal, and storage medium
WO2024037491A1 (zh) Media content processing method and apparatus, device, and storage medium
CN113986003A (zh) Multimedia information playback method and apparatus, electronic device, and computer storage medium
CN111159584A (zh) Method, device, and computer-readable medium for displaying weather information
WO2023088092A1 (zh) Video recommendation processing method and apparatus, and electronic device
CN114924670A (zh) Chat channel display method and apparatus, device, readable storage medium, and product
US11711334B2 (en) Information replying method, apparatus, electronic device, computer storage medium and product
US11902340B2 (en) Data processing method, apparatus, electronic device and storage medium
US20240160341A1 (en) Video processing methods, device, storage medium, and program product
WO2023124907A1 (zh) Instant message processing method, apparatus, and device
CN116954439A (zh) Information processing method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22871997; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022871997; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022871997; Country of ref document: EP; Effective date: 20240328)
NENP Non-entry into the national phase (Ref country code: DE)