WO2022247220A1 - Interface processing method and apparatus - Google Patents

Interface processing method and apparatus (界面处理方法及装置)

Info

Publication number
WO2022247220A1
WO2022247220A1 (PCT application PCT/CN2021/136577)
Authority
WO
WIPO (PCT)
Prior art keywords
multimedia
user account
tag
multimedia content
label
Prior art date
Application number
PCT/CN2021/136577
Other languages
English (en)
French (fr)
Other versions
WO2022247220A9 (zh)
Inventor
刘付家
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022247220A1
Publication of WO2022247220A9

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/44: Browsing; Visualisation therefor
    • G06F 16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present disclosure relates to the field of computers, and in particular to an interface processing method, device and electronic equipment.
  • the disclosure provides an interface processing method, device and electronic equipment.
  • the disclosed technical scheme is as follows:
  • an interface processing method, including: displaying a current display interface associated with a user account, the current display interface including a video area for playing a video and a comment area for displaying comments on the played video; acquiring associated information of the user account; determining, according to the associated information, a multimedia tag associated with the user account, where the multimedia tag is used to identify multimedia content; and displaying, on the current display interface, a touch object corresponding to the multimedia tag, where the touch object is used to receive a trigger operation to trigger display of the multimedia content identified by the multimedia tag.
  • the displaying the touch object corresponding to the multimedia tag on the current display interface includes: displaying the video area and the comment area on a first layer of the current display interface; and displaying the touch object corresponding to the multimedia tag on a second layer of the current display interface, where the second layer is located above the first layer.
  • the second layer includes a control function sublayer and a control image sublayer, where the control function sublayer is located above the control image sublayer; the control function sublayer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display elements of the multimedia content.
  • after the touch object corresponding to the multimedia tag is displayed on the current display interface, the method further includes: receiving a trigger operation on the touch object; and, in response to the trigger operation, displaying the multimedia content identified by the multimedia tag.
  • the displaying the multimedia content identified by the multimedia tag in response to the trigger operation includes: in a case that the touch object is a preview widget, in response to a first trigger operation on the preview widget, playing the multimedia content identified by the multimedia tag in the preview widget.
  • the method further includes: in response to a second trigger operation on the preview widget, jumping to a multimedia display interface based on a link address corresponding to the second trigger operation, and playing the multimedia content on that interface; or, in response to a third trigger operation on the preview widget, switching to a multimedia browsing interface based on an interface switching instruction corresponding to the third trigger operation, where the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the determining the multimedia tag associated with the user account according to the associated information includes: if the associated information includes behavior data information, determining an interest tag of the user account according to the behavior data information, where the interest tag is one or more of a plurality of classification tags; and determining the multimedia tag associated with the user account by searching for a multimedia tag matching the interest tag.
  • the determining the interest tag of the user account according to the behavior data information includes: respectively acquiring a statistical quantity of each classification tag in the plurality of classification tags, where the statistical quantity is obtained by counting, based on the behavior data information, the various operation behaviors of the user account under each classification tag, and each behavior has a corresponding weight; sorting the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and acquiring the interest tag of the user account according to the sorting result.
  • the determining the multimedia tag associated with the user account according to the associated information includes: if the associated information includes attribute information, inputting the attribute information into a point-of-interest (POI) recognition model and outputting the multimedia tag associated with the user account, where the POI recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include attribute information of user accounts and the multimedia tags associated with those user accounts.
  • an interface processing device, including: a first display module, configured to display a current display interface associated with a user account, the current display interface including a video area for playing a video and a comment area for displaying comments on the played video; a first acquiring module, configured to acquire associated information of the user account; a first determining module, configured to determine, according to the associated information, a multimedia tag associated with the user account, where the multimedia tag is used to identify multimedia content; and a second display module, configured to display, on the current display interface, a touch object corresponding to the multimedia tag, where the touch object is used to receive a trigger operation to trigger display of the multimedia content identified by the multimedia tag.
  • the second display module includes: a first display unit, configured to display the video area and the comment area on the first layer of the current display interface; a second display unit, configured to A touch object corresponding to the multimedia label is displayed on a second layer of the current display interface, wherein the second layer is located above the first layer.
  • the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located above the control image sublayer, and the control function sublayer is used to respond to For the trigger operation on the touch object, the control image sublayer is used to display elements of the multimedia content.
  • the device further includes: a first receiving module, configured to receive a trigger operation on the touch object after the touch object corresponding to the multimedia label is displayed on the current display interface; A display module, configured to display the multimedia content identified by the multimedia tag in response to the trigger operation.
  • the third display module includes: a third display unit, configured to, when the touch object is a preview widget, play the multimedia content identified by the multimedia tag in the preview widget in response to a first trigger operation on the preview widget.
  • the third display module further includes: a fourth display unit, configured to, in response to a second trigger operation on the preview widget, jump to a multimedia display interface based on a link address corresponding to the second trigger operation and play the multimedia content on the multimedia display interface; or, a fifth display unit, configured to, in response to a third trigger operation on the preview widget, switch to a multimedia browsing interface based on an interface switching instruction corresponding to the third trigger operation, where the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the first determining module includes: a first determining unit, configured to determine an interest tag of the user account according to behavior data information when the associated information includes the behavior data information, where the interest tag is one or more of a plurality of classification tags; and a second determining unit, configured to determine the multimedia tag associated with the user account by searching for a multimedia tag matching the interest tag.
  • the first determining unit includes: a first acquiring subunit, configured to respectively acquire a statistical quantity of each classification tag in the plurality of classification tags, where the statistical quantity is obtained by counting, based on the behavior data information, the various operation behaviors of the user account under each classification tag, and each behavior has a corresponding weight; a processing subunit, configured to sort the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and a second acquiring subunit, configured to acquire the interest tag of the user account according to the sorting result.
  • the first determining module includes: a processing unit, configured to, if the associated information includes attribute information, input the attribute information into a point-of-interest recognition model and output the multimedia tag associated with the user account, where the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include attribute information of user accounts and the multimedia tags associated with those user accounts.
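The point-of-interest recognition model above is described only as obtained through machine training on (attribute information, multimedia tag) pairs; the patent does not specify a model architecture. A minimal stand-in, assuming a nearest-neighbor majority vote over memorized training pairs, might look like this (all names and sample data are hypothetical):

```python
from collections import Counter

def train_poi_model(samples):
    # "Multiple sets of data": (attribute_vector, multimedia_tag) pairs.
    # This stand-in simply memorizes the training samples.
    return list(samples)

def predict_tag(model, attributes, k=3):
    # Rank training samples by a Hamming-style distance (number of
    # differing attribute values), then take a majority vote of the
    # multimedia tags among the k closest samples.
    ranked = sorted(model, key=lambda s: sum(a != b for a, b in zip(s[0], attributes)))
    votes = Counter(tag for _, tag in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical attribute information (age bracket, region) and tags:
model = train_poi_model([
    (("18-24", "urban"), "music"),
    (("18-24", "rural"), "music"),
    (("45-60", "urban"), "news"),
])
print(predict_tag(model, ("18-24", "urban")))  # "music"
```

A real implementation would replace the memorized-sample lookup with whatever trained model the system actually uses; the input/output contract (attributes in, multimedia tag out) is the part fixed by the text.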
  • an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the interface processing method described in any one of the above embodiments.
  • a computer-readable storage medium; when the instructions in the computer-readable storage medium are executed by the processor of an electronic device, the electronic device is enabled to execute the interface processing method described in any one of the above embodiments.
  • a computer program product including a computer program; when the computer program is executed by a processor, the interface processing method of any one of the above embodiments is implemented.
  • after the multimedia tag associated with the user account is determined according to the associated information of the user account, the touch object corresponding to the multimedia tag is displayed on the current display interface of the user account, so that the multimedia content identified by the multimedia tag can be displayed based on a trigger operation on the touch object. Since the multimedia tag is associated with the user account, the multimedia content identified by the multimedia tag is targeted to the user account, effectively improving the accuracy of displaying multimedia content to the corresponding user account.
  • the user corresponding to the user account can obtain the multimedia content identified by the multimedia tag via the touch object corresponding to the multimedia tag, which solves the technical problem in the related art of a single acquisition channel for multimedia content.
  • the user corresponding to the user account only needs to trigger the touch object displayed on the current display interface to watch the multimedia content identified by the multimedia tag; the operation is simple, and the user's viewing experience is effectively improved.
  • Fig. 1 is a block diagram showing a hardware structure of a computer terminal for implementing an interface processing method according to an exemplary embodiment.
  • Fig. 2 is a flowchart of a first interface processing method according to an exemplary embodiment.
  • Fig. 3 is a flowchart of a second interface processing method according to an exemplary embodiment.
  • Fig. 4 is a flowchart of a third interface processing method according to an exemplary embodiment.
  • Fig. 5 is a flowchart of a fourth interface processing method according to an exemplary embodiment.
  • Fig. 6 is a flowchart of a fifth interface processing method according to an exemplary embodiment.
  • Fig. 7 is a flowchart of a sixth interface processing method according to an exemplary embodiment.
  • Fig. 8 is a flowchart of a seventh interface processing method according to an exemplary embodiment.
  • Fig. 9 is a schematic diagram of a live preview display provided according to an exemplary optional implementation manner.
  • Fig. 10 is a device block diagram of an interface processing device according to an exemplary embodiment.
  • Fig. 11 is a device block diagram of a terminal according to an exemplary embodiment.
  • Fig. 12 is a structural block diagram of a server according to an exemplary embodiment.
  • a method embodiment of an interface processing method is proposed. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
  • Fig. 1 is a block diagram showing a hardware structure of a computer terminal (or mobile device) for implementing an interface processing method according to an exemplary embodiment.
  • a computer terminal 10 may include one or more processors 12 (shown as 12a, 12b, ..., 12n in the figure; the processors 12 may include, but are not limited to, a microcontroller unit (MCU), a programmable logic device such as an FPGA, or another processing device), a memory 14 for storing data, and a transmission device for communication functions.
  • FIG. 1 is only a schematic diagram, and it does not limit the structure of the above-mentioned electronic device.
  • computer terminal 10 may also include more or fewer components than shown in FIG. 1 , or have a different configuration than that shown in FIG. 1 .
  • the one or more processors 12 and/or other data processing circuits described above may generally be referred to herein as "data processing circuits".
  • the data processing circuit may be implemented in whole or in part as software, hardware, firmware or other arbitrary combinations.
  • the data processing circuit can be a single independent processing module, or be fully or partially integrated into any of the other elements in the computer terminal 10 (or mobile device).
  • the data processing circuit may serve as a control for the processor (for example, the selection of a variable-resistor terminal path connected to the interface).
  • the memory 14 can be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the interface processing method in the embodiments of the present disclosure; the processor 12 executes various functional applications and data processing by running the software programs and modules stored in the memory 14, thereby implementing the interface processing method of the above application program.
  • the memory 14 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 14 may further include memory located remotely relative to the processor 12, and these remote memories may be connected to the computer terminal 10 through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device is used to receive or transmit data via a network.
  • the specific example of the above-mentioned network may include a wireless network provided by the communication provider of the computer terminal 10 .
  • the transmission device includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet in a wireless manner.
  • the display may be, for example, a touchscreen liquid crystal display (LCD), which may enable a user to interact with the user interface of the computer terminal 10 (or mobile device).
  • the computer device (or mobile device) shown in FIG. 1 may include hardware components (including circuits), software components (including computer code), or a combination of both hardware and software elements. It should be noted that FIG. 1 is only one example of a particular embodiment, and is intended to illustrate the types of components that may be present in a computer device (or mobile device) as described above.
  • Fig. 2 is a flowchart of a first interface processing method according to an exemplary embodiment. As shown in Fig. 2, the method is used in the above-mentioned computer terminal and includes the following steps.
  • step S21: display the current display interface associated with the user account, the current display interface including a video area for playing the video and a comment area for displaying comments on the played video;
  • step S22: acquire the associated information of the user account;
  • step S23: determine the multimedia tag associated with the user account according to the associated information, where the multimedia tag is used to identify multimedia content;
  • step S24: display a touch object corresponding to the multimedia tag on the current display interface, where the touch object is used to receive a trigger operation to trigger display of the multimedia content identified by the multimedia tag.
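The four steps S21-S24 can be sketched as a simple pipeline. The function names, return shapes, and the placeholder tag below are illustrative assumptions, not from the patent:

```python
def display_current_interface(user_account):
    # S21: interface with a video area and a comment area (assumed shape)
    return {"user": user_account, "video_area": True, "comment_area": True}

def get_associated_info(user_account):
    # S22: e.g. behavior data information or attribute information (assumed example)
    return {"behavior_data": ["click", "like"]}

def determine_multimedia_tag(info):
    # S23: derive a multimedia tag from the associated information
    return "assumed_tag"

def process_interface(user_account):
    ui = display_current_interface(user_account)   # S21
    info = get_associated_info(user_account)       # S22
    tag = determine_multimedia_tag(info)           # S23
    ui["touch_object"] = {"tag": tag}              # S24: show the touch object
    return ui

ui = process_interface("user_1")
```

The touch object carries the tag so that a later trigger operation on it can resolve which multimedia content to display.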
  • in the above method, the multimedia content identified by the multimedia tag can be displayed based on a trigger operation on the touch object. Since the multimedia tag is associated with the user account, the multimedia content identified by the multimedia tag is targeted to the user account, effectively improving the accuracy of displaying multimedia content to the corresponding user account. Moreover, the user corresponding to the user account can obtain the multimedia content identified by the multimedia tag via the touch object corresponding to the multimedia tag, which solves the technical problem in the related art of a single acquisition channel for multimedia content. In addition, the user corresponding to the user account only needs to trigger the touch object displayed on the current display interface to watch the multimedia content identified by the multimedia tag; the operation is simple, and the user's viewing experience is effectively improved.
  • Fig. 3 is a flowchart of a second interface processing method according to an exemplary embodiment. As shown in Fig. 3, in addition to the steps shown in Fig. 2, step S24 further includes the following steps.
  • Step S31: display the video area and the comment area on the first layer of the current display interface;
  • Step S32: display the touch object corresponding to the multimedia tag on the second layer of the current display interface, where the second layer is located above the first layer.
  • when the touch object corresponding to the multimedia tag is displayed on the current display interface, various methods can be adopted; for example, the video area and the comment area can be displayed on the first layer of the current display interface, and the touch object corresponding to the multimedia tag can be displayed on the second layer of the current display interface, where the second layer is located above the first layer. That is, when the first layer of the current display interface is displayed, a second layer is superimposed on the first layer, and the touch object is displayed on the second layer. Different layers are used to display different content, which is easy to operate and easy to implement. In addition, since the second layer is located above the first layer, the user account can easily notice the touch object.
  • the second layer may include a control function sublayer and a control image sublayer, where the control function sublayer is located above the control image sublayer; the control function sublayer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display the elements of the multimedia content. Therefore, in order to achieve different control functions and display functions, the second layer can be divided into different sublayers.
  • the control function sublayer can be located above the control image sublayer, which facilitates overlaying the control image sublayer; moreover, the control function sublayer can be a transparent layer, so that an operation on the control function sublayer is, in effect, an operation on the touch object.
  • the elements of the multimedia content displayed on the control image sublayer may include various elements; for example, they may be the title of the multimedia content, the type of the multimedia content, or push information of the multimedia content.
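The layer arrangement described above (a first layer with the video and comment areas, and a second layer above it split into a control image sublayer and, above that, a transparent control function sublayer) can be sketched as follows. The class and field names are assumptions for illustration:

```python
class Layer:
    def __init__(self, name, z, children=None):
        # z: stacking order within the parent; larger z is drawn on top
        self.name, self.z, self.children = name, z, children or []

def build_interface():
    # First layer: video area and comment area of the current display interface
    first = Layer("first_layer", z=0,
                  children=[Layer("video_area", z=0), Layer("comment_area", z=0)])
    # Second layer above it: the image sublayer shows elements of the
    # multimedia content (title, type, push information); the transparent
    # function sublayer on top receives trigger operations on the touch object.
    second = Layer("second_layer", z=1, children=[
        Layer("control_image_sublayer", z=0),
        Layer("control_function_sublayer", z=1),
    ])
    return [first, second]

layers = build_interface()
print(max(layers, key=lambda l: l.z).name)  # second_layer
```

Whatever UI toolkit is actually used, the invariant from the text is only the stacking order: function sublayer over image sublayer, second layer over first layer.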
  • Fig. 4 is a flowchart of a third interface processing method according to an exemplary embodiment. As shown in Fig. 4, the method includes the following steps in addition to the steps shown in Fig. 2.
  • Step S41: receive a trigger operation on the touch object;
  • Step S42: display the multimedia content identified by the multimedia tag in response to the trigger operation.
  • the method further includes: receiving a trigger operation on the touch object; and displaying multimedia content identified by the multimedia label in response to the trigger operation.
  • the multimedia content identified by the multimedia tag is displayed in different ways in response to different trigger operations.
  • the trigger operation may include various types, for example, it may be a single-click operation on the touch object, a double-click operation on the touch object, a long-press operation on the touch object, and so on.
  • Fig. 5 is a flowchart of a fourth interface processing method according to an exemplary embodiment. As shown in Fig. 5, in addition to the steps shown in Fig. 4, when the touch object is a preview widget, step S42 further includes the following steps.
  • Step S51: in response to the first trigger operation on the preview widget, play the multimedia content identified by the multimedia tag in the preview widget;
  • Step S52: in response to the second trigger operation on the preview widget, jump to the multimedia display interface based on the link address corresponding to the second trigger operation, and play the multimedia content on the multimedia display interface;
  • Step S53: in response to the third trigger operation on the preview widget, switch to the multimedia browsing interface based on the interface switching instruction corresponding to the third trigger operation, where the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • that is, in response to the first trigger operation, the preview widget plays the multimedia content identified by the multimedia tag.
  • the preview widget may be a window whose display size is smaller than a predetermined ratio, and it is used to preview the multimedia content. It should be noted that the preview widget can be fixed at various positions on the current display interface; for example, it can be located in the video area of the first layer, or displayed in the comment area of the first layer.
  • the preview widget can also move among multiple positions on the current display interface, so as to attract the user's attention. Because the multimedia content identified by the multimedia tag is played in the preview widget only in response to the first trigger operation on the preview widget, the multimedia content is not played while the preview widget is not operated, which effectively avoids disturbing the video playing in the video area of the current display interface.
  • the multimedia content identified by the multimedia tag is played in the preview widget; that is, the multimedia content is played in the preview widget only when a click operation on the preview widget is received.
  • corresponding control logic can also be set on the preview widget as the touch object; for example, when a single-click operation is received, the multimedia content is played in the preview widget, and when a click operation is received again, the playback of the multimedia content in the preview widget is paused.
  • when the touch object is a preview widget, in response to a second trigger operation on the preview widget, the interface jumps to the multimedia display interface based on the link address corresponding to the second trigger operation, and the multimedia content is played on the multimedia display interface.
  • for example, when the second trigger operation is a double-click operation, in response to the double-click operation on the preview widget, the interface jumps to the multimedia display interface based on the link address corresponding to the double-click operation, and the multimedia content is played there.
  • on the multimedia display interface, information related to the multimedia content can also be displayed; for example, operations on the multimedia content can be displayed (such as liking, forwarding, or commenting on the multimedia content), and comments on the multimedia content may also be displayed (for example, a brief description of the multimedia content is displayed, and when the brief description is operated, the comment information on the multimedia content is expanded).
  • corresponding control logic can also be set; for example, when the multimedia display interface receives a double-click operation, it returns to the current display interface including the preview widget.
  • it should be noted that the double-click operation here is only an example; other operations that can return to the current display interface are also applicable to this application.
  • when the touch object is a preview widget, it may also respond to a third trigger operation on the preview widget and switch to the multimedia browsing interface based on the interface switching instruction corresponding to the third trigger operation, where the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the multimedia browsing interface displays a multimedia list; the multimedia list includes the multimedia content and a plurality of other multimedia contents related to it, for example, other multimedia contents serialized with the multimedia content. Displaying the multimedia content and the related multimedia contents in the form of a multimedia list makes it convenient for the user to select the multimedia content of interest and improves the user's extended viewing experience.
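The three trigger operations on the preview widget can be sketched as a small dispatcher. The mapping of the first, second, and third trigger operations to a single click, a double click, and another gesture follows the examples in the text, but is not fixed by the patent:

```python
def handle_preview_trigger(operation, widget="preview widget"):
    # First trigger (e.g. a single click): play in the preview widget itself.
    if operation == "first":
        return f"play multimedia content in {widget}"
    # Second trigger (e.g. a double click): jump via the link address.
    if operation == "second":
        return "jump to multimedia display interface via link address"
    # Third trigger: interface switching instruction to the browsing interface.
    if operation == "third":
        return "switch to multimedia browsing interface showing a multimedia list"
    # Any other input leaves the current display interface unchanged.
    return "no action"

print(handle_preview_trigger("first"))
```

The return strings stand in for the three behaviors of steps S51-S53; a real implementation would invoke the player, navigation, and interface-switching logic instead.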
  • the aforementioned associated information of the user account may include various types: it may be relatively dynamic information of the user account, such as behavior data information, or relatively static information, such as attribute information. It should be noted that whether the associated information is behavior data information or attribute information, it is data authorized by the user account; for example, it may be data acquired after the user account accepts an authorization agreement. Different associated information may be acquired in different ways.
  • the behavior data information corresponding to the user account can be obtained through the behavior record data authorized by the user account, and the behavior record data can be the data of a historical time period, or the behavior record data in the current scene.
  • the attribute information corresponding to the user account may be historical registration information authorized by the user account, or update information authorized by the user account, and the like.
  • the manner of determining the multimedia tag associated with the user account also differs according to the associated information. Descriptions are given below.
  • Fig. 6 is a flow chart of the fifth interface processing method according to an exemplary embodiment. As shown in Fig. 6 , in addition to the steps shown in Fig. 2 , the step S23 further includes the following steps.
  • Step S61: if the associated information includes behavior data information, determine the interest tag of the user account according to the behavior data information, where the interest tag is one or more of a plurality of classification tags;
  • Step S62: determine the multimedia tag associated with the user account by searching for the multimedia tag matching the interest tag.
  • the interest tag of the user account is obtained according to the behavior data information of the user account. Because the behavior data of the user account truly reflects the preferences of the user account, it can largely reflect the content that the user account is interested in, so that multimedia content of interest can be pushed to each user account in a targeted manner.
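The search of step S62 can be illustrated as a simple set intersection between the interest tags of the user account and the multimedia tags of candidate contents. This is a minimal sketch, not the disclosure's actual implementation; the content identifiers and tag names are hypothetical.

```python
# Illustrative sketch of step S62: find the multimedia contents whose
# multimedia tags match the interest tags of the user account.
# Content identifiers and tag names are hypothetical examples.

def match_multimedia(interest_tags, candidates):
    """Return (content_id, matched_tags) for candidates overlapping the interest tags."""
    interest = set(interest_tags)
    matched = []
    for content_id, multimedia_tags in candidates.items():
        overlap = interest & set(multimedia_tags)
        if overlap:
            matched.append((content_id, sorted(overlap)))
    return matched

candidates = {
    "live_001": ["music", "light music", "saxophone"],
    "live_002": ["game", "fighting"],
    "live_003": ["food", "snack products"],
}
print(match_multimedia(["music", "food"], candidates))
# → [('live_001', ['music']), ('live_003', ['food'])]
```

A production system would of course index tags rather than scan candidates linearly; the intersection shown here is only the matching rule itself.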
  • the behavior data of the user account can be obtained from the historical record data of the user account within a certain period of time.
  • the behavior data of user accounts can be divided into many types. Taking the user account as a user as an example, the behavior data may include the following operations of the user: click operations, like operations, follow operations, comment operations, favorite operations, share operations, and so on.
  • the behavior data of the user accounts within a certain period of time may be counted according to actual application requirements, or each item of behavior data may be assigned a different weight for statistics.
  • user behavior data can be generated based on multimedia content or generated directly during a live broadcast. For example, the user may click, like, follow, comment on, bookmark, or share multimedia content, or perform the same operations on a live broadcast, so as to obtain the user's interest tags and determine the content that the user is interested in.
  • behavior data of user accounts can also be classified according to whether it is social in nature, and divided into social behavior data and non-social behavior data.
  • the social behavior data may include the users the user chooses to co-host with in the live broadcast, the users followed by the co-hosting users in the live broadcast, the users who follow each other among the co-hosting users in the live broadcast, the follow list of the users in the live broadcast, and so on.
  • the non-social behavior data may be other data. By counting a large amount of user behavior data, the type of multimedia that the user is interested in can be truly reflected to a great extent, ensuring the authenticity of the obtained interest tags and further ensuring that the live broadcast pushes the user sees match the user's personal preferences. It should be noted that the above-mentioned user account corresponds to a user who watches multimedia content.
  • the above classification of behavior data is only an example, not exhaustive, and any behavior data generated based on user accounts can be considered part of the embodiments of the present application.
  • the interest tags referred to above are one or more of multiple classification tags; since the classification tags differ in classification level, an interest tag may also be a tag that includes multiple levels.
  • for example, an interest tag may include a category label, a subdivision label corresponding to the category label, or a further subdivided sub-label under the subdivision label.
  • Fig. 7 is a flow chart of a sixth interface processing method according to an exemplary embodiment. As shown in Fig. 7, in addition to the steps shown in Fig. 6, step S61 further includes the following steps.
  • Step S71: the statistical quantity of each classification label among the plurality of classification labels is obtained respectively, wherein the statistical quantity is obtained by counting each operation behavior of the user account under each classification label according to the behavior data information, and each behavior has a corresponding weight;
  • Step S72: the plurality of classification labels are sorted according to the statistical quantity of each classification label to obtain a sorting result;
  • Step S73: the interest tags of the user account are obtained according to the sorting result.
  • the interest tags of the user account are obtained according to the sorting result, so that when the counted behavior data of the user account are numerous and involve a large number of different classification labels, the classification labels ranked at the top of the sorting result can be used as the interest tags of the user account, and more accurate interest-tag results can be obtained.
  • the number of occurrences of each behavior under different classification labels is counted. For example, it is possible to count the number of user behaviors under a time tag, geographic location tag, author tag, guest tag, content classification tag, title tag, content introduction tag, and poster image tag. When counting the behavior data of user accounts, a certain weight can be set for each item of behavior data, and the numbers of different behaviors are counted according to the weights to obtain the quantity under each classification label. Because the behavior data are sufficient, multi-directional information can be aggregated and counted, so that the classification labels obtained from the statistics are more accurate.
  • the accuracy rate of multimedia content push can be greatly improved, and user stickiness can be increased.
  • when setting weights for different behavior data for quantitative statistics, because different behavior data express different degrees of user preference, the weight value of each item of behavior data needs to be set correspondingly according to actual requirements.
  • for example, the following method can be used: under the live broadcasts of a certain classification label watched by the user, the number of clicks is x, the number of shares is y, the number of likes is z, the number of comments is k, the number of favorites is r, and the number of follow operations is p. Each time the user performs a corresponding operation on a live broadcast under the classification label, the count of that behavior is increased by one, and the statistical quantity corresponding to the behavior data can be set as x*a1+y*a2+z*a3+k*a4+r*a5+p*a6, where a1, a2, a3, a4, a5, and a6 are the corresponding weight factors; the behavior data under each label are counted in this way.
  • the multiple classification tags are sorted according to the statistical quantity of each classification tag to obtain a sorting result.
  • regarding the behavior data generated for live broadcasts: because live broadcast content is varied, a user may generate some behavior data for many different types of live broadcasts while browsing videos, but the user is not interested in all of the live broadcasts viewed. Therefore, it is necessary to analyze the large amount of behavior data generated by the user watching live broadcasts and performing other operations in a past period of time, count the amount of the user's behavior data under each classification label according to that behavior data, and sort the classification labels obtained from the user's behavior data.
  • the more behavior data are counted under a classification tag, the more behavior data the user has generated for the live broadcasts under that classification tag, from which the type of the user's interest tags can be judged.
  • the interest tags of the user account are obtained in this way to ensure that the type of multimedia content pushed to the user meets the user's preference and is content that the user is interested in.
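Steps S71 to S73, including the weighted statistical quantity x*a1+y*a2+z*a3+k*a4+r*a5+p*a6 described above, can be sketched as follows. The weight values and the behavior records are hypothetical examples, not values fixed by the disclosure.

```python
# Hedged sketch of steps S71-S73: count each operation behavior of the user
# account under each classification label with a per-behavior weight (the
# a1..a6 factors in the text), sort the labels by statistical quantity, and
# take the top-ranked labels as interest tags.

BEHAVIOR_WEIGHTS = {  # hypothetical a1..a6 weight factors
    "click": 1, "like": 2, "comment": 3, "favorite": 4, "share": 5, "follow": 6,
}

def interest_tags(behavior_records, top_n=2):
    # behavior_records: iterable of (classification_label, behavior) events
    scores = {}                                                          # step S71
    for label, behavior in behavior_records:
        scores[label] = scores.get(label, 0) + BEHAVIOR_WEIGHTS.get(behavior, 0)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)  # step S72
    return [label for label, _ in ranked[:top_n]]                        # step S73

records = [("music", "click"), ("music", "like"), ("music", "share"),
           ("game", "click"), ("food", "follow"), ("food", "favorite")]
print(interest_tags(records))  # → ['food', 'music']
```

With these hypothetical weights, "food" scores 10 (follow 6 + favorite 4) and "music" scores 8 (click 1 + like 2 + share 5), so those two classification labels become the interest tags.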
  • Fig. 8 is a flow chart of a seventh interface processing method according to an exemplary embodiment. As shown in Fig. 8, in addition to all the steps shown in Fig. 2, the method further includes, in step S23:
  • Step S81: if the associated information includes attribute information, the attribute information is input into the point-of-interest recognition model, and the multimedia tag associated with the user account is output, wherein the point-of-interest recognition model is obtained by machine training with multiple sets of data, and the multiple sets of data include: attribute information of user accounts and the multimedia tags associated with the user accounts.
  • the attribute information of the user account can reflect the interests, habits, and hobbies of the user account to a certain extent. Therefore, the attribute information of the user account is input into the point-of-interest recognition model to identify the multimedia tags associated with the user account; through artificial intelligence based on a large amount of training data, the multimedia tags associated with the user account can be identified quickly and accurately.
  • the attribute information of the user account may include a variety of information, for example, the user's age, gender, and geographic location; information on the multimedia content liked, favorited, and commented on by the user account; information about followed user accounts; information about products purchased during multimedia content playback; and so on.
  • the attribute information of the user account is input into the point-of-interest recognition model, and the multimedia label of the user account is output, wherein the point-of-interest recognition model is obtained by machine training with multiple sets of data, and the multiple sets of data include: attribute information of the user account and the multimedia tag associated with the user account.
  • Machine training is performed based on multiple sets of training samples to obtain a point of interest recognition model.
  • the attribute information of the user account and the multimedia tags associated with the user account are used as training samples for machine training to obtain the POI recognition model. Since the point of interest recognition model can include various information included in the attribute information of the user account during training, it can correspond to more abundant points of interest in the user account, thus making the training more comprehensive and accurate. Therefore, when the POI recognition model obtained through training is used to identify the attribute information of the user account, the misidentification rate of POI recognition is reduced, and the efficiency and accuracy of POI recognition are improved.
  • the point-of-interest recognition model may be based on various algorithms, for example, a machine learning algorithm such as a neural network model algorithm. It should be noted that the point-of-interest recognition models based on the recognition networks mentioned above are just examples, and point-of-interest recognition models based on other recognition networks not enumerated one by one can also be applied to this application. Through training, each of the above point-of-interest recognition models can recognize the attribute information of the user account and obtain the multimedia tags associated with the user account. Point-of-interest recognition models based on different recognition networks can be selected according to different needs, providing a variety of methods to choose from, which is more flexible and convenient and greatly improves the applicability of recognizing the attribute information of user accounts.
  • an optimized point-of-interest recognition model is obtained by performing optimization training on the point-of-interest recognition model. Because the attribute information corresponding to the user account may be continuously updated, the point-of-interest recognition model is optimized and trained in a timely manner based on the updated attribute information. Continuously optimizing and training the point-of-interest recognition model enables it to recognize the multimedia tags associated with the user account more accurately, so that the model better understands the user's needs and greatly improves user experience.
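The disclosure does not fix a particular algorithm for the point-of-interest recognition model, only that it is machine-trained on pairs of attribute information and associated multimedia tags. The following is a deliberately minimal stand-in that learns feature-to-tag co-occurrence counts; a real deployment would use a neural network or another machine-learning algorithm. All feature names and tags are hypothetical.

```python
# Minimal illustrative stand-in for the point-of-interest recognition model:
# learn co-occurrence counts between attribute features and multimedia tags
# from (attribute_features, associated_tags) training pairs, then predict the
# highest-scoring tags for a new user's attribute features.

from collections import defaultdict

class PoiRecognitionModel:
    def __init__(self):
        # counts[feature][tag] = number of training pairs linking them
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, samples):
        # samples: list of (attribute_features, associated_tags)
        for features, tags in samples:
            for f in features:
                for t in tags:
                    self.counts[f][t] += 1

    def predict(self, features, top_n=1):
        scores = defaultdict(int)
        for f in features:
            for tag, n in self.counts[f].items():
                scores[tag] += n
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return [tag for tag, _ in ranked[:top_n]]

model = PoiRecognitionModel()
model.train([
    (["age:18-24", "gender:f", "city:beijing"], ["music"]),
    (["age:18-24", "gender:m"], ["game"]),
    (["age:18-24", "city:beijing"], ["music"]),
])
print(model.predict(["age:18-24", "city:beijing"]))  # → ['music']
```

The optimization training mentioned above maps naturally onto this sketch: as updated attribute information arrives, `train` can simply be called again with the new samples to refresh the counts.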
  • multimedia content may be in various forms, for example, it may be a live video, it may be a live preview, and it may also be a historical live review video, etc.
  • the various forms listed above are just examples; other multimedia content that is pushed or preset is also part of this application.
  • the multimedia content may be multimedia content in various scenarios, for example, it may be multimedia content browsed by using an application program, or viewed on a web page, and so on.
  • Multimedia content can be obtained in many different ways, for example, it can be obtained through an application program or a web page push, it can be obtained by scanning a QR code to obtain specified multimedia content, or it can be obtained by clicking a sharing link, etc.
  • there are also various forms of multimedia content, including: video, picture, text, voice, and so on.
  • most multimedia content uses a combination of the above-mentioned forms.
  • for example, text content is interspersed in the video; that is, the theme of the multimedia content or important reminders can be presented over the video in colorful dynamic fonts, and so on.
  • the specific scene and manner can be selected according to the specific content of the multimedia content.
  • the multimedia tag can represent all related content involved in the multimedia content, and the related content can take various forms; for example, it can be static, such as some fixed attribute information of the live broadcast, or dynamic, for example, an object in the multimedia content that can change.
  • when obtaining the multimedia tag, various methods can be adopted; for example, the multimedia tag can be obtained according to the attribute information of the multimedia content, or according to the media stream data in the multimedia content, and so on. Descriptions are given below.
  • when acquiring the multimedia tag according to the attribute information of the multimedia content, the acquired multimedia tag can be attached to the related content of the multimedia content.
  • the attribute information of the multimedia content is generally some fixed information about the multimedia content, and may also be some basic information of the multimedia content, for example, the classification of the multimedia content.
  • the attribute information of the multimedia content may include multiple types, for example, may include at least one of the following: the playing time of the multimedia content, the geographic location corresponding to the multimedia content, the author of the multimedia content, the playing object of the multimedia content, and the guest corresponding to the multimedia content , the co-host of the live broadcast of the multimedia content, the classification of the content of the multimedia content, the title of the multimedia content, the brief introduction of the content of the multimedia content, the poster image of the multimedia content, the sponsor of the multimedia content, etc.
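For illustration only, the attribute information enumerated above can be represented as a structured record whose non-empty fields become candidate sources of multimedia tags. The field names below are paraphrases of the list in the text, not the API of any actual system, and only a subset of the listed attributes is shown.

```python
# Hypothetical sketch: a structured record of multimedia attribute
# information, with the non-empty fields extracted as candidate tag sources.

from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MultimediaAttributes:
    play_time: Optional[str] = None          # playing time of the content
    location: Optional[str] = None           # geographic location
    author: Optional[str] = None             # author / anchor
    content_category: Optional[str] = None   # content classification
    title: Optional[str] = None
    sponsor: Optional[str] = None

def attribute_tags(attrs: MultimediaAttributes):
    """Keep only the attribute fields that were actually set."""
    return {k: v for k, v in asdict(attrs).items() if v is not None}

attrs = MultimediaAttributes(play_time="20:00", location="Beijing",
                             content_category="music")
print(attribute_tags(attrs))
# → {'play_time': '20:00', 'location': 'Beijing', 'content_category': 'music'}
```

Each retained field then serves as one of the information sources discussed in the bullets that follow, e.g. matching `play_time` against the user's time of interest.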
  • the playing time of the multimedia content can be used to determine whether the user account can watch the multimedia content at this time.
  • the playback time of the multimedia content is used as an information source for obtaining the multimedia tag of the multimedia content, and can serve as a basis for matching the multimedia tag with the interest tag of the user account.
  • for example, if the time of interest of the user account is night, then multimedia content played at night may be considered to be of interest to the user account.
  • the geographic location corresponding to the multimedia content may be a specific country and city, or may be a specific indoor or outdoor location.
  • the geographic location of the multimedia content can also be used as an information source for obtaining the multimedia tag of the multimedia content, and can be used as a basis for matching the multimedia tag with the interest tag of the user account.
  • the author of the multimedia content is the anchor of the multimedia content, that is, the protagonist in the process of the multimedia content.
  • the author of the multimedia content, as an information source for obtaining the multimedia tag of the multimedia content, can serve as a basis for matching the multimedia tag with the interest tag of the user account.
  • for example, if the anchor that the user account is interested in is XX, the multimedia content of anchor XX can be considered to be of interest to the user account.
  • the playing object of the multimedia content may be the main target involved in the process of the multimedia content, it may be a specific object, or it may be virtual knowledge, viewpoint, and the like.
  • the playing object of the multimedia content is used as an information source for obtaining the multimedia tag of the multimedia content, and can serve as a basis for matching the multimedia tag with the interest tag of the user account.
  • for example, if the user account is interested in mobile phones and the playing object of the multimedia content is a mobile phone, the multimedia content can be considered to be of interest to the user account.
  • the guest corresponding to the multimedia content is the person invited in the multimedia content to help out.
  • the guest corresponding to the multimedia content is used as the source of information for obtaining the multimedia label of the multimedia content, and can be used as a basis for matching the multimedia label with the interest label of the user account.
  • the user account trusts the authority of a certain person. Therefore, when the person acts as a guest corresponding to the multimedia content, the multimedia content can be considered to be of interest to the user account.
  • the co-hosts of the multimedia content live broadcast are the persons who participate in the live broadcast remotely in the multimedia content.
  • the co-hosts of the multimedia content live broadcast are used as an information source for obtaining the multimedia tags of the multimedia content, and can serve as a basis for matching the multimedia tags with the interest tags of the user account. For example, if the user account likes a celebrity, then when the celebrity is a co-host corresponding to the multimedia content, the multimedia content can be considered to be of interest to the user account.
  • the content classification of the multimedia content, as relatively fixed attribute information of the multimedia content, is used to determine the general style of the multimedia content. It can be used directly or indirectly as an information source for obtaining the multimedia tags of the multimedia content, and can serve as a basis for matching the multimedia tags with the interest tags of the user account; for example, if the user account likes movies and television, multimedia content classified as movies can be considered to be of interest to the user account.
  • the title of the multimedia content can briefly describe the general content of the multimedia content.
  • the title of the multimedia content is used as an information source for obtaining the multimedia tag of the multimedia content, and can serve as a basis for matching the multimedia tag with the interest tag of the user account.
  • for example, if the title of the multimedia content relates to a certain academic problem that the user account is solving, it can be considered that the multimedia content is of interest to the user account.
  • the content introduction of the multimedia content can describe the main content of the multimedia content in more detail.
  • the content introduction of the multimedia content is used as an information source for obtaining the multimedia tag of the multimedia content, and can serve as a basis for matching the multimedia tag with the interest tag of the user account; for example, if the content introduction of the multimedia content includes a specific problem solution that the user account is looking for, the multimedia content can be considered to be of interest to the user account.
  • the poster image of the multimedia content can also reflect the key points of the multimedia content to a certain extent.
  • the poster image of the multimedia content is used as an information source for obtaining the multimedia tag of the multimedia content, and can serve as a basis for matching the multimedia tag with the interest tag of the user account. For example, if the key points in the poster image are of concern to the user account, the multimedia content may be considered to be of interest to the user account.
  • the sponsor of the multimedia content can reflect the authority of the multimedia content and the accuracy of the information. Therefore, when the user account is interested in the sponsor, it can be considered that the multimedia content is of interest to the user account.
  • the attribute information of the multimedia content can be of various types, and the above attribute information can be obtained directly from the display interface of the multimedia content, or can be obtained through intelligent identification by the system when acquiring the multimedia content.
  • for example, the broadcast time of the multimedia content can be specified in the multimedia content, the invited guests corresponding to the multimedia content and the sponsor of the multimedia content can be set, and the title and content classification of the multimedia content can be set. At this time, by analyzing the multimedia content, the system can identify users who follow the anchor and are interested in the title and content of the multimedia content, and can analyze the video of the multimedia content to select a frame image with a suitable theme as the poster image of the multimedia content, and so on.
  • directly obtaining the attribute information through the multimedia content interface is conducive to accurately obtaining the attribute information of the multimedia content.
  • the multimedia content is divided into tags of different categories and then processed, making the push of multimedia content more convenient and accurate.
  • the multimedia tag of the multimedia content when obtaining the multimedia tag of the multimedia content, it may also be obtained according to the media stream data in the multimedia content. For example, the media stream data in the multimedia content may be identified first to obtain the refined content of the multimedia content; then, the multimedia tag of the multimedia content may be obtained according to the refined content of the multimedia content.
  • the detailed content of the multimedia content is obtained by identifying the media stream data in the multimedia content, thereby obtaining the multimedia tag of the multimedia content. Since the media stream data in the multimedia content can contain more detailed and rich information about the multimedia content, the multimedia tag can be obtained according to the detailed content in the multimedia content. Therefore, through the above method of identifying media stream data in multimedia content, the obtained multimedia tags can be more intuitive and accurate.
  • the media stream data in the multimedia content may include multiple types of data, for example, image data, audio data, text data, etc. of the video stream.
  • semantic analysis of the text content or voice content can be performed using artificial intelligence processing methods to segment the text or voice content and remove some less critical words, for example, some modal particles, auxiliary words, and the like, to obtain multiple multimedia tags related to the key text or speech content.
  • for example, the multimedia tag of the multimedia content may include a musical instrument tag (saxophone) and a music type tag (light music).
  • frame-by-frame image analysis of the video stream can be performed using artificial intelligence to extract features and compare and analyze the images, and the main content expressed by the multimedia content is determined based on the proportion of the image data of people or objects in the video, so as to obtain a plurality of corresponding multimedia tags.
  • for example, if the video content is an exciting fighting segment in a game, the multimedia tag of the multimedia content includes a game tag and a fighting tag.
  • based on recognition technologies for the images, audio data, text data, and the like of the video stream, the specific content contained in the multimedia content is identified and obtained, so as to further refine the labels of the work.
  • the above media stream data is only an enumeration, not an exhaustive list.
  • the above media stream data can be used alone in the embodiments of the present disclosure, and can also be used in combination in the embodiments of the present disclosure. When used alone, it can be regarded as a single media stream data, and when used in combination, it can be regarded as media stream data including multiple forms.
  • multiple sub-level tags under the multimedia tag can be obtained, that is, more detailed tags, which can be someone, something, a song, a product, and so on.
  • for example, if the video content is a food sale, the multimedia tag of the multimedia content includes e-commerce-food; further subdivided, the multimedia tags of the multimedia content can be e-commerce-food-snack products, and classified in more detail, the multimedia tags of the multimedia content include e-commerce-food-snack products-potato chips, and so on.
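The hierarchical refinement described above can be modeled as a tag path of increasingly fine sub-labels. This sketch is illustrative only: it uses a "/" separator instead of the hyphens written in the text so that the hyphen inside "e-commerce" stays unambiguous, and it adds a hypothetical fallback rule that matches at the finest level the user's interest tags cover.

```python
# Hypothetical sketch: one hierarchical multimedia tag stored as a path of
# sub-labels, expanded into its prefix levels for coarse-to-fine matching.

def tag_levels(tag_path, sep="/"):
    """Expand a hierarchical tag into all of its prefix levels, coarse to fine."""
    parts = tag_path.split(sep)
    return [sep.join(parts[:i]) for i in range(1, len(parts) + 1)]

def best_match(interest_tags, content_tag_path):
    """Finest prefix level of the content tag present in the interest tags, or None."""
    for level in reversed(tag_levels(content_tag_path)):
        if level in interest_tags:
            return level
    return None

print(tag_levels("e-commerce/food/snacks/potato chips"))
print(best_match({"e-commerce/food"}, "e-commerce/food/snacks/potato chips"))
# → 'e-commerce/food' (falls back to the coarser level the user is interested in)
```

Storing tags as paths this way lets a user interested only in "e-commerce/food" still match content whose finest label is "potato chips".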
  • an optional implementation manner is provided. It should be noted that, in this optional implementation manner, the multimedia content is described by taking a live preview as an example.
  • in a related-art push processing method, the live broadcast preview is pushed directly to all followed users after being recorded, and the recommended content is some information about the live broadcast that is the same for every user; no personalized recommendation or matching is performed for different users and different live content. It is therefore difficult to guarantee the user's interest in the live broadcast, and difficult for users to find the live content they really want to watch in the live broadcast preview.
  • in this optional implementation, the tags of the live broadcast preview and the tags that the browsing user is interested in are obtained, and at the same time a personalized live preview is generated in combination with the user and the follow list, so as to improve the effective exposure and clicks of the preview and effectively improve the accuracy and efficiency of live broadcast previews.
  • the tags of the live broadcast preview and the interest tags of the browsing user are used to provide a more suitable display of the live broadcast preview, wherein Fig. 9 is a schematic diagram of live preview display provided according to an exemplary optional embodiment. As shown in Fig. 9, this optional implementation manner includes the following processing.
  • the tags can include the attribute tags of the live preview, and the content tags extracted after word segmentation;
  • the attribute tags of the live broadcast preview include geographic location tags (which can be based on country division rules, province division rules, region division rules, etc.), content tags (which can be based on preset division rules, such as food, music, film and television, news, etc.), author tags (which can be divided according to the specific type of anchor, such as food anchors, news anchors, celebrity anchors, and music anchors), the live broadcast title, the live content introduction, and so on.
  • for example, if it is recognized that the video is a masterpiece of a Chinese music singer, the music label of the video work is Chinese music singer-masterpiece; or, if it is recognized that the video is an explanation video of a skill game, the game label of the live preview work is skill game, and so on.
  • if the live broadcast preview includes an e-commerce live broadcast, the labels of the work can be further refined.
  • for example, if the live broadcast sells mobile phones, the product label of the preview work is smart product-mobile phone; if it sells food, the product label of the preview work is food-snack product, and so on.
  • users who co-host in the live broadcast can also be obtained to form a live broadcast user label.
  • S2: collect information on the target user and find the key points of interest to the target user;
  • count the behavior data of the target user over a past period of time, including data on operating video works, such as data on watching, sharing, liking, and commenting on video works, so as to obtain the target user's point-of-interest tags; the time span of the counted behavior data of the target user can be set according to actual needs.
  • for example, if the number of times the user views a work is x, shares it is y, likes it is z, and comments on it is k, the corresponding statistical quantity is x*a1+y*a2+z*a3+k*a4, where each view or share of the work adds one to the corresponding count, and a1, a2, a3, and a4 are the corresponding weight factors, and so on.
  • the user's portrait data can be used to represent information such as age, gender, and geographical location.
  • the sample data includes: the user's operation data and portrait data for the video, and the user's point-of-interest label;
  • a corresponding user point-of-interest recognition model is obtained; the user point-of-interest recognition model is used to output the target user's point-of-interest label for video works according to the input operation data of the target user on videos and the user portrait data;
  • the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present disclosure, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and contains several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods of the various embodiments of the present disclosure.
  • FIG. 10 is a device block diagram of an interface processing device according to an exemplary embodiment.
  • the device includes a first display module 101 , a first acquisition module 102 , a first determination module 103 and a second display module 104 , and the device will be described below.
  • the first display module 101 is configured to display the current display interface associated with the user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; the first acquisition module 102 is connected to the above-mentioned first display module 101 and is used to acquire the associated information of the user account; the first determination module 103 is connected to the above-mentioned first acquisition module 102 and is used to determine, according to the associated information, the multimedia tag associated with the user account, the multimedia tag being used to identify the multimedia content; the second display module 104 is connected to the above-mentioned first determination module 103 and is used to display, on the current display interface, the touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.
  • the first display module 101, the first acquisition module 102, the first determination module 103, and the second display module 104 correspond to steps S21 to S24 in Embodiment 1; the examples and application scenarios implemented by the above modules and their corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above. It should be noted that, as part of the device, the above modules can run in the computer terminal 10 provided in Embodiment 1.
  • the second display module 104 includes a first display unit and a second display unit, wherein the first display unit is configured to display the video area and the comment area on the first layer of the current display interface, and the second display unit is configured to display the touch object corresponding to the multimedia tag on the second layer of the current display interface, wherein the second layer is located above the first layer.
  • the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located above the control image sublayer, the control function sublayer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display the elements of the multimedia content.
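The split between a function sublayer that handles triggers and an image sublayer that only draws can be sketched as below. All class and field names are hypothetical; the disclosure specifies only the stacking order and the division of responsibilities.

```python
# Sketch of the second layer: a control function sublayer (hit-testing /
# trigger handling) stacked above a control image sublayer (drawing only).
# Names are hypothetical illustrations of the structure described above.

class ControlImageSublayer:
    def draw(self):
        # Only renders elements of the multimedia content; never handles input.
        return "render multimedia element (e.g. live-stream cover)"

class ControlFunctionSublayer:
    def __init__(self, on_trigger):
        self.on_trigger = on_trigger
    def handle_touch(self):
        # Sits on top, so it receives the trigger operation first.
        return self.on_trigger()

class SecondLayer:
    """Function sublayer above image sublayer, per the described z-order."""
    def __init__(self, on_trigger):
        self.image = ControlImageSublayer()
        self.function = ControlFunctionSublayer(on_trigger)

layer = SecondLayer(on_trigger=lambda: "show multimedia content")
print(layer.function.handle_touch())
```

Separating drawing from input handling this way lets the image sublayer be redrawn (for example, when the preview changes) without touching the trigger logic.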
  • the interface processing device further includes: a first receiving module and a third display module, wherein the first receiving module is connected to the above-mentioned second display module 104 and is used to receive a trigger operation on the touch object after the touch object corresponding to the multimedia tag is displayed on the current display interface; the third display module is connected to the first receiving module and is used to display, in response to the trigger operation, the multimedia content identified by the multimedia tag.
  • the third display module includes: a third display unit, configured to, when the touch object is a preview window, respond to the first trigger operation on the preview window by playing the multimedia content identified by the multimedia tag in the preview window.
  • the third display module further includes: a fourth display unit or a fifth display unit, wherein the fourth display unit is configured to respond to the second trigger operation on the preview window by jumping to the multimedia display interface based on the link address corresponding to the second trigger operation and playing the multimedia content on the multimedia display interface; the fifth display unit is used to respond to the third trigger operation on the preview window by switching, based on the interface switching instruction corresponding to the third trigger operation, to the multimedia browsing interface, wherein the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the above-mentioned first determination module includes: a first determination unit and a second determination unit, wherein the first determination unit is configured to, if the associated information includes behavior data information, determine the interest tag of the user account according to the behavior data information, wherein the interest tag is one or more of a plurality of classification tags; the second determination unit is connected to the above-mentioned first determination unit and is used to determine the multimedia tag associated with the user account by searching for a multimedia tag matching the interest tag.
  • the above-mentioned first determination unit includes: a first acquisition subunit, a processing subunit, and a second acquisition subunit, wherein the first acquisition subunit is used to separately acquire the statistical quantity of each classification tag among the plurality of classification tags, where the statistical quantity is obtained by counting, according to the behavior data information, the operation behaviors of the user account under each classification tag, each behavior having a corresponding weight; the processing subunit is connected to the above-mentioned first acquisition subunit and is used to sort the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; the second acquisition subunit is connected to the above-mentioned processing subunit and is used to acquire the interest tags of the user account according to the sorting result.
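The acquire-sort-select pipeline of the first determination unit can be sketched as follows. The tag names, behavior weights, and the choice of keeping the top two tags are illustrative assumptions.

```python
# Sketch of the first determination unit: a weighted count per classification
# tag, a sort by that count, and the top-N tags kept as interest tags.
# Tags, weights, and N are hypothetical.

BEHAVIOR_WEIGHTS = {"watch": 1.0, "share": 3.0, "like": 2.0, "comment": 2.5}

def interest_tags(behavior_by_tag, top_n=2):
    """behavior_by_tag: {tag: {behavior: count}} -> top-N tags by weighted count."""
    scored = {
        tag: sum(c * BEHAVIOR_WEIGHTS.get(b, 0) for b, c in counts.items())
        for tag, counts in behavior_by_tag.items()
    }
    ranking = sorted(scored, key=scored.get, reverse=True)  # sorting result
    return ranking[:top_n]                                  # interest tags

data = {
    "sports": {"watch": 8, "like": 2},    # 8*1.0 + 2*2.0 = 12.0
    "cooking": {"watch": 3, "share": 4},  # 3*1.0 + 4*3.0 = 15.0
    "travel": {"watch": 1},               # 1.0
}
print(interest_tags(data))  # ['cooking', 'sports']
```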
  • the above-mentioned first determination module includes: a processing unit, configured to, when the associated information includes attribute information, input the attribute information into the point-of-interest recognition model and output the multimedia tag associated with the user account, wherein the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include: attribute information of a user account and the multimedia tag associated with that user account.
  • Embodiments of the present disclosure may provide an electronic device, and the electronic device may be a terminal or a server.
  • the electronic device, as a terminal, may be any computer terminal device in a group of computer terminals.
  • the foregoing terminal may also be a terminal device such as a mobile terminal.
  • the above-mentioned terminal may be located in at least one network device among a plurality of network devices in the computer network.
  • Fig. 11 is a structural block diagram of a terminal according to an exemplary embodiment.
  • the terminal may include: one or more processors 111 (only one is shown in the figure) and a memory 112 for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement any one of the above interface processing methods.
  • the memory can be used to store software programs and modules, such as the program instructions/modules corresponding to the interface processing method and device in the embodiments of the present disclosure; the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the above-mentioned interface processing method.
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory may further include a memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the processor can call the information and application programs stored in the memory through the transmission device to perform the following steps: display the current display interface associated with the user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; acquire the associated information of the user account; determine, according to the associated information, the multimedia tag associated with the user account, the multimedia tag being used to identify multimedia content; and display, on the current display interface, the touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.
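The four steps the processor performs can be sketched end to end as below. The data shapes and helper names are hypothetical; the tag-determination step is a trivial stand-in for the matching and model logic described elsewhere in the disclosure.

```python
# End-to-end sketch of the four steps: show the interface, fetch the
# account's associated info, resolve a multimedia tag, attach a touch
# object for it. All data and helper names are hypothetical.

def determine_tag(associated_info):
    # Trivial stand-in for the tag-matching / recognition-model logic.
    return associated_info.get("top_interest", "general") + "_live"

def process_interface(user_account, associated_info):
    interface = {"video_area": f"video for {user_account}",
                 "comment_area": []}                      # step 1: display
    tag = determine_tag(associated_info)                  # steps 2-3: tag
    interface["touch_object"] = {                         # step 4: touch object
        "tag": tag,
        "on_trigger": f"show multimedia content tagged {tag!r}",
    }
    return interface

ui = process_interface("user_42", {"top_interest": "cooking"})
print(ui["touch_object"]["tag"])  # cooking_live
```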
  • the above-mentioned processor can also execute the program code of the following steps: displaying the touch object corresponding to the multimedia label on the current display interface, including: displaying the video area and the comment area on the first layer of the current display interface; The touch object corresponding to the multimedia label is displayed on the second layer of the current display interface, wherein the second layer is located above the first layer.
  • the above-mentioned processor can also execute the program code of the following steps: the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located above the control image sublayer, and the control function sublayer The layer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display the elements of the multimedia content.
  • the above-mentioned processor may also execute the program code of the following steps: after the touch object corresponding to the multimedia label is displayed on the current display interface, further include: receiving a trigger operation on the touch object; responding to the trigger operation, displaying The multimedia content identified by the multimedia tag.
  • the above-mentioned processor can also execute the program code of the following steps: responding to the trigger operation and displaying the multimedia content identified by the multimedia tag includes: when the touch object is a preview window, responding to the first trigger operation on the preview window by playing the multimedia content identified by the multimedia tag in the preview window.
  • the above-mentioned processor can also execute the program code of the following steps: responding to the second trigger operation on the preview window by jumping to the multimedia display interface based on the link address corresponding to the second trigger operation and playing the multimedia content on the multimedia display interface; or, responding to the third trigger operation on the preview window by switching, based on the interface switching instruction corresponding to the third trigger operation, to the multimedia browsing interface, wherein the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the above-mentioned processor may also execute the program code of the following steps: determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes behavior data information, determining the interest tags of the user account according to the behavior data information, wherein the interest tags are one or more of multiple classification tags; and determining the multimedia tags associated with the user account by searching for multimedia tags matching the interest tags.
  • the above-mentioned processor can also execute the program code of the following steps: determining the interest tags of the user account according to the behavior data information includes: separately acquiring the statistical quantity of each classification tag among the plurality of classification tags, wherein the statistical quantity is obtained by counting, according to the behavior data information, the operation behaviors of the user account under each classification tag, each behavior having a corresponding weight; sorting the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and acquiring the interest tags of the user account according to the sorting result.
  • the above-mentioned processor may also execute the program code of the following steps: determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes attribute information, inputting the attribute information into the point-of-interest recognition model and outputting the multimedia tag associated with the user account, wherein the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include: attribute information of a user account and the multimedia tag associated with that user account.
  • FIG. 12 is a structural block diagram of a server according to an exemplary embodiment.
  • the server 120 may include: one or more processing components 121 (only one is shown in the figure), a memory 122 for storing instructions executable by the processing components 121, a power supply component 123 for providing power, a network interface 124 for communicating with an external network, and an I/O input/output interface 125 for data transmission with the outside; wherein the processing component 121 is configured to execute instructions to implement any one of the above interface processing methods.
  • the memory can be used to store software programs and modules, such as the program instructions/modules corresponding to the interface processing method and device in the embodiments of the present disclosure; the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implements the above-mentioned interface processing method.
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory may further include a memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the processing component can call the information and application programs stored in the memory through the transmission device to perform the following steps: display the current display interface associated with the user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; acquire the associated information of the user account; determine, according to the associated information, the multimedia tag associated with the user account, the multimedia tag being used to identify multimedia content; and display, on the current display interface, the touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.
  • the above-mentioned processing component can also execute the program code of the following steps: displaying the touch object corresponding to the multimedia label on the current display interface, including: displaying the video area and the comment area on the first layer of the current display interface; The touch object corresponding to the multimedia label is displayed on the second layer of the current display interface, wherein the second layer is located above the first layer.
  • the above-mentioned processing component can also execute the program code of the following steps: the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located above the control image sublayer, and the control function sublayer The layer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display the elements of the multimedia content.
  • the above-mentioned processing component can also execute the program code of the following steps: after the touch object corresponding to the multimedia label is displayed on the current display interface, it also includes: receiving a trigger operation on the touch object; responding to the trigger operation, displaying The multimedia content identified by the multimedia tag.
  • the above-mentioned processing component can also execute the program code of the following steps: responding to the trigger operation and displaying the multimedia content identified by the multimedia tag includes: when the touch object is a preview window, responding to the first trigger operation on the preview window by playing the multimedia content identified by the multimedia tag in the preview window.
  • the above-mentioned processing component can also execute the program code of the following steps: responding to the second trigger operation on the preview window by jumping to the multimedia display interface based on the link address corresponding to the second trigger operation and playing the multimedia content on the multimedia display interface; or, responding to the third trigger operation on the preview window by switching, based on the interface switching instruction corresponding to the third trigger operation, to the multimedia browsing interface, wherein the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
  • the above-mentioned processing component may also execute the program code of the following steps: determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes behavior data information, determining the interest tags of the user account according to the behavior data information, wherein the interest tags are one or more of multiple classification tags; and determining the multimedia tags associated with the user account by searching for multimedia tags matching the interest tags.
  • the above-mentioned processing component can also execute the program code of the following steps: determining the interest tags of the user account according to the behavior data information includes: separately acquiring the statistical quantity of each classification tag among the plurality of classification tags, wherein the statistical quantity is obtained by counting, according to the behavior data information, the operation behaviors of the user account under each classification tag, each behavior having a corresponding weight; sorting the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and acquiring the interest tags of the user account according to the sorting result.
  • the above-mentioned processing component can also execute the program code of the following steps: determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes attribute information, inputting the attribute information into the point-of-interest recognition model and outputting the multimedia tag associated with the user account, wherein the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include: attribute information of a user account and the multimedia tag associated with that user account.
  • FIG. 11 and FIG. 12 are only schematic; for example, the above-mentioned terminal can also be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID), a PAD, or other terminal equipment.
  • FIG. 11 and FIG. 12 do not limit the structure of the above-mentioned electronic device. For example, it may also include more or fewer components than those shown in FIG. 11 and FIG. 12 (such as network interfaces, display devices, etc.), or have a configuration different from that shown in FIG. 11 and FIG. 12.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk and optical data storage devices, etc.
  • the above-mentioned computer-readable storage medium may be used to store the program code executed by the interface processing method provided in the above-mentioned embodiments.
  • the above-mentioned computer-readable storage medium may be located in any computer terminal in the group of computer terminals in the computer network, or in any mobile terminal in the group of mobile terminals.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: display the current display interface associated with the user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; acquire the associated information of the user account; determine, according to the associated information, the multimedia tag associated with the user account, the multimedia tag being used to identify multimedia content; and display, on the current display interface, the touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: displaying the touch object corresponding to the multimedia tag on the current display interface includes: displaying the video area and the comment area on the first layer of the current display interface; and displaying the touch object corresponding to the multimedia tag on the second layer of the current display interface, wherein the second layer is located above the first layer.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located in the control image Above the sublayer, the control function sublayer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display the elements of the multimedia content.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: after the touch object corresponding to the multimedia label is displayed on the current display interface, further include: receiving the touch object A trigger operation; in response to the trigger operation, the multimedia content identified by the multimedia tag is displayed.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: responding to the trigger operation and displaying the multimedia content identified by the multimedia tag includes: when the touch object is a preview window, responding to the first trigger operation on the preview window by playing the multimedia content identified by the multimedia tag in the preview window.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: in response to the second trigger operation on the preview window, based on the link address corresponding to the second trigger operation, jump into The multimedia display interface, and playing multimedia content on the multimedia display interface; or, in response to the third trigger operation on the preview window, based on the interface switching instruction corresponding to the third trigger operation, switch to the multimedia browsing interface, wherein the multimedia browsing interface displays The multimedia list includes multimedia content.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: determining a multimedia label associated with a user account according to associated information, including: when the associated information includes behavior data information , determining the interest tag of the user account according to the behavior data information, wherein the interest tag is one or more of a plurality of classification tags; and determining the multimedia tag associated with the user account by searching for a multimedia tag matching the interest tag.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: determining the interest tags of the user account according to the behavior data information includes: separately acquiring the statistical quantity of each classification tag among the plurality of classification tags, wherein the statistical quantity is obtained by counting, according to the behavior data information, the operation behaviors of the user account under each classification tag, each behavior having a corresponding weight; sorting the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and acquiring the interest tags of the user account according to the sorting result.
  • the computer-readable storage medium is configured to store program codes for performing the following steps: determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes attribute information, inputting the attribute information into the point-of-interest recognition model and outputting the multimedia tag associated with the user account, wherein the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include: the attribute information of the user account and the multimedia tag associated with that user account.
  • a computer program product is also provided.
  • when the computer program in the computer program product is executed by the processor of the electronic device, the electronic device can execute any one of the interface processing methods described above.
  • the disclosed technical content can be realized in other ways.
  • the device embodiments described above are only illustrative; for example, the division into units is only a division by logical function, and there may be other division methods in actual implementation: multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of units or modules may be in electrical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the various embodiments of the present disclosure.
  • the aforementioned storage media include: USB flash drives, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), removable hard disks, magnetic disks, optical discs, and other media that can store program code.

Abstract

The present disclosure relates to an interface processing method, device, and electronic device. The method includes: displaying a current display interface associated with a user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; acquiring associated information of the user account; determining, according to the associated information, a multimedia tag associated with the user account, the multimedia tag being used to identify multimedia content; and displaying, on the current display interface, a touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.

Description

Interface processing method and device
Cross-reference to related applications
The present disclosure is based on the Chinese patent application with application number 202110593725.6 filed on May 28, 2021, and claims priority to that Chinese patent application; the entire disclosure of the above Chinese patent application is hereby incorporated by reference as part of the present disclosure.
Technical Field
The present disclosure relates to the field of computers, and in particular to an interface processing method, device, and electronic device.
Background
At present, in Internet video applications, when a user wants to watch certain multimedia content (for example, a live stream), the user needs to browse some other multimedia content under the user account corresponding to that multimedia content and enter the multimedia content (live stream) currently being conducted by that user account through a control on the interface playing that other multimedia content. Likewise, when operating on multimedia content (for example, making a reservation or pushing it), the operation can only be performed after the multimedia content is created, by entering the display interface that plays the multimedia content or displaying the browsing page of the multimedia content, which makes the operations rather complicated.
Summary
The present disclosure provides an interface processing method, device, and electronic device. The technical solution of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, an interface processing method is provided, including: displaying a current display interface associated with a user account, the current display interface including: a video area for playing a video and a comment area for displaying comments on the played video; acquiring associated information of the user account; determining, according to the associated information, a multimedia tag associated with the user account, the multimedia tag being used to identify multimedia content; and displaying, on the current display interface, a touch object corresponding to the multimedia tag, the touch object being used to receive a trigger operation so as to trigger the display of the multimedia content identified by the multimedia tag.
In some embodiments, displaying the touch object corresponding to the multimedia tag on the current display interface includes: displaying the video area and the comment area on a first layer of the current display interface; and displaying the touch object corresponding to the multimedia tag on a second layer of the current display interface, wherein the second layer is located above the first layer.
In some embodiments, the second layer includes a control function sublayer and a control image sublayer, wherein the control function sublayer is located above the control image sublayer, the control function sublayer is used to respond to the trigger operation on the touch object, and the control image sublayer is used to display elements of the multimedia content.
In some embodiments, after displaying the touch object corresponding to the multimedia tag on the current display interface, the method further includes: receiving a trigger operation on the touch object; and displaying, in response to the trigger operation, the multimedia content identified by the multimedia tag.
In some embodiments, displaying the multimedia content identified by the multimedia tag in response to the trigger operation includes: when the touch object is a preview window, playing, in response to a first trigger operation on the preview window, the multimedia content identified by the multimedia tag in the preview window.
In some embodiments, the method further includes: in response to a second trigger operation on the preview window, jumping to a multimedia display interface based on a link address corresponding to the second trigger operation, and playing the multimedia content on the multimedia display interface; or, in response to a third trigger operation on the preview window, switching to a multimedia browsing interface based on an interface switching instruction corresponding to the third trigger operation, wherein the multimedia browsing interface displays a multimedia list, and the multimedia list includes the multimedia content.
In some embodiments, determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes behavior data information, determining an interest tag of the user account according to the behavior data information, wherein the interest tag is one or more of a plurality of classification tags; and determining the multimedia tag associated with the user account by searching for a multimedia tag matching the interest tag.
In some embodiments, determining the interest tag of the user account according to the behavior data information includes: separately acquiring a statistical quantity of each classification tag among the plurality of classification tags, wherein the statistical quantity is obtained by counting, according to the behavior data information, the operation behaviors of the user account under each classification tag, each behavior having a corresponding weight; sorting the plurality of classification tags according to the statistical quantity of each classification tag to obtain a sorting result; and acquiring the interest tag of the user account according to the sorting result.
In some embodiments, determining the multimedia tag associated with the user account according to the associated information includes: when the associated information includes attribute information, inputting the attribute information into a point-of-interest recognition model and outputting the multimedia tag associated with the user account, wherein the point-of-interest recognition model is obtained through machine training on multiple sets of data, and the multiple sets of data include: attribute information of a user account and a multimedia tag associated with that user account.
根据本公开实施例的第二方面,提供一种界面处理装置,包括:第一显示模块,用于显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;第一获取模块,用于获取所述用户账户的关联信息;第一确定模块,用于根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;第二显示模块,用于在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
在一些实施例中,所述第二显示模块包括:第一显示单元,用于在所述当前显示界面的第一图层显示所述视频区域和所述评论区域;第二显示单元,用于在所述当前显示界面的第二图层显示与所述多媒体标签对应的触控对象,其中,所述第二图层位于所述第一图层的上方。
在一些实施例中,所述第二图层包括控件功能子层和控件图像子层,其中,所述控件功能子层位于所述控件图像子层的上方,所述控件功能子层用于响应对所述触控对象的所述触发操作,所述控件图像子层用于显示所述多媒体内容的元素。
在一些实施例中,所述装置还包括:第一接收模块,用于在所述当前显示界面显示与所述多媒体标签对应的触控对象之后,接收对所述触控对象的触发操作;第三显示模块,用于响应所述触发操作,显示所述多媒体标签标识的所述多媒体内容。
在一些实施例中,所述第三显示模块包括:第三显示单元,用于在所述触控对象为预览小窗的情况下,响应对所述预览小窗的第一触发操作,在所述预览小窗播放所述多媒体标签标识的所述多媒体内容。
在一些实施例中,所述第三显示模块还包括:第四显示单元,用于响应对所述预览小窗的第二触发操作,基于所述第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在所述多媒体展示界面播放所述多媒体内容;或者,第五显示单元,用于响应对所述预览小窗的第三触发操作,基于所述第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,所述多媒体浏览界面展示有多媒体列表,所述多媒体列表中包括所述多媒体内容。
在一些实施例中,所述第一确定模块包括:第一确定单元,用于在所述关联信息包括行为数据信息的情况下,根据所述行为数据信息确定所述用户账户的兴趣标签,其中,所述兴趣标签为多个分类标签中的一个或多个;第二确定单元,用于通过查找与所述兴趣标签匹配的多媒体标签,确定与所述用户账户关联的多媒体标签。
在一些实施例中,所述第一确定单元包括:第一获取子单元,用于分别获取所述多个分类标签中每个分类标签的统计数量,其中,所述统计数量根据所述行为数据信息在每个分类标签下,对所述用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;处理子单元,用于依据每个分类标签的统计数量,对所述多个分类标签进行排序,得到排序结果;第二获取子单元,用于依据所述排序结果,获取所述用户账户的兴趣标签。
在一些实施例中,所述第一确定模块包括:处理单元,用于在所述关联信息包括属性信息的情况下,将所述属性信息输入兴趣点识别模型,输出与所述用户账户关联的多媒体标签,其中,所述兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
根据本公开实施例的第三方面,提供一种电子设备,包括:处理器;用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现任一项所述的界面处理方法。
根据本公开实施例的第四方面,提供一种计算机可读存储介质,当所述计算机可读存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行任一项所述的界面处理方法。
根据本公开实施例的第五方面,提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现任一项所述的界面处理方法。
通过在根据用户账户的关联信息确定与该用户账户关联的多媒体标签后,在用户账户的当前显示界面显示与该多媒体标签对应的触控对象,可以基于对该触控对象的触发操作,展示该多媒体标签标识的多媒体内容。由于多媒体标签是与用户账户关联的,因此,该多媒体标签标识的多媒体内容与该用户账户是有针对性的,有效地提高了多媒体内容向对应用户账户展示的准确性。而且,用户账户对应的用户依据该多媒体标签对应的触控对象即可获取该多媒体标签标识的多媒体内容,解决了相关技术中,在需要获取多媒体内容时,存在获取渠道单一的技术问题。另外,用户账户对应的用户仅需要对在当前显示界面显示的该触控对象进行触发操作,即可实现对多媒体标签标识的多媒体内容进行观看,操作简单,有效提高了用户观看体验。
附图说明
图1是根据一示例性实施例示出的一种用于实现界面处理方法的计算机终端的硬件结构框图。
图2是根据一示例性实施例示出的界面处理方法一的流程图。
图3是根据一示例性实施例示出的界面处理方法二的流程图。
图4是根据一示例性实施例示出的界面处理方法三的流程图。
图5是根据一示例性实施例示出的界面处理方法四的流程图。
图6是根据一示例性实施例示出的界面处理方法五的流程图。
图7是根据一示例性实施例示出的界面处理方法六的流程图。
图8是根据一示例性实施例示出的界面处理方法七的流程图。
图9是根据一示例性可选实施方式提供的直播预告展示的示意图。
图10是根据一示例性实施例示出的界面处理装置的装置框图。
图11是根据一示例性实施例示出的终端的装置框图。
图12是根据一示例性实施例示出的服务器的结构框图。
具体实施方式
为了使本领域普通技术人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
根据本公开实施例,提出了一种界面处理方法的方法实施例。需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
本公开实施例1所提供的方法实施例可以在移动终端、计算机终端或者类似的运算装置中执行。图1是根据一示例性实施例示出的用于实现界面处理方法的计算机终端(或移动设备)的硬件结构框图。如图1所示,计算机终端10(或移动设备)可以包括一个或多个(图中采用12a、12b,……,12n来示出)处理器12(处理器12可以包括但不限于微处理器MCU或可编程逻辑器件FPGA等的处理装置)、用于存储数据的存储器14、以及用于通信功能的传输装置。除此以外,还可以包括:显示器、输入/输出接口(I/O接口)、通用串行总线(USB)端口(可以作为BUS总线的端口中的一个端口被包括)、网络接口、电源和/或相机。本领域普通技术人员可以理解,图1所示的结构仅为示意,其并不对上述电子装置的结构造成限定。例如,计算机终端10还可包括比图1中所示更多或者更少的组件,或者具有与图1所示不同的配置。
应当注意到的是上述一个或多个处理器12和/或其他数据处理电路在本文中通常可以被称为“数据处理电路”。该数据处理电路可以全部或部分的体现为软件、硬件、固件或其他任意组合。此外,数据处理电路可为单个独立的处理模块,或全部或部分的结合到计算机终端10(或移动设备)中的其他元件中的任意一个内。如本公开实施例中所涉及到的,该数据处理电路作为一种处理器控制(例如与接口连接的可变电阻终端路径的选择)。
存储器14可用于存储应用软件的软件程序以及模块,如本公开实施例中的界面处理方法对应的程序指令/数据存储装置,处理器12通过运行存储在存储器14内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的应用程序的界面处理方法。存储器14可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器14可进一步包括相对于处理器12远程设置的存储器,这些远程存储器可以通过网络连接至计算机终端10。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
传输装置用于经由一个网络接收或者发送数据。上述的网络具体实例可包括计算机终端10的通信供应商提供的无线网络。在一个实例中,传输装置包括一个网络适配器(Network Interface Controller,NIC),其可通过基站与其他网络设备相连从而可与互联网进行通讯。在一个实例中,传输装置可以为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
显示器可以例如触摸屏式的液晶显示器(LCD),该液晶显示器可使得用户能够与计算机终端10(或移动设备)的用户界面进行交互。
此处需要说明的是,在一些可选实施例中,上述图1所示的计算机设备(或移动设备)可以包括硬件元件(包括电路)、软件元件(包括存储在计算机可读介质上的计算机代码)、或硬件元件和软件元件两者的结合。应当指出的是,图1仅为特定具体实例的一个实例,并且旨在示出可存在于上述计算机设备(或移动设备)中的部件的类型。
在上述运行环境下,本公开提供了如图2所示的界面处理方法。图2是根据一示例性实施例示出的界面处理方法一的流程图,如图2所示,该方法用于上述的计算机终端中,包括以下步骤。
在步骤S21中,显示与用户账户关联的当前显示界面,当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
在步骤S22中,获取用户账户的关联信息;
在步骤S23中,根据关联信息确定与用户账户关联的多媒体标签,多媒体标签用于标识多媒体内容;
在步骤S24中,在当前显示界面显示与多媒体标签对应的触控对象,触控对象用于接收触发操作,以触发展示多媒体标签标识的多媒体内容。
采用上述步骤,通过在根据用户账户的关联信息确定与该用户账户关联的多媒体标签后,在用户账户的当前显示界面显示与该多媒体标签对应的触控对象,可以基于对该触控对象的触发操作,展示该多媒体标签标识的多媒体内容。由于多媒体标签是与用户账户关联的,因此,该多媒体标签标识的多媒体内容与该用户账户是有针对性的,有效地提高了多媒体内容向对应用户账户展示的准确性。而且,用户账户对应的用户依据该多媒体标签对应的触控对象即可获取该多媒体标签标识的多媒体内容,解决了相关技术中,在需要获取多媒体内容时,存在获取渠道单一的技术问题。另外,用户账户对应的用户仅需要对在当前显示界面显示的该触控对象进行触发操作,即可实现对多媒体标签标识的多媒体内容进行观看,操作简单,有效提高了用户观看体验。
图3是根据一示例性实施例示出的界面处理方法二的流程图,如图3所示,该方法除包括图2所示的步骤外,该步骤S24还包括以下步骤。
步骤S31,在当前显示界面的第一图层显示视频区域和评论区域;
步骤S32,在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。
在一个或多个可选实施例中,在当前显示界面显示与多媒体标签对应的触控对象时,可以采用多种方式,例如,可以在当前显示界面的第一图层显示视频区域和评论区域;在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。即在显示当前显示界面的第一图层时,在该第一图层的上方叠加第二图层,在该第二图层显示该触控对象。采用不同图层显示不同内容的方式,操作简单,便于实现。另外,由于第二图层位于第一图层的上方,因此,用户账户能够比较容易地注意到该触控对象。
在一个或多个可选实施例中,第二图层可以包括控件功能子层和控件图像子层,其中,控件功能子层位于控件图像子层的上方,控件功能子层用于响应对触控对象的触发操作,控件图像子层用于显示多媒体内容的元素。因此,为实现控制功能和显示功能的区分,可以在第二图层中划分不同的子图层。控件功能子层可以位于控件图像子层的上方,且该控件功能子层可以是透明层,既不影响控件图像子层对多媒体内容元素的展示,又使得在对该控件功能子层进行操作时,实现的是对该触控对象的操作。需要说明的是,控件图像子层上显示的多媒体内容的元素可以包括多种,例如,可以是该多媒体内容的标题,可以是该多媒体内容的类型,还可以是该多媒体内容的推送信息等。
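举例说明,上述图层与子图层的层级关系可以用如下示意性代码表示(仅为帮助理解的假设性示例,其中的图层名称、字段均为示意,并非本公开的限定实现):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    z_index: int        # 数值越大,图层越靠上
    interactive: bool   # 是否响应触发操作

def hit_test(layers):
    """返回最上层且可响应触发操作的图层(即接收触发操作的命中目标)。"""
    for layer in sorted(layers, key=lambda l: l.z_index, reverse=True):
        if layer.interactive:
            return layer
    return None

layers = [
    Layer("first_layer_video_comment", z_index=0, interactive=False),   # 第一图层:视频区域和评论区域
    Layer("control_image_sublayer", z_index=1, interactive=False),      # 控件图像子层:显示多媒体内容的元素
    Layer("control_function_sublayer", z_index=2, interactive=True),    # 控件功能子层:透明,响应触发操作
]
print(hit_test(layers).name)  # 输出: control_function_sublayer
```

由于控件功能子层位于最上方且可交互,触发操作首先命中该子层,实现的即是对触控对象的操作。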
图4是根据一示例性实施例示出的界面处理方法三的流程图,如图4所示,该方法除包括图2所示的步骤外,该流程还包括以下步骤。
步骤S41,接收对触控对象的触发操作;
步骤S42,响应触发操作,显示多媒体标签标识的多媒体内容。
在一个或多个可选实施例中,在当前显示界面显示与多媒体标签对应的触控对象之后,还包括:接收对触控对象的触发操作;响应触发操作,显示多媒体标签标识的多媒体内容。其中,响应不同的触发操作,采用不同的方式显示多媒体标签标识的多媒体内容。例如,触发操作可以包括多种,例如,可以是对该触控对象的单击操作,对该触控对象的双击操作,对该触控对象的长按操作,等。
图5是根据一示例性实施例示出的界面处理方法四的流程图,如图5所示,该方法除包括图4所示的步骤外,在触控对象为预览小窗的情况下,该步骤S42还包括以下步骤。
步骤S51,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容;
步骤S52,响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容;
步骤S53,响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
在一个或多个可选实施例中,在响应触发操作,显示多媒体标签标识的多媒体内容时,在触控对象为预览小窗的情况下,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容。其中,该预览小窗可以是显示大小小于预定比例的窗口,该预览小窗用于对该多媒体内容进行预览。需要说明的是,该预览小窗可以固定位于该当前显示界面的多个位置,例如,可以位于当前显示界面中第一图层的视频区域,也可以显示在该第一图层的评论区域。该预览小窗也可以以移动的方式位于该当前显示界面的多个位置,从而便于引起用户注意。通过响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容,即在不对预览小窗进行操作时,在预览小窗中是不会播放该多媒体标签标识的多媒体内容的,从而有效避免对当前显示界面上视频区域播放视频的打扰。
例如,在该第一触发操作为单击操作的情况下,响应对预览小窗的单击操作,在预览小窗播放多媒体标签标识的多媒体内容。即在接收到对该预览小窗的单击操作时,才在预览小窗播放该多媒体标签标识的多媒体内容。为便于对该多媒体内容的播放进行控制,还可以通过对该作为触控对象的预览小窗设置对应的逻辑控制,例如,在接收到一次该单击操作时,在预览小窗播放该多媒体内容,而在再一次接收到该单击操作时,暂停在预览小窗播放该多媒体内容。
在一个或多个可选实施例中,在触控对象为预览小窗的情况下,还可以响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容。
例如,在该第二触发操作为双击操作的情况下,响应对预览小窗的双击操作,基于该双击操作对应的链接地址,跳转进入多媒体展示界面,播放该多媒体内容。其中,在该多媒体展示界面播放该多媒体内容时,还可以展示与该多媒体内容相关的信息,例如,可以展示该多媒体内容的操作(例如,对该多媒体内容的点赞,转发,评论等);也可以展示对该多媒体内容的评论(例如,显示对该多媒体内容的简要说明,在对该简要说明进行操作时,展开对该多媒体内容的评论信息等)。需要说明的是,也可以通过设置对应的控制逻辑,比如,在该多媒体展示界面接收到双击操作时,退回到包括该预览小窗的当前显示界面。其中,再次双击操作也仅为一种举例,其它可以退回到当前显示界面的操作也可以应用于本申请。
在一个或多个可选实施例中,在触控对象为预览小窗的情况下,还可以响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
例如,在该第三触发操作为长按操作的情况下,响应对该预览小窗的长按操作,切换到该多媒体浏览界面,该多媒体浏览界面展示有多媒体列表,该多媒体列表包括该多媒体内容,以及与该多媒体内容相关的多个其它多媒体内容,比如,可以是与该多媒体内容连载的多个其它多媒体内容。采用多媒体列表的形式对该多媒体内容以及相关的多个其它多媒体内容进行展示,能够便于用户对感兴趣的多媒体内容进行选择,提高用户对多媒体内容的附加观看体验。
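举例说明,上述第一、第二、第三触发操作的分发逻辑可以用如下示意性代码表示(仅为假设性示例,其中的操作名称、内容标识与链接地址均为示意):

```python
def handle_preview_trigger(trigger, preview):
    """根据对预览小窗的不同触发操作,返回对应的展示方式(示意)。"""
    if trigger == "single_tap":   # 第一触发操作:在预览小窗内播放多媒体内容
        return ("play_in_preview", preview["content_id"])
    if trigger == "double_tap":   # 第二触发操作:基于链接地址跳转进入多媒体展示界面
        return ("open_display_page", preview["link_url"])
    if trigger == "long_press":   # 第三触发操作:切换到展示多媒体列表的多媒体浏览界面
        return ("open_browse_list", preview["list_id"])
    return ("ignore", None)       # 其他操作不做处理

preview = {"content_id": "media-001", "link_url": "app://media/001", "list_id": "list-7"}
print(handle_preview_trigger("double_tap", preview))  # ('open_display_page', 'app://media/001')
```

单击、双击、长按仅为触发操作的一种举例,实际实现中可以为各触发操作配置不同的响应逻辑。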
在一个或多个可选实施例中,上述用户账户的关联信息是指与用户账户相关联的信息。用户账户的关联信息可以包括多种,可以是用户账户相对动态的信息,也可以是用户账户相对静态的信息。例如,可以是用户账户对应的行为数据信息,也可以是用户账户对应的属性信息。需要说明的是,不管该关联信息是行为数据信息,还是属性信息均是经过用户账户授权的数据。例如,可以是在经过用户账户接受关于授权协议的情况下获取的数据。不同的关联信息,获取方式可以不同。例如,用户账户对应的行为数据信息可以通过用户账户授权的行为记录数据获得,该行为记录数据可以是一段历史时间段的数据,也可以是当前场景下的行为记录数据。用户账户对应的属性信息可以是用户账户授权的历史登记信息,也可以是用户账户授权的更新信息,等。
在一个或多个可选实施例中,根据关联信息确定与用户账户关联的多媒体标签时,根据不同的关联信息,确定与用户账户关联的多媒体标签的方式也不同。下面分别说明。
图6是根据一示例性实施例示出的界面处理方法五的流程图,如图6所示,该方法除包括图2所示的步骤外,该步骤S23还包括以下步骤。
步骤S61,在关联信息包括行为数据信息的情况下,根据行为数据信息确定用户账户的兴趣标签,其中,兴趣标签为多个分类标签中的一个或多个;
步骤S62,通过查找与兴趣标签匹配的多媒体标签,确定与用户账户关联的多媒体标签。
通过上述步骤,依据用户账户的行为数据信息获取该用户账户的兴趣标签,可以使得用户账户的兴趣标签是基于用户账户的行为数据所获取的,用户账户的行为数据是真实反映用户账户的喜好的,因此,能够极大程度上反映用户账户感兴趣的内容,以针对性地对各用户账户推送各自感兴趣的多媒体内容。
在一个或多个可选实施例中,用户账户的行为数据可以在用户账户在历史一段时间内发生的历史记录数据中获取。用户账户的行为数据分为许多类型,以用户账户为用户为例,行为数据可以是多种,例如,可以包括用户的以下操作:点击操作,点赞操作,关注操作,留言操作,收藏操作,分享操作,等等。在统计上述用户账户的行为数据时,可以根据实际应用的需求,统计用户账户某段时间内的行为数据,也可以将每个行为数据赋予不同的权重进行统计。例如,想要调查第一季度,用户的购买偏好,可以选择性地抽取用户,统计用户在一月到三月的历史行为数据进行分析,为一些行为数据赋予较高的权重。用户的行为数据可以是基于多媒体内容,在直播中产生的或者是直接在直播中产生的;用户的行为可以是基于多媒体内容,对多媒体内容进行点击、点赞、关注、留言、收藏、分享等操作产生,也可以是基于直播,对直播进行点击、点赞、关注、留言、收藏、分享等操作产生,从而得到用户的兴趣标签,以此得到用户感兴趣的内容。用户账户的行为数据还可以按社交行为数据分类,分为社交行为数据与非社交行为数据。社交行为数据可以是直播选择连麦的用户,直播连麦的用户中关注的用户,直播连麦的用户中互关的用户,直播中用户的关注列表,等等。非社交行为数据则可以为其他数据。通过统计用户的大量行为数据,可以极大程度地、真实地反映出用户所感兴趣的多媒体类型,保证了所得兴趣标签的真实性,进而保证了用户所看到的直播推送是用户感兴趣的,符合个人偏好的。需要说明的是,上述用户账户,即对应观看多媒体内容的用户。上述行为数据的分类也仅是一种举例,并非穷举,所有基于用户账户所执行的行为数据均可以认为属于本申请实施例的一部分。
需要说明的是,上述所指的兴趣标签作为多个分类标签中的一个或多个,依据分类标签进行分类时的级别不同,兴趣标签也可以是包括多个层级的标签。例如,在对一个大类进行过细分,或者再细分时,该兴趣标签可以包括该大类标签,也可以包括该大类标签对应的细分标签,还可以包括再细分的再细分标签。
在依据用户账户的行为数据,获取用户账户的兴趣标签时,可以采取多种方式,例如,包括如下方法。图7是根据一示例性实施例示出的界面处理方法六的流程图,如图7所示,该流程除包括图6所示的步骤外,在该步骤S61中还包括:
在步骤S71中,分别获取多个分类标签中每个分类标签的统计数量,其中,统计数量根据行为数据信息在每个分类标签下,对用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;
在步骤S72中,依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果;
在步骤S73中,依据排序结果,获取用户账户的兴趣标签。
通过上述步骤,在多个分类标签中,依据排序结果得到用户账户的兴趣标签,使得统计的用户账户的行为数据过多时,涉及大量、多种的分类标签时,可以将排序结果靠前的分类标签作为用户账户的兴趣标签,能够获得更为准确的兴趣标签的结果。
在一个或多个可选实施例中,在统计该用户账户产生的行为数据时,统计在不同分类标签下各个行为发生的数量。例如,可以统计用户在时间标签下、地理位置标签下、作者标签下、嘉宾标签下、内容分类标签下、标题标签下、内容简介标签下、海报图像标签下各个行为的数量;在统计用户账户的行为数据时,可以针对每个行为数据设置一定的权重,根据该权重,统计不同行为的数量,进而得到在不同分类标签下的数量。由于行为数据充分,可以对多方位信息进行汇总统计,这样统计所得的分类标签也会较为准确。通过分析用户账户的行为数据,在了解用户账户兴趣偏好的基础上,可以极大程度地提高多媒体内容推送的精准率,增加用户粘度。
在一个或多个可选实施例中,在针对不同行为数据设置一定的权重进行数量统计时,因为不同行为数据所能表达出的用户偏好程度不同,需要对每项行为数据的权重值依据实际需求进行相应的设置。例如,可以采用如下方式:确定在某分类标签的直播下,用户观看过的直播中,点击量为x,分享量为y,点赞量为z,评论量为k,收藏量为r,关注操作为p,用户每对该分类标签下的直播多进行一次对应操作,行为数据数量对应加一,行为数据对应的统计数量可以设置为,x*a1+y*a2+z*a3+k*a4+r*a5+p*a6,其中a1、a2、a3、a4、a5、a6为对应的权重因子,即可得到在各标签下行为数据的统计数量,得到统计后排序在前N位的分类标签,作为用户的兴趣标签。
在一个或多个可选实施例中,依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果。以对直播产生的行为数据为例,由于直播内容多种多样,用户可能在浏览视频的过程中对多种不同类型的直播都产生了一定的行为数据,但是用户不是对所有浏览过的直播都感兴趣,因此,需要对用户在过去一段时间内,观看直播中的产品等等操作所产生的大量行为数据进行分析,依照大量的行为数据统计用户在各分类标签下的行为数据数量,并对该用户的行为数据得到的分类标签进行排序,在每个分类标签中所统计的数量越多,说明用户对该分类标签下的直播产生了更多的行为数据,以此来判断出用户感兴趣的标签类型。依据排序结果,获取用户账户的兴趣标签,以保证推送给用户的多媒体内容类型符合用户的偏好,是用户感兴趣的内容。
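举例说明,上述加权统计与排序取前N位的过程可以用如下示意性代码表示(仅为假设性示例,其中的权重取值、行为类型与标签名称均为示意):

```python
from collections import defaultdict

# 各操作行为对应的权重因子(对应 a1..a6,取值仅为示意)
WEIGHTS = {"click": 1, "like": 2, "comment": 3, "collect": 4, "share": 5, "follow": 6}

def interest_tags(behavior_log, top_n=2):
    """behavior_log 为 [(分类标签, 行为类型), ...];
    按权重累加每个分类标签的统计数量,返回排序在前 top_n 位的兴趣标签。"""
    score = defaultdict(int)
    for tag, action in behavior_log:
        score[tag] += WEIGHTS.get(action, 0)
    ranked = sorted(score, key=score.get, reverse=True)
    return ranked[:top_n]

log = [("美食", "click"), ("美食", "share"), ("美食", "like"),
       ("音乐", "click"), ("影视", "follow"), ("影视", "comment")]
print(interest_tags(log))  # ['影视', '美食']
```

其中"影视"的统计数量为 6+3=9,"美食"为 1+5+2=8,"音乐"为 1,故排序前两位为影视、美食。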
在一个或多个可选实施例中,在根据关联信息确定与用户账户关联的多媒体标签时,还可以依据人工智能的方式,获取用户账户的兴趣标签。图8是根据一示例性实施例示出的界面处理方法七的流程图,如图8所示,该方法除包括图2所示的所有步骤外,在该步骤S23中还包括:
在步骤S81中,在关联信息包括属性信息的情况下,将属性信息输入兴趣点识别模型,输出与用户账户关联的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
通过上述步骤,用户账户的属性信息能够在一定程度上体现用户账户的兴趣,习惯,爱好特点。因此,将用户账户的属性信息输入至兴趣点识别模型识别出与用户账户关联的多媒体标签,通过人工智能的方式,能够基于大量训练数据,快速、准确地识别出用户账户关联的多媒体标签,使得获得与用户账户关联的多媒体标签更为快速、准确。
在一个或多个可选实施例中,该用户账户的属性信息可以包括多种信息,例如,用户账户对应的用户的年龄,性别,所在的地理位置,喜爱、点赞、收藏、评论的多媒体内容信息,关注的用户账户的信息,在多媒体内容播放时购买的商品信息,等等。
在一个或多个可选实施例中,将用户账户的属性信息输入兴趣点识别模型,输出得到该用户账户的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。基于多组训练样本进行机器训练,得到兴趣点识别模型。采用用户账户的属性信息和与该用户账户关联的多媒体标签作为训练样本进行机器训练,得到兴趣点识别模型。由于兴趣点识别模型训练时可以包括用户账户的属性信息中所包括的多种信息,因而能够对应用户账户更为丰富的兴趣点,因而使得训练更全面,精准。因此,后续采用训练得到的兴趣点识别模型对用户账户的属性信息进行识别时,降低了兴趣点识别的误识别率,提升了兴趣点识别效率与准确率。
在一个或多个可选实施例中,该兴趣点识别模型可以基于多种算法,例如,基于机器学习算法,比如,基于神经网络模型算法,等等。即,该兴趣点识别模型可以是基于机器学习算法的兴趣点识别模型,例如,可以是基于神经网络模型算法的兴趣点识别模型。需要说明的是,上述基于多种识别网络的兴趣点识别模型仅仅是一种举例,没有一一举出的基于其他识别网络的兴趣点识别模型也可应用于本申请。同样通过训练,上述各种兴趣点识别模型也可以对用户账户的属性信息进行识别,得到与用户账户关联的多媒体标签。基于不同的识别网络的兴趣点识别模型,可以根据不同的需要选择,提供了选择不同方法的多样性,使用起来更加灵活,便捷,大大地提高了用户账户的属性信息识别的适用性。
在一个或多个可选实施例中,通过对兴趣点识别模型进行优化训练,得到优化后的兴趣点识别模型。由于用户账户对应的属性信息可能会不断地进行更新,因此,可以及时地依据更新的属性信息对兴趣点识别模型进行优化训练。不断地对兴趣点识别模型进行优化训练,使兴趣点识别模型更优,能够更为准确地识别与用户账户关联的多媒体标签,从而使兴趣点识别模型更能理解用户的需求,极大地提升了用户的使用体验。
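举例说明,"以多组(属性信息,多媒体标签)数据为依据,对输入的属性信息输出关联的多媒体标签"的思路可以用如下示意性代码表示(仅为假设性示例,这里以最简单的集合相似度匹配代替真实的机器学习或神经网络模型,属性取值与标签名称均为示意):

```python
def jaccard(a, b):
    """两个属性集合的 Jaccard 相似度:交集大小 / 并集大小。"""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_media_tag(samples, attrs):
    """samples 为 [(属性集合, 多媒体标签), ...],相当于训练用的"多组数据";
    以与输入属性最相似的样本的标签作为输出(真实实现可为训练好的神经网络等)。"""
    return max(samples, key=lambda s: jaccard(s[0], attrs))[1]

training = [
    ({"女", "18-25", "点赞美妆"}, "美妆"),
    ({"男", "18-25", "收藏游戏"}, "游戏"),
    ({"女", "26-35", "购买零食"}, "美食"),
]
print(predict_media_tag(training, {"男", "18-25", "收藏游戏", "关注主播A"}))  # 游戏
```

该示例仅用于说明"属性信息→关联标签"的映射关系;实际的兴趣点识别模型通过多组数据进行机器训练得到,识别能力远强于简单的相似度匹配。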
需要指出的是,上述多媒体内容的形式可以是多样的,例如,可以是直播视频,可以是直播预告,还可以是历史直播回看视频等。上述所列举的各种形式仅仅为一种举例,其它用于推送或者预定的媒体内容均属于本申请的一部分。
在一个或多个可选实施例中,多媒体内容可以是多种场景下的多媒体内容,例如,可以是使用应用程序浏览到的多媒体内容,也可以是在网页上观看的多媒体内容,等等。多媒体内容可以依据多种不同的方式获得,例如,可以是通过应用程序或网页推送的方式获得,可以是扫描二维码获得指定的多媒体内容,还可以是通过点入分享链接获取,等等。多媒体内容的形式也有多种,包括:视频,图片,文本,语音,等等。在大多数的场景中,多数多媒体内容采用上述多种形式结合的方式,例如,在视频中穿插文本内容,即可以将多媒体内容的主题或者是重要提醒事项以炫彩动态字体的方式呈现在视频上方,等等。具体的场景与方式可以根据多媒体内容的具体内容进行选择。
需要说明的是,该多媒体标签可以表示多媒体内容中所涉及的所有相关内容,该相关内容的形式也可以多种,例如,可以是静态的,比如,该直播的一些固定属性信息;也可以是动态的,比如,用于表示该多媒体内容可以变更的对象等。
在一个或多个可选实施例中,获取多媒体标签时,可以采取多种方式,例如,可以根据多媒体内容的属性信息获取该多媒体标签,也可以根据多媒体内容中的媒体流数据获取多媒体标签,等等。下面分别说明。
在根据多媒体内容的属性信息,获取该多媒体标签时,能够使得获取的多媒体标签贴合于该多媒体内容的相关内容。多媒体内容的属性信息一般是关于多媒体内容的一些固定信息,也可以是多媒体内容的一些基本信息,例如,多媒体内容的分类等。其中,多媒体内容的属性信息可以包括多种,例如,可以包括以下至少之一:多媒体内容的播放时间,多媒体内容对应的地理位置,多媒体内容的作者,多媒体内容的播放对象,多媒体内容对应的嘉宾,多媒体内容直播的连麦者,多媒体内容的内容分类,多媒体内容的标题,多媒体内容的内容简介,多媒体内容的海报图像,多媒体内容的主办方,等等。
多媒体内容的播放时间,可以用于确定用户账户是否能在此时间观看该多媒体内容。需要说明的是,多媒体内容的播放时间作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户感兴趣的时间是晚上,因此,该多媒体内容可以认为是该用户账户感兴趣的。多媒体内容对应的地理位置可以是具体的国家和城市,也可以是具体室内或者户外等。多媒体内容的地理位置也可以作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户感兴趣的地理位置是海边,因此,该多媒体内容可以认为是该用户账户感兴趣的。多媒体内容的作者即是多媒体内容的主播,即多媒体内容过程中的主角。多媒体内容的作者作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户感兴趣的主播是XX,因此,该多媒体内容可以认为是该用户账户感兴趣的。多媒体内容的播放对象可以是多媒体内容过程中主要涉及的目标,可以是具体的物体,也可以是虚拟的知识,观点等。多媒体内容的播放对象作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户感兴趣的是手机,因此,该多媒体内容可以认为是该用户账户感兴趣的。多媒体内容对应的嘉宾是多媒体内容中邀请的可以助阵的人物,多媒体内容对应的嘉宾作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户信任某一人物的权威,因此,该人物作为多媒体内容对应的嘉宾时,该多媒体内容即可以认为是用户账户感兴趣的。多媒体内容直播的连麦者是多媒体内容中远程参与直播的人物,多媒体内容直播的连麦者作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户喜爱某一明星,当该明星作为多媒体内容对应的连麦者时,该多媒体内容即可以认为是用户账户感兴趣的。多媒体内容的内容分类作为多媒体内容的相对固定的属性信息,用于确定多媒体内容的大致风格,可以用于直接或间接地作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,用户账户喜好影视一类,因此,电影则可以认为是该用户账户感兴趣的。多媒体内容的标题可以简单描述多媒体内容的大致内容,多媒体内容的标题作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,多媒体内容标题为用户账户正在攻克的某一学术难题,则可以认为该多媒体内容是用户账户感兴趣的。多媒体内容的内容简介可以更为细致地描述多媒体内容的主要内容,多媒体内容的内容简介作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点,例如,多媒体内容的内容简介所包括的具体问题解决办法正是用户账户要寻找的,该多媒体内容可以认为是用户账户感兴趣的。多媒体内容的海报图像也可以在一定程度上体现多媒体内容的关键点,多媒体内容的海报图像作为获取多媒体内容的多媒体标签的信息来源,可以作为多媒体标签与用户账户的兴趣标签进行匹配的依据点。例如,海报图像中的关键点是用户账户关注的,则该多媒体内容可以认为是用户账户感兴趣的。多媒体内容的主办方可以体现多媒体内容的权威性以及信息的准确性,因此,当用户账户对主办方感兴趣时,则可以认为该多媒体内容是用户账户感兴趣的。
如上述,多媒体内容的属性信息可以是多种类型的,上述属性信息的获取可以是从多媒体内容的显示界面直接获取的,也可以是系统在获取多媒体内容时智能识别得到的。例如,主播制作多媒体内容时,多媒体内容中可以指定多媒体内容的播出时间,邀请的多媒体内容对应的嘉宾与多媒体内容的主办方,设置多媒体内容的标题与多媒体内容的内容分类;此时,系统可以通过分析多媒体内容用以分析关注该主播以及对该多媒体内容标题与内容感兴趣的用户、分析多媒体内容的视频用以选取合适主题的某帧图像作为多媒体内容的海报图像,等等。获取多媒体内容的属性信息时,通过多媒体内容界面直接获取属性信息有利于精准地得到多媒体内容的属性信息,系统智能获取则能够更加智能地分析多媒体内容,有利于达到快速识别的目的,基于将多媒体内容分为不同类别的标签再进行处理,使得多媒体内容的推送更为方便、准确。
在一个或多个可选实施例中,获取多媒体内容的多媒体标签时,还可以依据多媒体内容中的媒体流数据来获取。例如,可以先识别多媒体内容中的媒体流数据,得到多媒体内容的细化内容;之后,根据多媒体内容的细化内容,获取多媒体内容的多媒体标签。
通过上述处理,采用识别多媒体内容中媒体流数据的方式,得到多媒体内容的细节内容,从而得到多媒体内容的多媒体标签。由于多媒体内容中的媒体流数据可以包含更为详细,丰富的关于多媒体内容的信息,因此,能够根据多媒体内容中的详细内容得到多媒体标签。因此,通过上述识别多媒体内容中媒体流数据的方式,能够使得到的多媒体标签更为直观,准确。
在一个或多个可选实施例中,多媒体内容中的媒体流数据,可以包括多种类型的数据,例如,视频流的图像数据,音频数据,文本数据,等等。比如,当媒体流中数据包含文字内容或是语音内容时,对文字内容或是语音内容进行语义分析,可以采取人工智能的处理方式,对文本或语音内容进行分词,去除一些不太关键的词,例如,一些语气词,助词,等等,得出多个与关键文本或语音内容相关的多媒体标签。举例说明,对多媒体内容进行识别时,识别到一首萨克斯的轻音乐,该多媒体内容的多媒体标签则包含乐器标签-萨克斯,音乐类型标签-轻音乐。又例如,当媒体流中数据包含视频流的图像数据时,对视频流进行逐帧图像分析,可以采用人工智能的方式,对图像进行特征提取、比对分析,根据人或物在视频中的比重,判断该多媒体内容所表达的主要内容,得出多个对应的多媒体标签。例如,对多媒体内容的逐帧图片进行识别时,识别到视频内容为游戏中一段精彩打斗片段,该多媒体内容的多媒体标签则包含游戏标签-打斗标签。通过基于视频流的图像、音频数据、文本数据等等识别技术,识别获取多媒体内容中含有的具体内容,以进一步细化作品标签。有效提高得到的多媒体标签准确性,优化分类多媒体标签的效果。需要说明的是,上述媒体流数据也仅仅是一种列举,并非穷举。上述媒体流数据可以单独用于本公开实施例,也可以结合应用于本公开实施例中,单独应用时可以认为单一的媒体流数据,结合应用时可以认为属于包括多种形式的媒体流数据。
在一个或多个可选实施例中,在识别多媒体内容中的媒体流数据,得到多个多媒体标签时,根据多媒体内容的细化内容,可以得到多媒体标签下的多个子级标签,即更加细致的标签,可以是某人,某物,某歌曲,某商品,等等。例如,对多媒体内容进行识别时,识别到视频内容为美食特卖,该多媒体内容的多媒体标签则包含电商类-美食,但是对多媒体内容进行识别的过程中,识别到为薯片产品,则该多媒体内容的多媒体标签则可以为电商类-美食-零食产品,再进行细致分类,则该多媒体内容的多媒体标签则包含电商类-美食-零食产品-薯片,等等。在对多媒体内容的媒体流数据分析时,往往得到越细致化、越具象的内容,越容易将多媒体内容推送给感兴趣的用户,有效地提高多媒体内容投放的准确率和效率,保证了观看用户对多媒体内容是感兴趣的,增加用户的粘度。
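举例说明,上述由识别结果逐级细化多媒体标签的过程可以用如下示意性代码表示(仅为假设性示例,标签树的内容与层级均为示意):

```python
# 由媒体流识别出的关键词到逐级细化标签路径的映射(内容仅为示意)
TAG_TREE = {
    "薯片": ["电商类", "美食", "零食产品", "薯片"],
    "手机": ["电商类", "智能产品", "手机"],
    "萨克斯": ["音乐", "乐器", "萨克斯"],
}

def refine_tags(detected_keywords):
    """根据媒体流数据识别出的关键词,映射为逐级细化的多媒体标签路径。"""
    return ["-".join(TAG_TREE[k]) for k in detected_keywords if k in TAG_TREE]

print(refine_tags(["薯片"]))  # ['电商类-美食-零食产品-薯片']
```

实际实现中,关键词可以来自对视频流图像、音频数据、文本数据的识别,标签树也可以按需扩展为更多层级。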
基于上述实施例及可选实施例,提供了一种可选实施方式。需要说明的是,在本可选实施方式中,多媒体内容以直播预告为例进行说明。
在相关技术中,直播预告的推送处理方式,均是在录制完直播预告后直接推送给所有已关注的用户,推荐的内容也都是对于本场直播的一些信息,所有用户能够看到的都是一样的,并不会去针对不同的用户以及不同的直播内容做个性化的推荐和匹配。很难保证用户对于本场直播的感兴趣程度,用户很难在直播预告中找到自己真正想看的直播内容。
基于上述问题,在本公开可选实施方式中,根据直播的类型和用户的偏好,获取直播预告的标签以及浏览用户感兴趣标签,同时结合用户和关注列表,生成个性化的直播预览,提高预览的有效曝光和点击,能够有效地提高直播预告推送的准确率和效率。
在本公开可选实施方式提供的界面处理方法中,利用直播预告的标签以及浏览用户的感兴趣标签,提供更加合适的直播预告展示,其中,图9是根据一示例性可选实施方式提供的直播预告展示的示意图,如图9所示,该可选实施方式包括以下处理。
S1,分析直播预告的内容,确定直播预告的标签;
1,确定直播预告的标签;
标签可以包括直播预告的属性标签,进行分词处理后抽取的内容标签;
需要说明的是,在直播预告的属性标签中,包括地理位置(可以根据国家的划分规则,省份的划分规则,地区的划分规则,等)、内容标签(可以根据预设的划分规则标签,例如美食、音乐、影视、新闻等)、作者标签(可以根据具体的主播类型划分,例如美食主播、新闻主播、明星主播、音乐主播),直播的标题,直播的内容简介,等等。
1.1细化直播预告的标签;
结合基于视频流的图像、音频数据,等各种识别技术,识别获取直播预告视频流中含有的具体内容(例如,音乐内容以及游戏内容),进一步细化作品标签。
举例说明,识别到该直播预告演唱的歌曲是华语乐坛歌手的代表作,视频作品的音乐标签是华语乐坛歌手-代表作;或者识别到该视频是技巧类游戏的讲解视频,直播预告作品的游戏标签就是技巧类游戏,等等。
1.2细化直播预告中作品的标签;
直播预告中包含电商类直播,进一步细化作品标签。
举例说明,识别到该直播预告中卖的商品为手机,该预告作品的商品标签是智能产品-手机,如果是卖美食,该预告作品的商品标签是美食-零食产品,等等。
需要说明的是,最后可以获取该直播预告中,直播连麦的用户,来形成直播用户标签。
S2,对目标用户进行信息采集,查找目标用户感兴趣的关键点;
需要说明的是,对目标用户进行信息采集时,可以采用多种方式,例如,可以提取历史时间段内保存的周期性采集的数据,也可以实时采集数据。即,根据目标用户的历史行为数据,获取该目标用户的兴趣点标签。具体处理时可以采用下述多种方法,下面进行举例说明:
方法一:
依赖目标用户自身的行为数据生成对应的兴趣点标签;
即,统计目标用户在过去一段时间的行为数据,包括:操作视频作品的数据,比如,观看、分享、点赞、评论的视频作品的数据,从而得到目标用户的兴趣点标签;其中,统计目标用户的行为数据的时间可以根据实际需求进行设置。
1,在一些实施例中,统计目标用户操作过的作品,在不同标签下的分类数量;
即,包括在不同地理位置标签下、内容标签下、作者标签下以及不同数据标签下的作品数量,等等。
2,统计目标用户在网络平台发生的行为,形成目标用户的属性标签,例如,支付标签;
2.1,在统计作者标签、地理标签,等等这类行为数据直接关联的作品数量时,可以针对用户观看、分享、点赞、评论行为设置不同的权重,来统计数量;
举例说明,目标用户操作的属于地理标签为某个市的作品中,用户观看的作品为x、分享为y、点赞为z、评论为k,对应的统计数量为x*a1+y*a2+z*a3+k*a4。其中,用户观看并分享该作品,该作品的观看数量与分享数量就各加1次,a1、a2、a3、a4为对应的权重因子,以此类推。
2.2,提取用户的关注列表,看是否和该场直播的连麦用户有关注或者是互关等社交属性标签。
3,对于不同类型的标签,统计得到排序在前N位的作品标签,作为用户的兴趣点标签。
方法二:
通过画像数据生成对应的兴趣点标签;
1,对用户设置用户的画像数据;
需要说明的是,用户的画像数据可以用于表征年龄、性别、所在地理位置等信息。
2,获取用户针对视频操作以及该用户的画像数据;
3,根据机器学习算法或者采用现有的神经网络模型,训练采集的样本数据(样本数据包括:用户针对视频的操作和画像数据,用户的兴趣点标签),得到对应的用户兴趣点识别模型;其中,用户兴趣点识别模型,用于根据输入的目标用户的用户针对视频的操作数据以及用户画像数据,输出对应的目标用户对视频作品的兴趣点标签;
4,在每个目标用户浏览视频作品展示页时或打开视频作品平台时,使用用户兴趣点识别模型,获取该目标用户的兴趣点标签。
S3,根据对目标用户生成的兴趣点标签,在直播预告作品标签中找到相匹配的标签,向该目标用户展示相匹配的直播预告。
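举例说明,S3中兴趣点标签与直播预告标签的匹配可以用如下示意性代码表示(仅为假设性示例,其中的预告标识与标签均为示意):

```python
def match_previews(user_interest_tags, previews):
    """previews 为 [(预告标识, 标签集合), ...];
    返回标签与用户兴趣点标签存在交集的直播预告,用于向该用户展示。"""
    interest = set(user_interest_tags)
    return [pid for pid, tags in previews if interest & set(tags)]

previews = [
    ("live-1", {"美食", "零食产品"}),
    ("live-2", {"游戏", "技巧类"}),
    ("live-3", {"新闻"}),
]
print(match_previews(["美食", "音乐"], previews))  # ['live-1']
```

实际实现中还可以按匹配标签的数量或权重对预告排序,优先展示匹配程度更高的直播预告。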
通过上述可选实施方式,可以达到以下效果:
1)有效地提高直播预告投放的准确率和效率,保证了观看用户对直播是感兴趣的,符合他们的个人偏好的,增加用户的粘度。
2)对直播预告的内容进行了有针对性的推送,提高了直播预告投放的准确率。
3)个性化的直播预览,有效地提升了用户体验。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本公开并不受所描述的动作顺序的限制,因为依据本公开,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本公开所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本公开各个实施例的方法。
根据本公开实施例,还提供了一种用于实施上述界面处理方法的装置,图10是根据一示例性实施例示出的界面处理装置的装置框图。参照图10,该装置包括第一显示模块101,第一获取模块102,第一确定模块103和第二显示模块104,下面对该装置进行说明。
第一显示模块101,用于显示与用户账户关联的当前显示界面,当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;第一获取模块102,连接至上述第一显示模块101,用于获取用户账户的关联信息;第一确定模块103,连接至上述第一获取模块102,用于根据关联信息确定与用户账户关联的多媒体标签,多媒体标签用于标识多媒体内容;第二显示模块104,连接至上述第一确定模块103,用于在当前显示界面显示与多媒体标签对应的触控对象,触控对象用于接收触发操作,以触发展示多媒体标签标识的多媒体内容。
此处需要说明的是,上述第一显示模块101,第一获取模块102,第一确定模块103和第二显示模块104对应于实施例1中的步骤S21至步骤S24,上述模块与对应的步骤所实现的实例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在实施例1提供的计算机终端10中。
在一个或多个可选实施例中,第二显示模块104包括第一显示单元和第二显示单元,其中,该第一显示单元,用于在当前显示界面的第一图层显示视频区域和评论区域;第二显示单元,用于在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。
在一个或多个可选实施例中,第二图层包括控件功能子层和控件图像子层,其中,控件功能子层位于控件图像子层的上方,控件功能子层用于响应对触控对象的触发操作,控件图像子层用于显示多媒体内容的元素。
在一个或多个可选实施例中,该界面处理装置还包括:第一接收模块和第三显示模块,其中,该第一接收模块,连接至上述第二显示模块104,用于在当前显示界面显示与多媒体标签对应的触控对象之后,接收对触控对象的触发操作;该第三显示模块,连接至上述第一接收模块,用于响应触发操作,显示多媒体标签标识的多媒体内容。
在一个或多个可选实施例中,该第三显示模块包括:第三显示单元,用于在触控对象为预览小窗的情况下,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容。
在一个或多个可选实施例中,该第三显示模块还包括:第四显示单元或者第五显示单元,其中,该第四显示单元,用于响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容;第五显示单元,用于响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
在一个或多个可选实施例中,上述第一确定模块包括:第一确定单元和第二确定单元,其中,第一确定单元,用于在关联信息包括行为数据信息的情况下,根据行为数据信息确定用户账户的兴趣标签,其中,兴趣标签为多个分类标签中的一个或多个;第二确定单元,连接到上述第一确定单元,用于通过查找与兴趣标签匹配的多媒体标签,确定与用户账户关联的多媒体标签。
在一个或多个可选实施例中,上述第一确定单元包括:第一获取子单元,处理子单元和第二获取子单元,其中,该第一获取子单元,用于分别获取多个分类标签中每个分类标签的统计数量,其中,统计数量根据行为数据信息在每个分类标签下,对用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;处理子单元,连接至上述第一获取子单元,用于依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果;第二获取子单元,连接到上述处理子单元,用于依据排序结果,获取用户账户的兴趣标签。
在一个或多个可选实施例中,上述第一确定模块包括:处理单元,用于在关联信息包括属性信息的情况下,将属性信息输入兴趣点识别模型,输出与用户账户关联的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
本公开的实施例可以提供一种电子设备,该电子设备可以是一种终端,也可以是一种服务器。在本公开的实施例中,该电子设备作为一种终端可以是计算机终端群中的任意一个计算机终端设备。在本公开的实施例中,上述终端也可以为移动终端等终端设备。
在本公开的实施例中,上述终端可以位于计算机网络的多个网络设备中的至少一个网络设备。
在一些实施例中,图11是根据一示例性实施例示出的终端的结构框图。如图11所示,该终端可以包括:一个或多个(图中仅示出一个)处理器111、用于存储处理器可执行指令的存储器112;其中,处理器被配置为执行指令,以实现上述任一项的界面处理方法。
其中,存储器可用于存储软件程序以及模块,如本公开实施例中的界面处理方法和装置对应的程序指令/模块,处理器通过运行存储在存储器内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的界面处理方法。存储器可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器可进一步包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至计算机终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
处理器可以通过传输装置调用存储器存储的信息及应用程序,以执行下述步骤:显示与用户账户关联的当前显示界面,当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;获取用户账户的关联信息;根据关联信息确定与用户账户关联的多媒体标签,多媒体标签用于标识多媒体内容;在当前显示界面显示与多媒体标签对应的触控对象,触控对象用于接收触发操作,以触发展示多媒体标签标识的多媒体内容。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象,包括:在当前显示界面的第一图层显示视频区域和评论区域;在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:第二图层包括控件功能子层和控件图像子层,其中,控件功能子层位于控件图像子层的上方,控件功能子层用于响应对触控对象的触发操作,控件图像子层用于显示多媒体内容的元素。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象之后,还包括:接收对触控对象的触发操作;响应触发操作,显示多媒体标签标识的多媒体内容。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:响应触发操作,显示多媒体标签标识的多媒体内容,包括:在触控对象为预览小窗的情况下,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容;或者,响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括行为数据信息的情况下,根据行为数据信息确定用户账户的兴趣标签,其中,兴趣标签为多个分类标签中的一个或多个;通过查找与兴趣标签匹配的多媒体标签,确定与用户账户关联的多媒体标签。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:根据行为数据信息确定用户账户的兴趣标签,包括:分别获取多个分类标签中每个分类标签的统计数量,其中,统计数量根据行为数据信息在每个分类标签下,对用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果;依据排序结果,获取用户账户的兴趣标签。
在一些实施例中,上述处理器还可以执行如下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括属性信息的情况下,将属性信息输入兴趣点识别模型,输出与用户账户关联的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
在本公开的实施例中,该电子设备作为一种服务器,图12是根据一示例性实施例示出的服务器的结构框图。如图12所示,该服务器120可以包括:一个或多个(图中仅示出一个)处理组件121、用于存储处理组件121可执行指令的存储器122、提供电源的电源组件123,实现与外部网络通信的网络接口124和与外部进行数据传输的I/O输入输出接口125;其中,处理组件121被配置为执行指令,以实现上述任一项的界面处理方法。
其中,存储器可用于存储软件程序以及模块,如本公开实施例中的界面处理方法和装置对应的程序指令/模块,处理器通过运行存储在存储器内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的界面处理方法。存储器可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器可进一步包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至计算机终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
处理组件可以通过传输装置调用存储器存储的信息及应用程序,以执行下述步骤:显示与用户账户关联的当前显示界面,当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;获取用户账户的关联信息;根据关联信息确定与用户账户关联的多媒体标签,多媒体标签用于标识多媒体内容;在当前显示界面显示与多媒体标签对应的触控对象,触控对象用于接收触发操作,以触发展示多媒体标签标识的多媒体内容。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象,包括:在当前显示界面的第一图层显示视频区域和评论区域;在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:第二图层包括控件功能子层和控件图像子层,其中,控件功能子层位于控件图像子层的上方,控件功能子层用于响应对触控对象的触发操作,控件图像子层用于显示多媒体内容的元素。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象之后,还包括:接收对触控对象的触发操作;响应触发操作,显示多媒体标签标识的多媒体内容。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:响应触发操作,显示多媒体标签标识的多媒体内容,包括:在触控对象为预览小窗的情况下,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容;或者,响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括行为数据信息的情况下,根据行为数据信息确定用户账户的兴趣标签,其中,兴趣标签为多个分类标签中的一个或多个;通过查找与兴趣标签匹配的多媒体标签,确定与用户账户关联的多媒体标签。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:根据行为数据信息确定用户账户的兴趣标签,包括:分别获取多个分类标签中每个分类标签的统计数量,其中,统计数量根据行为数据信息在每个分类标签下,对用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果;依据排序结果,获取用户账户的兴趣标签。
在一些实施例中,上述处理组件还可以执行如下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括属性信息的情况下,将属性信息输入兴趣点识别模型,输出与用户账户关联的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
本领域普通技术人员可以理解,图11、图12所示的结构仅为示意,例如,上述终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图11、图12并不对上述电子装置的结构造成限定。例如,还可包括比图11、图12中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图11、图12所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。
在示例性实施例中,还提供了一种包括指令的计算机可读存储介质,当计算机可读存储介质中的指令由终端的处理器执行时,使得终端能够执行上述任一项的界面处理方法。在一些实施例中,计算机可读存储介质可以是非临时性计算机可读存储介质,例如,非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在本公开的实施例中,上述计算机可读存储介质可以用于保存上述实施例所提供的界面处理方法所执行的程序代码。
在本公开的实施例中,上述计算机可读存储介质可以位于计算机网络中计算机终端群中的任意一个计算机终端中,或者位于移动终端群中的任意一个移动终端中。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:显示与用户账户关联的当前显示界面,当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;获取用户账户的关联信息;根据关联信息确定与用户账户关联的多媒体标签,多媒体标签用于标识多媒体内容;在当前显示界面显示与多媒体标签对应的触控对象,触控对象用于接收触发操作,以触发展示多媒体标签标识的多媒体内容。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象,包括:在当前显示界面的第一图层显示视频区域和评论区域;在当前显示界面的第二图层显示与多媒体标签对应的触控对象,其中,第二图层位于第一图层的上方。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:第二图层包括控件功能子层和控件图像子层,其中,控件功能子层位于控件图像子层的上方,控件功能子层用于响应对触控对象的触发操作,控件图像子层用于显示多媒体内容的元素。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:在当前显示界面显示与多媒体标签对应的触控对象之后,还包括:接收对触控对象的触发操作;响应触发操作,显示多媒体标签标识的多媒体内容。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:响应触发操作,显示多媒体标签标识的多媒体内容,包括:在触控对象为预览小窗的情况下,响应对预览小窗的第一触发操作,在预览小窗播放多媒体标签标识的多媒体内容。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:响应对预览小窗的第二触发操作,基于第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在多媒体展示界面播放多媒体内容;或者,响应对预览小窗的第三触发操作,基于第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,多媒体浏览界面展示有多媒体列表,多媒体列表中包括多媒体内容。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括行为数据信息的情况下,根据行为数据信息确定用户账户的兴趣标签,其中,兴趣标签为多个分类标签中的一个或多个;通过查找与兴趣标签匹配的多媒体标签,确定与用户账户关联的多媒体标签。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:根据行为数据信息确定用户账户的兴趣标签,包括:分别获取多个分类标签中每个分类标签的统计数量,其中,统计数量根据行为数据信息在每个分类标签下,对用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;依据每个分类标签的统计数量,对多个分类标签进行排序,得到排序结果;依据排序结果,获取用户账户的兴趣标签。
在本公开的实施例中,计算机可读存储介质被设置为存储用于执行以下步骤的程序代码:根据关联信息确定与用户账户关联的多媒体标签,包括:在关联信息包括属性信息的情况下,将属性信息输入兴趣点识别模型,输出与用户账户关联的多媒体标签,其中,兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
在示例性实施例中,还提供了一种计算机程序产品,当计算机程序产品中的计算机程序由电子设备的处理器执行时,使得电子设备能够执行上述任一项的界面处理方法。
上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
在本公开的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的技术内容,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本公开各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (21)

  1. 一种界面处理方法,其特征在于,包括:
    显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
    获取所述用户账户的关联信息;
    根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;
    在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
  2. 根据权利要求1所述的方法,其特征在于,所述在所述当前显示界面显示与所述多媒体标签对应的触控对象,包括:
    在所述当前显示界面的第一图层显示所述视频区域和所述评论区域;
    在所述当前显示界面的第二图层显示与所述多媒体标签对应的触控对象,其中,所述第二图层位于所述第一图层的上方。
  3. 根据权利要求2所述的方法,其特征在于,所述第二图层包括控件功能子层和控件图像子层,其中,所述控件功能子层位于所述控件图像子层的上方,所述控件功能子层用于响应对所述触控对象的所述触发操作,所述控件图像子层用于显示所述多媒体内容的元素。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    接收对所述触控对象的触发操作;
    响应所述触发操作,显示所述多媒体标签标识的所述多媒体内容。
  5. 根据权利要求4所述的方法,其特征在于,所述响应所述触发操作,显示所述多媒体标签标识的所述多媒体内容,包括:
    在所述触控对象为预览小窗的情况下,响应对所述预览小窗的第一触发操作,在所述预览小窗播放所述多媒体标签标识的所述多媒体内容。
  6. 根据权利要求5所述的方法,其特征在于,还包括:
    响应对所述预览小窗的第二触发操作,基于所述第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在所述多媒体展示界面播放所述多媒体内容;或者,
    响应对所述预览小窗的第三触发操作,基于所述第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,所述多媒体浏览界面展示有多媒体列表,所述多媒体列表中包括所述多媒体内容。
  7. 根据权利要求1所述的方法,其特征在于,所述根据所述关联信息确定与所述用户账户关联的多媒体标签,包括:
    在所述关联信息包括行为数据信息的情况下,根据所述行为数据信息确定所述用户账户的兴趣标签,其中,所述兴趣标签为多个分类标签中的一个或多个;
    通过查找与所述兴趣标签匹配的多媒体标签,确定与所述用户账户关联的多媒体标签。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述行为数据信息确定所述用户账户的兴趣标签,包括:
    分别获取所述多个分类标签中每个分类标签的统计数量,其中,所述统计数量根据所述行为数据信息在每个分类标签下,对所述用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;
    依据每个分类标签的统计数量,对所述多个分类标签进行排序,得到排序结果;
    依据所述排序结果,获取所述用户账户的兴趣标签。
  9. 根据权利要求1所述的方法,其特征在于,所述根据所述关联信息确定与所述用户账户关联的多媒体标签,包括:
    在所述关联信息包括属性信息的情况下,将所述属性信息输入兴趣点识别模型,输出与所述用户账户关联的多媒体标签,其中,所述兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
  10. 一种界面处理装置,其特征在于,包括:
    第一显示模块,用于显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
    第一获取模块,用于获取所述用户账户的关联信息;
    第一确定模块,用于根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;
    第二显示模块,用于在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
  11. 根据权利要求10所述的装置,其特征在于,所述第二显示模块包括:
    第一显示单元,用于在所述当前显示界面的第一图层显示所述视频区域和所述评论区域;
    第二显示单元,用于在所述当前显示界面的第二图层显示与所述多媒体标签对应的触控对象,其中,所述第二图层位于所述第一图层的上方。
  12. 根据权利要求11所述的装置,其特征在于,所述第二图层包括控件功能子层和控件图像子层,其中,所述控件功能子层位于所述控件图像子层的上方,所述控件功能子层用于响应对所述触控对象的所述触发操作,所述控件图像子层用于显示所述多媒体内容的元素。
  13. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    第一接收模块,用于在所述当前显示界面显示与所述多媒体标签对应的触控对象之后,接收对所述触控对象的触发操作;
    第三显示模块,用于响应所述触发操作,显示所述多媒体标签标识的所述多媒体内容。
  14. 根据权利要求13所述的装置,其特征在于,所述第三显示模块包括:
    第三显示单元,用于在所述触控对象为预览小窗的情况下,响应对所述预览小窗的第一触发操作,在所述预览小窗播放所述多媒体标签标识的所述多媒体内容。
  15. 根据权利要求14所述的装置,其特征在于,所述第三显示模块还包括:
    第四显示单元,用于响应对所述预览小窗的第二触发操作,基于所述第二触发操作对应的链接地址,跳转进入多媒体展示界面,以及在所述多媒体展示界面播放所述多媒体内容;或者,
    第五显示单元,用于响应对所述预览小窗的第三触发操作,基于所述第三触发操作对应的界面切换指令,切换到多媒体浏览界面,其中,所述多媒体浏览界面展示有多媒体列表,所述多媒体列表中包括所述多媒体内容。
  16. 根据权利要求10所述的装置,其特征在于,所述第一确定模块包括:
    第一确定单元,用于在所述关联信息包括行为数据信息的情况下,根据所述行为数据信息确定所述用户账户的兴趣标签,其中,所述兴趣标签为多个分类标签中的一个或多个;
    第二确定单元,用于通过查找与所述兴趣标签匹配的多媒体标签,确定与所述用户账户关联的多媒体标签。
  17. 根据权利要求16所述的装置,其特征在于,所述第一确定单元包括:
    第一获取子单元,用于分别获取所述多个分类标签中每个分类标签的统计数量,其中,所述统计数量根据所述行为数据信息在每个分类标签下,对所述用户账户发生的各个操作行为进行统计得到,各个行为分别有对应的权重;
    处理子单元,用于依据每个分类标签的统计数量,对所述多个分类标签进行排序,得到排序结果;
    第二获取子单元,用于依据所述排序结果,获取所述用户账户的兴趣标签。
  18. 根据权利要求10所述的装置,其特征在于,所述第一确定模块包括:
    处理单元,用于在所述关联信息包括属性信息的情况下,将所述属性信息输入兴趣点识别模型,输出与所述用户账户关联的多媒体标签,其中,所述兴趣点识别模型通过多组数据进行机器训练得到,多组数据中包括:用户账户的属性信息和与该用户账户关联的多媒体标签。
  19. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储所述处理器可执行指令的存储器;
    其中,所述处理器被配置为执行所述指令,以实现以下处理:
    显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
    获取所述用户账户的关联信息;
    根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;
    在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
  20. 一种非易失性计算机可读存储介质,其特征在于,当所述计算机可读存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行以下处理:
    显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
    获取所述用户账户的关联信息;
    根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;
    在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
  21. 一种计算机程序产品,包括计算机程序,其特征在于,所述计算机程序被处理器执行时实现以下处理:
    显示与用户账户关联的当前显示界面,所述当前显示界面包括:用于播放视频的视频区域和用于显示对播放的视频进行评论的评论区域;
    获取所述用户账户的关联信息;
    根据所述关联信息确定与所述用户账户关联的多媒体标签,所述多媒体标签用于标识多媒体内容;
    在所述当前显示界面显示与所述多媒体标签对应的触控对象,所述触控对象用于接收触发操作,以触发展示所述多媒体标签标识的所述多媒体内容。
PCT/CN2021/136577 2021-05-28 2021-12-08 界面处理方法及装置 WO2022247220A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110593725.6A CN113254135A (zh) 2021-05-28 2021-05-28 界面处理方法、装置及电子设备
CN202110593725.6 2021-05-28
