CN114268801A - Media information processing method, media information presenting method and device - Google Patents

Media information processing method, media information presenting method and device

Info

Publication number
CN114268801A
CN114268801A (application CN202111568529.XA)
Authority
CN
China
Prior art keywords
media information
information
media
category
preset
Prior art date
Legal status
Pending
Application number
CN202111568529.XA
Other languages
Chinese (zh)
Inventor
刘朋
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority claimed from application CN202111568529.XA
Publication of CN114268801A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a media information processing method, a media information presentation method, and corresponding devices. The media information processing method includes: acquiring first media information; performing information conversion processing on the first media information to obtain at least one item of second media information; and, in response to a browsing instruction, selecting at least one item of third media information from the at least one item of second media information and sending it. The media information presentation method includes: monitoring a preset operation behavior on the first media information; generating and sending a browsing instruction in response to the preset operation behavior; and receiving and presenting at least one item of third media information. Alternatively, a browsing instruction is generated and sent in response to an interface operation behavior, and at least one item of third media information of the same category is received and presented. The method and device enrich the content ecology of an application and improve the user experience, solving the problem that media information in multiple media forms cannot be selectively obtained on demand.

Description

Media information processing method, media information presenting method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a media information processing method, a media information presenting method, and a media information presenting device.
Background
With the rapid development of network technology and mobile services, media information services have become widely used. People share news, experience, and insights through media information services such as short-video services, which helps satisfy people's growing cultural and, in particular, intellectual needs.
However, the media formats provided by existing media information service platforms remain limited, and it is difficult to obtain media information services in multiple formats from a single platform. In particular, in environments where information cannot be consumed in the form of video, it is equally difficult to obtain the same content in the form of audio, text, or pictures. How to expand media information into multiple media forms for selective acquisition has therefore become an urgent problem.
Disclosure of Invention
The present disclosure provides a media information processing method, a media information presenting method and a media information presenting device, so as to at least solve the problem in the related art that media information in multiple media formats cannot be selectively acquired according to requirements.
According to an aspect of the embodiments of the present disclosure, there is provided a media information processing method, including:
acquiring first media information;
performing information conversion processing on the first media information to obtain at least one type of second media information;
and responding to a preset browsing instruction triggered based on the first media information, selecting at least one third media information from at least one second media information, and sending the at least one third media information.
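As an illustrative sketch only (the patent text discloses no implementation), the three claimed steps could be modeled as follows; the class names, the string payload, and the category labels are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MediaInfo:
    info_id: str   # information identifier shared across derived items
    category: str  # media category, e.g. "video", "audio", "text"
    payload: str   # stand-in for the actual media data

class MediaProcessor:
    def __init__(self):
        self._converted = {}  # info_id -> list of second media items

    def acquire(self, info_id, payload, category="video"):
        # Step 1: acquire the first media information.
        return MediaInfo(info_id, category, payload)

    def convert(self, first, target_categories):
        # Step 2: information conversion into at least one item of second
        # media information, one per target category (skipping the first
        # item's own category).
        items = [MediaInfo(first.info_id, c, f"{c} from {first.payload}")
                 for c in target_categories if c != first.category]
        self._converted[first.info_id] = items
        return items

    def select_and_send(self, info_id, wanted_category):
        # Step 3: in response to a browsing instruction, select the third
        # media information whose category matches and return ("send") it.
        return [m for m in self._converted.get(info_id, [])
                if m.category == wanted_category]
```

For example, converting a video item toward audio and text yields two second media items, and a browsing instruction for "audio" selects exactly the audio one.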
In one possible embodiment, the obtaining the first media information includes:
in response to a fetch instruction, fetching the first media information, wherein the first media information is associated with the fetch instruction.
In a possible embodiment, the analyzing the first media information to obtain at least one second media information includes:
analyzing the first media information in response to an analysis instruction to obtain at least one second media information;
wherein the analysis instruction is associated with the first media information;
the media category of the second media information is associated with the analysis instruction.
In one possible implementation, the performing information conversion processing on the first media information to obtain at least one second media information includes:
responding to an information conversion processing instruction, and performing information conversion processing on the first media information to obtain at least one type of second media information;
wherein the information conversion processing instruction is associated with the first media information;
the media category of the second media information is associated with the information conversion processing instruction.
In one possible embodiment, the performing, in response to the information conversion processing instruction, information conversion processing on the first media information to obtain at least one type of second media information includes:
according to the information identification of the first media information and the second media category specified in the information conversion processing instruction, performing information conversion processing related to the second media category on the first media information associated with the information identification of the first media information to obtain second media information corresponding to the second media category;
wherein the second media category is different from the first media category to which the first media information belongs.
In one possible implementation, the performing, on the first media information associated with the information identifier of the first media information, information conversion processing about the second media category to obtain second media information corresponding to the second media category includes:
and extracting media information associated with the second media category from the first media information, and determining the extracted media information as second media information corresponding to the second media category.
In one possible implementation manner, the performing, on the first media information associated with the information identifier of the first media information, information conversion processing about the second media category to obtain second media information corresponding to the second media category further includes:
performing feature analysis on the extracted media information to obtain media information meeting feature requirements as second media information corresponding to the second media category;
wherein the feature requirements are associated with the information conversion processing instructions.
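A minimal sketch of this extract-then-filter step, using hypothetical segment records and a caller-supplied feature requirement (the field names are assumptions, not part of the disclosure):

```python
def convert_by_category(segments, target_category, feature_ok=lambda s: True):
    """Extract candidate segments of the target media category, then keep
    only those satisfying the feature requirement carried by the
    information conversion processing instruction."""
    candidates = [s for s in segments if s["category"] == target_category]
    return [s for s in candidates if feature_ok(s)]
```

For instance, an instruction targeting the image category with a sharpness requirement could pass `feature_ok=lambda s: s["sharpness"] >= 0.8`, discarding blurry frames.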
In one possible embodiment, the selecting, in response to a preset browsing instruction triggered based on the first media information, at least one third media information from at least one second media information includes:
selecting second media information with a media category of a third media category from at least one second media information as the third media information according to an information identifier of the first media information and the third media category specified in the preset browsing instruction;
wherein the second media information is associated with an information identifier of the first media information.
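The selection step amounts to a lookup keyed by the information identifier of the first media information plus the third media category named in the browsing instruction; a hypothetical index:

```python
from collections import defaultdict

class SecondMediaIndex:
    def __init__(self):
        self._index = defaultdict(list)

    def register(self, info_id, category, item):
        # Associate each second media item with the first item's identifier.
        self._index[(info_id, category)].append(item)

    def select_third(self, info_id, third_category):
        # Resolve a preset browsing instruction to the matching items.
        return list(self._index[(info_id, third_category)])
```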
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible implementation, the image category includes: at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
In one possible embodiment, the method further comprises:
and analyzing the live media information to obtain at least one piece of first media information from the live media information.
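One plausible reading (the text does not specify the mechanism) is that the live stream is segmented at detected boundaries, with each segment becoming a piece of first media information; the boundary marker below is purely illustrative:

```python
def split_live_stream(events, is_boundary=lambda e: e == "boundary"):
    """Group a stream of live events into first-media segments, cutting
    at boundary markers and dropping empty segments."""
    segments, current = [], []
    for e in events:
        if is_boundary(e):
            if current:
                segments.append(current)
            current = []
        else:
            current.append(e)
    if current:
        segments.append(current)
    return segments
```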
According to another aspect of the embodiments of the present disclosure, there is provided a media information presentation method, including:
monitoring a preset operation behavior of the first media information presented currently;
responding to the preset operation behavior, and generating and sending a preset browsing instruction;
receiving and presenting at least one third media information;
the at least one third media information is selected from the at least one second media information in response to the preset browsing instruction; the at least one second media information is obtained by performing information conversion processing on the first media information.
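On the terminal side, the flow above reduces to mapping a preset operation behavior on the currently presented item to a browsing request; the gesture-to-category mapping below is purely illustrative and not taken from the disclosure:

```python
class PresentationClient:
    # Hypothetical mapping of preset operation behaviors to media categories.
    GESTURE_TO_CATEGORY = {"swipe_left": "audio", "long_press": "text"}

    def __init__(self, send_browse_instruction):
        # send_browse_instruction: callable (info_id, category) -> third media
        self._send = send_browse_instruction

    def on_operation(self, current_info_id, gesture):
        category = self.GESTURE_TO_CATEGORY.get(gesture)
        if category is None:
            return None  # not a preset operation behavior; ignore
        # Generate and send the preset browsing instruction, then receive
        # the third media information for presentation.
        return self._send(current_info_id, category)
```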
In one possible implementation, the third media information is associated with the first media information.
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible embodiment, the image category includes at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
According to another aspect of the embodiments of the present disclosure, there is provided a media information presentation method, including:
monitoring a preset interface operation behavior;
generating and sending a preset browsing instruction in response to the preset interface operation behavior;
receiving and presenting at least one piece of third media information in the same category;
each piece of third media information is selected from at least one piece of second media information;
the at least one piece of second media information is obtained by performing information conversion processing on the at least one piece of first media information.
In one possible embodiment, the receiving and presenting at least one piece of third media information in the same category includes:
receiving and presenting summary information of at least one piece of third media information in the same category;
and presenting third media information associated with any piece of summary information in response to the preset interface operation behavior of the any piece of summary information.
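The two-stage presentation (summaries first, then the full item on interaction) might look like the sketch below; the summary length and field names are assumptions made for illustration:

```python
def present_summaries(items, summary_len=20):
    """First stage: show an abbreviated summary of each same-category item."""
    return [{"info_id": m["info_id"], "summary": m["payload"][:summary_len]}
            for m in items]

def on_summary_selected(items, info_id):
    """Second stage: present the full third media item behind a summary,
    or None when no item carries that identifier."""
    return next((m for m in items if m["info_id"] == info_id), None)
```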
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible embodiment, the image category includes at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
According to another aspect of the embodiments of the present disclosure, there is provided a media information processing apparatus including:
an acquisition module configured to perform acquisition of first media information;
the information conversion module is configured to perform information conversion processing on the first media information to obtain at least one type of second media information, and the second media information is associated with the first media information;
and the selection and sending module is configured to, in response to a preset browsing instruction triggered based on the first media information, select at least one third media information from the at least one second media information and send the at least one third media information.
In one possible embodiment, the obtaining module is configured to perform:
in response to a fetch instruction, fetching the first media information, wherein the first media information is associated with the fetch instruction.
In one possible embodiment, the information conversion module is configured to perform:
responding to an information conversion processing instruction, and performing information conversion processing on the first media information to obtain at least one type of second media information;
wherein the information conversion processing instruction is associated with the first media information;
the media category of the second media information is associated with the information conversion processing instruction.
In one possible embodiment, the information conversion module is further configured to perform:
according to the information identification of the first media information and the second media category specified in the information conversion processing instruction, performing information conversion processing related to the second media category on the first media information associated with the information identification of the first media information to obtain second media information corresponding to the second media category;
wherein the second media category is different from the first media category to which the first media information belongs.
In one possible embodiment, the information conversion module is further configured to perform:
and extracting media information associated with the second media category from the first media information, and determining the extracted media information as second media information corresponding to the second media category.
In one possible embodiment, the information conversion module is further configured to perform:
performing feature analysis on the extracted media information to obtain media information meeting feature requirements as second media information corresponding to the second media category;
wherein the feature requirements are associated with the information conversion processing instructions.
In one possible embodiment, the selecting and sending module is configured to perform:
selecting second media information with a media category of a third media category from at least one second media information as the third media information according to an information identifier of the first media information and the third media category specified in the preset browsing instruction;
wherein the second media information is associated with an information identifier of the first media information.
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible implementation, the image category includes: at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
In one possible embodiment, the method further comprises:
and the live broadcast analysis module is configured to analyze the live broadcast media information so as to obtain at least one piece of first media information from the live broadcast media information.
According to another aspect of the embodiments of the present disclosure, there is provided a media information presentation apparatus including:
the monitoring module is configured to execute monitoring of preset operation behaviors of the first media information presented currently;
the instruction sending module is configured to execute responding to the preset operation behavior, and generate and send a preset browsing instruction;
a receiving and presenting module configured to receive and present at least one third media information;
the at least one third media information is selected from the at least one second media information in response to the preset browsing instruction;
the at least one second media information is obtained by performing information conversion processing on the first media information.
In one possible implementation, the third media information is associated with the first media information.
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible implementation, the image category includes: at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
According to another aspect of the embodiments of the present disclosure, there is provided a media information presentation apparatus including:
the monitoring module is configured to execute monitoring of preset interface operation behaviors;
the instruction sending module is configured to generate and send a preset browsing instruction in response to the preset interface operation behavior;
the receiving and presenting module is configured to receive and present at least one piece of third media information in the same category;
each piece of third media information is selected from at least one piece of second media information;
the at least one piece of second media information is obtained by performing information conversion processing on the at least one piece of first media information.
In one possible implementation, the receiving presentation module includes:
the first receiving and presenting sub-module is configured to receive and present summary information of at least one piece of third media information in the same category;
and the second receiving and presenting sub-module is configured to present, in response to the preset interface operation behavior on any piece of summary information, the third media information associated with that piece of summary information.
In one possible implementation, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of the image category and the text category.
In one possible embodiment, the image category includes at least one of a picture-set sub-category, an animated-picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category;
the second media category to which the second media information belongs includes an audio category.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the media information processing method of any of the above embodiments and/or the media information presenting method of any of the above embodiments.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein at least one instruction of the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to implement the media information processing method according to any one of the above-mentioned embodiments and/or the media information presenting method according to any one of the above-mentioned embodiments.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the media information processing method according to any of the above embodiments and/or the media information presenting method according to any of the above embodiments.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of analyzing first media information to obtain at least one type of second media information associated with the first media information, sending corresponding second media information for viewing when needed, obtaining other types of media information associated with the presented first media information for viewing based on preset operation behaviors, obtaining various types of media information for viewing based on the preset operation behaviors, and switching between the displayed media types based on the preset operation behaviors, so that the ecological richness and experience of the content of an application program are improved, and the problem that the media information in various media forms cannot be selectively obtained according to requirements is solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an implementation environment of a media information processing method and a media information presentation method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of media information processing in accordance with one illustrative embodiment;
FIG. 3 is an interaction flow diagram illustrating a method of media information processing in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating analyzing first media information to obtain second media information according to an embodiment of the disclosure;
FIG. 5 is a flowchart illustrating an application scenario of a method of media information processing according to an exemplary embodiment;
FIG. 6 is a flow diagram illustrating a method of media information presentation in accordance with an illustrative embodiment;
FIG. 7 is an interaction flow diagram illustrating a method of media information presentation, according to an exemplary embodiment;
FIG. 8 is a diagram illustrating operations performed on a terminal in accordance with one illustrative embodiment;
FIG. 9 is a flowchart illustrating an application scenario of a method of media information presentation, according to an exemplary embodiment;
FIG. 10 is a flow chart illustrating another method of media information presentation in accordance with an illustrative embodiment;
FIG. 11 is an interaction flow diagram illustrating another method of media information presentation in accordance with an exemplary embodiment;
FIG. 12 is an application scenario flow diagram illustrating another method of media information presentation in accordance with an exemplary embodiment;
FIG. 13 is a flowchart illustrating an application scenario in which a terminal performs a media information presentation method according to an embodiment of the disclosure;
FIG. 14 is a block diagram illustrating a logical configuration of a media information acquisition device in accordance with an illustrative embodiment;
FIG. 15 is a block diagram illustrating a logical configuration of a media information presentation device in accordance with an illustrative embodiment;
fig. 16 shows a block diagram of a terminal according to an exemplary embodiment of the present disclosure;
fig. 17 is a schematic structural diagram of a server according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
User experience is largely determined by the richness of Internet content. To improve experience, the prior art mostly enriches content within a single media form so as to improve content diversity; for example, a short-video service improves diversity by enriching its short-video content. However, presentation based on a single media format alone cannot satisfy diverse requirements, and when, for various reasons or purposes, content available as video needs to be obtained through other media formats (such as audio, pictures, or text), the prior art cannot provide the corresponding function.
In view of this, the present disclosure provides a media information processing method, which is capable of analyzing first media information to obtain second media information having at least one media category different from the first media information, and sending the corresponding second media information for viewing when needed. For example, the short video can be analyzed to obtain other categories of media information associated with the short video, such as categories of audio, pictures, animations, emoticons, text, graphics, and so forth, for viewing. The problem that the media information in various media forms cannot be selectively acquired according to the requirements is solved.
Meanwhile, the embodiment of the disclosure also provides a media information presentation method, which can acquire media information of other categories associated with the presented first media information for viewing based on the preset operation behavior, and switch between the displayed media categories based on the preset operation behavior, so as to solve the problem that the media information in multiple media forms cannot be selectively acquired according to requirements.
Meanwhile, the embodiment of the disclosure also provides a media information presentation method, which can acquire various types of media information for viewing based on the preset operation behavior, and switch between the displayed media types based on the preset operation behavior so as to solve the problem that the media information in various media forms cannot be selectively acquired according to requirements.
Fig. 1 is a schematic diagram of an implementation environment of a media information processing method and a media information presenting method according to an exemplary embodiment, and referring to fig. 1, at least one terminal 101 and a server 102 may be included in the implementation environment, which is described in detail below.
The at least one terminal 101 is used for browsing multimedia resources, each of the at least one terminal 101 may have an application installed thereon, the application may be any client capable of providing a multimedia resource browsing service, the application may be started to browse the multimedia resources, the application may be at least one of a short video application, an audio and video application, a shopping application, a take-out application, a travel application, a game application or a social application, and the multimedia resources may include at least one of a video resource, an audio resource, a picture resource, a text resource or a web page resource.
At least one terminal 101 may be directly or indirectly connected with the server 102 through wired or wireless communication, which is not limited in the embodiment of the present disclosure.
The server 102 is a computer device for providing a multimedia resource search service to the at least one terminal 101. The server 102 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. Alternatively, the server 102 may undertake primary computational work and the at least one terminal 101 may undertake secondary computational work; alternatively, the server 102 may undertake secondary computing work and the at least one terminal 101 may undertake primary computing work; alternatively, the server 102 and the at least one terminal 101 perform cooperative computing by using a distributed computing architecture.
It should be noted that the device type of any one of the at least one terminal 101 may include at least one of: a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, or a desktop computer. For example, any of the terminals may be a smartphone or another hand-held portable electronic device. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of terminals may be only one, or may be several tens, hundreds, or more. The number of terminals and the device types are not limited in the embodiments of the present disclosure.
Fig. 2 is a flowchart illustrating a media information processing method according to an exemplary embodiment. Referring to Fig. 2, the media information processing method is applied to a computer device; the computer device being a server is taken as an example for explanation.
In step 201, first media information is acquired.
In step 202, the first media information is subjected to an information conversion process to obtain at least one second media information.
In step 203, in response to a preset browsing instruction triggered based on the first media information, at least one third media information is selected from the at least one second media information, and the at least one third media information is sent.
Wherein the second media information is associated with the first media information.
According to the media information processing method provided by the embodiment of the disclosure, the first media information is analyzed to obtain at least one piece of second media information associated with the first media information, and the corresponding second media information is sent for viewing when needed, thereby solving the problem that media information in various media forms cannot be selectively acquired on demand.
In some examples, the first media information may be automatically acquired and subjected to information conversion processing, for example, the first media information may be directly acquired and subjected to information conversion processing in a case where the server receives the first media information. In some other examples, the obtaining of the first media information in step 201 may be performed as needed, that is, obtaining the specified first media information and performing the information conversion process, and includes:
in response to the acquisition instruction, first media information is acquired, wherein the first media information is associated with the acquisition instruction.
In some examples, the first media information needs to be converted based on a conversion requirement for the media information; for example, the first media information to be converted and the category of the new media information to be obtained need to be specified. In this case, step 202 further comprises:
responding to the information conversion processing instruction, and performing information conversion processing on the first media information to obtain at least one type of second media information;
wherein the information conversion processing instruction is associated with the first media information;
the media category of the second media information is associated with the information conversion processing instruction.
In this way, the first media information can be converted according to the specified first media information and the specified target media category.
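As an illustrative sketch only (not part of the disclosed method; all names, categories, and data shapes are hypothetical), the association between a conversion instruction, the first media information it identifies, and the target media category can be modeled as a simple dispatch:

```python
# Hypothetical sketch: an information conversion processing instruction
# carries the first media information ID and the target (second) media
# category, and is dispatched to a category-specific converter.

MEDIA_STORE = {
    "video-001": {"category": "video", "payload": "raw video bytes"},
}

def convert_to_audio(first_media):
    # Stand-in for real audio extraction (e.g., demuxing the audio track).
    return {"category": "audio", "derived_from": first_media["payload"]}

def convert_to_text(first_media):
    # Stand-in for real speech recognition / caption generation.
    return {"category": "text", "derived_from": first_media["payload"]}

CONVERTERS = {"audio": convert_to_audio, "text": convert_to_text}

def handle_instruction(instruction):
    # Look up the first media information by its ID, then apply the
    # converter for the requested second media category.
    first_media = MEDIA_STORE[instruction["media_id"]]
    return CONVERTERS[instruction["target_category"]](first_media)

second = handle_instruction({"media_id": "video-001",
                             "target_category": "audio"})
```

The dispatch table is the only structural point being illustrated; real converters would invoke media-processing pipelines.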
In some examples, the first media information to be converted may be marked by an information identifier of the first media information, and the server may determine the first media information to be converted according to the information identifier of the first media information carried in the information conversion processing instruction. In this case, the performing, in response to the information conversion processing instruction, the information conversion processing on the first media information to obtain at least one type of second media information includes:
according to the information identifier of the first media information and the second media category specified in the information conversion processing instruction, performing information conversion processing related to the second media category on the first media information associated with the information identifier, to obtain second media information corresponding to the second media category;
the second media category is different from the first media category to which the first media information belongs.
In some examples, the information identifier may be an ID (Identity Document).
In some examples, the second media information may be part of the information in the first media information; for example, audio information may be the audio portion of video information, and when the corresponding audio information needs to be obtained from the video information, the associated audio information may be extracted from the video information. In this case, performing the information conversion processing related to the second media category on the first media information associated with its information identifier, to obtain the second media information corresponding to the second media category, includes:
and extracting media information associated with the second media type from the first media information, and determining the extracted media information as second media information corresponding to the second media type.
In this way, the second media information can be extracted directly from the first media information.
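A minimal sketch of this direct-extraction case, under the assumption (hypothetical, for illustration only) that the first media information is modeled as a container of typed tracks:

```python
# Illustrative only: the first media information is modeled as a container
# of tracks; extraction keeps the parts matching the requested second
# media category and tags them with the source ID so the association
# with the first media information is preserved.

first_media = {
    "id": "video-001",
    "category": "video",
    "tracks": [
        {"category": "audio", "data": "aac stream"},
        {"category": "image", "data": "cover frame"},
        {"category": "video", "data": "h264 stream"},
    ],
}

def extract_second_media(first, second_category):
    return [
        {"category": t["category"], "data": t["data"], "source_id": first["id"]}
        for t in first["tracks"]
        if t["category"] == second_category
    ]

audio_items = extract_second_media(first_media, "audio")
```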
In some examples, the second media information may not be part of the information in the first media information. For example, text information may not be part of the content of video information (the video information may not include text information); when corresponding text information needs to be obtained from the video information, the video information needs to be analyzed (e.g., by analyzing the voice information, video image information, etc. in the video) to obtain the relevant text information. As another example, the media information to be extracted may be a certain piece of content in the video information that has a feature associated with certain information (e.g., keyword information, key frame image information, etc. appearing in the video). In this case, performing the information conversion processing related to the second media category on the first media information associated with its information identifier, to obtain the second media information corresponding to the second media category, further includes:
performing feature analysis on the extracted media information to obtain media information meeting a feature requirement as the second media information corresponding to the second media category;
wherein the feature requirements are associated with the information conversion processing instructions.
In this way, second media information meeting the feature requirement can be acquired from the first media information.
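The feature-requirement filter can be sketched as follows. This is a deliberately simplified stand-in: the segment layout, the keyword-matching rule, and all names are assumptions, whereas a real system would use richer features (key frames, embeddings, etc.) as the text above notes:

```python
# Hypothetical feature analysis: keep only extracted segments whose
# transcript contains a keyword from the feature requirement carried
# by the information conversion processing instruction.

segments = [
    {"start": 0,  "end": 10, "transcript": "welcome to the stream"},
    {"start": 10, "end": 20, "transcript": "this dish needs two eggs"},
    {"start": 20, "end": 30, "transcript": "stir for five minutes"},
]

def filter_by_feature(segments, keywords):
    # A segment "meets the feature requirement" here if any keyword
    # appears in its transcript.
    return [s for s in segments
            if any(k in s["transcript"] for k in keywords)]

matched = filter_by_feature(segments, ["eggs", "stir"])
```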
In some examples, the server may provide the converted second media information according to a requirement of the client. For example, when the client requests audio information, image information, text information, or image-text information corresponding to video information, the server needs to provide the audio information, image information, text information, or image-text information associated with that video information to the client for presentation; when the client sends the requirement to the server, it needs to specify the first media information with which the requested media information is associated. In this case, the selecting, in step 203, at least one third media information from the at least one second media information in response to the preset browsing instruction triggered based on the first media information includes:
selecting second media information with a media category being a third media category from at least one second media information as third media information according to the information identifier of the first media information and the third media category specified in the preset browsing instruction;
wherein the second media information is associated with the information identification of the first media information.
In this way, the preset browsing instruction carries the information identifier of the first media information and the media category of the media information to be acquired (i.e., the third media category). After receiving the preset browsing instruction, the server can select, from the second media information associated with that information identifier, the media information (i.e., the third media information) meeting both the information identifier requirement and the media category requirement. This ensures the association between the selected third media information and the preset browsing instruction, so that the client can acquire accurate media information.
In some examples, it is desirable to obtain relevant audio information, image information, text information, and image-text information from video information. In this case, the first media category to which the first media information belongs includes a video category, and the second media category to which the second media information belongs includes at least one of an audio category, an image category, a text category, and an image-text category, wherein the image-text category is composed of the image category and the text category.
In some examples, the image may include a static picture, a dynamic picture (animated image), or an emoticon. In this case, the image category includes at least one of a picture set sub-category, a dynamic picture sub-category, and an emoticon sub-category. Correspondingly, the image information may include at least one of a picture set, a dynamic picture, and an emoticon. In some examples, the picture set may be a set of at least one static picture.
In some examples, there is a need to obtain corresponding speech information based on text information. To meet this need, in some examples, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
Based on the above example, the media information processing method of the embodiment of the present disclosure realizes the conversion processing from text information to audio information, and meets the requirements for various presentation forms of media information.
In some examples, in live media broadcasting such as live video broadcasting, there is a need to present specified content in the live media information after the broadcast, and the media category of that later presentation may need to differ from the media category of the live media information. For example, the introduction of a commodity given during a live e-commerce session may need to be converted into image-text information and displayed as image-text information on a commodity page. The specified content therefore needs to be extracted from the live media information and subjected to media information conversion processing. In this case, the media information processing method of the embodiment of the present disclosure further includes:
and analyzing the live media information to obtain at least one piece of first media information from the live media information.
In this way, the media information of the specified content contained in the live media information is extracted as the first media information; after the processing of steps 201, 202, and 203, third media information of other media categories related to the live media information can be obtained, thereby meeting the need for presenting the live media information in multiple media categories.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 3 is an interaction flowchart illustrating a media information processing method according to an exemplary embodiment, where the media information processing method is used in an interaction process between a terminal and a server, the server is a computer device, and the terminal may be a smart phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, or a desktop computer, and the embodiment includes the following steps.
In step 301, the terminal creates first media information.
In some embodiments, an application program supporting media information (such as short videos and the like) is installed and run on the terminal, the application program can be started on the terminal and logged in, first media information can be made in the application program, and in some embodiments, the first media information is short videos.
In step 302, in response to the uploading operation, the terminal uploads the first media information to the server.
In some embodiments, the upload operation may be implemented based on an interface element in the presentation interface, for example, by clicking on an interface element (e.g., icon, text, etc.) associated with the upload option in the presentation interface. In some embodiments, the upload operation may be implemented based on a gesture motion monitored by the terminal, for example, the application binds the upload operation with a spatial gesture, and the terminal performs the upload operation associated with the spatial gesture by monitoring the spatial gesture.
In step 303, the server obtains first media information.
In some embodiments, the server obtains the first media information in response to an obtaining instruction. In some embodiments, the obtaining instruction is issued by the server itself. In some embodiments, the obtaining instruction is issued by a terminal; the terminal issuing the obtaining instruction may or may not be the terminal that uploaded the first media information. In some embodiments, the terminal issuing the obtaining instruction is a terminal operating in conjunction with the server. In some embodiments, the obtaining instruction is input via a peripheral device (e.g., keyboard, touch screen, etc.) connected to the server. In some embodiments, the obtaining instruction is generated by an associated script program in the server; for example, the obtaining instruction is generated when the server receives a message that the terminal has uploaded the first media information. The obtaining instruction may be an instruction to obtain the first media information from the terminal or an instruction to obtain the first media information from the server. Obtaining the first media information is a preamble step to the subsequent analysis of the first media information.
The first media information is associated with the obtaining instruction; that is, the obtaining instruction includes an information identifier related to the first media information, such as a first media information ID.
In step 304, the server performs information conversion processing on the first media information in response to the information conversion processing instruction to obtain at least one type of second media information.
In some embodiments, the information conversion processing instruction includes the first media information ID of the first media information to be analyzed and the second media category to which the second media information to be obtained belongs. The server can determine the first media information to be analyzed according to the first media information ID, and can determine the second media category of the second media information to be obtained according to the specified second media category, so that the server performs targeted information conversion processing according to the first media information and the second media category to obtain the second media information. The second media information is obtained from the first media information through the information conversion processing, and is therefore associated with the first media information.
In some embodiments, the first media category to which the first media information belongs includes a video category, and the first media information is video information.
In some embodiments, the second media category to which the second media information belongs includes at least one of an audio category, an image category, a text category, and an image-text category, wherein the image-text category is composed of the image category and the text category, and image-text information is composed of image information and text information. The second media information may be audio information, image information, text information, or image-text information. In some embodiments, the image category includes at least one of a picture set sub-category, a dynamic picture sub-category, and an emoticon sub-category; correspondingly, the image information includes at least one of a picture set, a dynamic picture, and an emoticon.
In step 305, the server selects at least one third media information from the at least one second media information in response to the browsing instruction, and sends the at least one third media information.
In some embodiments, the browsing instruction is issued by the terminal, and the server transmits the third media information specified by the browsing instruction to the terminal according to the content of the browsing instruction. The third media information is media information selected from at least one second media information.
In some embodiments, the terminal may encapsulate the browsing instruction using a data transmission protocol to obtain a browsing instruction data packet, where the data transmission protocol may be the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), the Internet Protocol (IP), and the like, which is not particularly limited in this disclosure.
In some embodiments, before sending the browsing instruction data packet, the terminal may further compress and encrypt the browsing instruction data packet; the embodiments of the present disclosure do not specifically limit the compression algorithm used for compression or the encryption algorithm used for encryption.
In some embodiments, the server may receive the browsing instruction data packet sent by the terminal; if the browsing instruction data packet has been compressed and encrypted, it may be decrypted and decompressed, where the decryption algorithm matches the encryption algorithm used for encryption, and the decompression algorithm matches the compression algorithm used for compression. After this preprocessing, the browsing instruction data packet is parsed to obtain the browsing instruction.
In some embodiments, the server may encapsulate the third media information by using a data transmission protocol to obtain a third media information data packet, where the data transmission protocol may be TCP, UDP, IP, and the like, and this is not specifically limited in this embodiment of the disclosure.
In some embodiments, before sending the third media information data packet, the server may further compress and encrypt the third media information data packet, and the embodiments of the present disclosure do not specifically limit a compression algorithm used for compression and an encryption algorithm used for encryption.
In some embodiments, the terminal may receive the third media information data packet sent by the server; if the third media information data packet has been compressed and encrypted, the terminal may decrypt and decompress it, where the decryption algorithm matches the encryption algorithm used for encryption, and the decompression algorithm matches the compression algorithm used for compression. After this preprocessing, the third media information data packet is parsed to obtain the third media information.
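The encapsulate/compress/parse round trip described in the steps above can be sketched as follows. Encryption is deliberately omitted here (a real deployment would wrap the compressed bytes with a vetted cipher such as AES-GCM, with the decryption key matching the encryption key); the payload fields are hypothetical:

```python
import json
import zlib

# Illustrative encapsulation of a browsing instruction into a compressed
# data packet on the sender side, and its recovery on the receiver side.

def encapsulate(instruction: dict) -> bytes:
    # Serialize, then compress; the receiver reverses both steps, so the
    # decompression algorithm necessarily matches the compression algorithm.
    return zlib.compress(json.dumps(instruction).encode("utf-8"))

def parse_packet(packet: bytes) -> dict:
    return json.loads(zlib.decompress(packet).decode("utf-8"))

packet = encapsulate({"media_id": "video-001", "category": "audio"})
recovered = parse_packet(packet)
```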
Fig. 4 is a flowchart illustrating an embodiment of the present disclosure for performing an information conversion process on first media information to obtain second media information, where, as shown in fig. 4, performing an information conversion process on the first media information to obtain the second media information includes the following steps.
In step 3041, the server obtains the specified first media information ID and second media category from the information conversion processing instruction.
The first media information ID is used to select the first media information, and the second media type is used to indicate the media type to which the second media information to be obtained after the information conversion processing is performed on the first media information belongs.
In some embodiments, the second media category is different from the first media category to which the first media information belongs.
In step 3042, the server performs information conversion processing on the first media information associated with the first media information ID with respect to the second media category to obtain second media information corresponding to the second media category.
In some embodiments, the second media category includes at least one of an audio category, an image category, a text category, and an image-text category, wherein the image-text category is composed of the image category and the text category, and image-text information is composed of image information and text information.
In some embodiments, the server performs an information conversion process on the first media information associated with the first media information ID with respect to the second media category to obtain second media information corresponding to the second media category, including: and extracting media information associated with the second media type from the first media information, and determining the extracted media information as second media information corresponding to the second media type.
For example, when the first media information is short video information and the second media category is an audio category, the audio information extracted from the first media information by an audio-video format conversion method, a soundtrack extraction method, or the like is determined as the second media information. The method for obtaining the audio information from the video information can be implemented by using the prior art, and is not described herein again.
For example, in the case where the first media information is short video information and the second media category is an image category: in some embodiments, a frame-splitting intelligent recognition algorithm is used to perform information conversion processing on the short video information to obtain at least one key frame image in the short video information, and the key frame images are used as the picture set associated with the short video information; in some embodiments, an artificial intelligence algorithm is employed to identify the key frame images to obtain a refined picture collection; in some embodiments, an artificial intelligence algorithm is employed to identify a highlight video within the short video information, and the video frames of the highlight video are converted into a dynamic picture; in some embodiments, the format of the picture set, the picture collection, or the dynamic picture is modified to obtain an emoticon.
For example, in the case where the first media information is short video information and the second media category is a text category: in some embodiments, a voice recognition method is used to convert the speech in the video into text, the introduction provided by the client when uploading the video is obtained, video content tags are obtained, and an artificial intelligence algorithm is used to extract features in the short video and convert them into a text description; all of the obtained text information about the short video information is then synthesized by artificial intelligence means to form an article associated with the first media information.
For example, in the case where the first media information is short video information and the second media category is an image-text category: in some embodiments, the picture set, dynamic pictures, and emoticons associated with the short video information are combined with the article associated with the short video information to form the image-text information associated with the short video information.
For example, suppose the short video information is a cooking video. Based on the above embodiments, information conversion processing is performed on the cooking video to obtain audio information associated with the cooking video; a picture set, dynamic pictures, and emoticons related to individual segments of the cooking video; and an article related to the cooking video (e.g., the steps of the recipe). The picture set, dynamic pictures, emoticons, and article are then combined to form image-text information associated with the cooking video (e.g., a recipe introduction presented with combined images and text).
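The final combination step can be sketched as below. The interleaving layout policy and all field names are illustrative assumptions, not part of the disclosed method:

```python
# Hypothetical composition of image-text information from the converted
# pieces, following the cooking-video example above: interleave one image
# with each text paragraph; leftover paragraphs are appended as plain text.

def compose_image_text(article_paragraphs, images):
    blocks = []
    for i, para in enumerate(article_paragraphs):
        blocks.append({"type": "text", "content": para})
        if i < len(images):
            blocks.append({"type": "image", "content": images[i]})
    return blocks

image_text = compose_image_text(
    ["Step 1: beat the eggs.", "Step 2: heat the pan."],
    ["frame_step1.png"])
```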
In some embodiments, the server performs feature analysis on the extracted media information in response to the information conversion processing instruction, to obtain media information meeting a feature requirement as the second media information corresponding to the second media category, where the feature requirement is associated with the information conversion processing instruction. For example, for short video information, the feature requirement specifies features of the short video content to be intercepted; when content corresponding to the feature requirement appears in the short video, the related video clip is extracted to obtain audio information, image information, text information, and image-text information related to that clip. The feature analysis of the extracted media information can be implemented by artificial intelligence means.
In step 3043, the server stores the obtained second media information.
In some embodiments, the media information processing method of the embodiments of the present disclosure further includes:
and analyzing the live media information to obtain at least one piece of first media information from the live media information.
In some embodiments, the live media information is live video information and the first media information is short video. Because the live media information has long duration, useful video content is often interspersed throughout the live media information.
In some embodiments, based on the analysis of the change of the live related information during the live process, the live video data is intercepted accordingly to obtain at least one piece of first media information.
In some embodiments, the live related information includes the conversational expressions of the streamer during the live broadcast, the interaction status of viewers watching the live broadcast, and the like. For example, live e-commerce video data is monitored; if a trigger condition occurs in the live video data, the video data within a set time range before and after the trigger condition is intercepted to generate a short video, or the video data within a time range around the trigger condition is intercepted according to an artificial intelligence method to generate the short video. The trigger condition includes scripted dialogue (such as a countdown before order grabbing), the pushing of a commodity link, a surge in orders, a surge in bullet-screen comments, and the like. A surge in orders may be determined from the degree of increase of the order quantity per unit time: if the growth rate of the order quantity in the current unit time period, compared with the order quantity in the previous unit time period, reaches a set threshold, a surge in orders is determined. A surge in bullet-screen comments may be determined similarly: if the growth rate of the number of bullet-screen comments generated in the current unit time period, compared with the previous unit time period, reaches a set threshold, a surge in bullet-screen comments is determined.
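The surge test described above reduces to comparing a per-period growth rate against a set threshold. A sketch (the zero-previous-count policy is an assumption of this illustration):

```python
# Sketch of the surge test: a surge is declared when the count (orders or
# bullet-screen comments) in the current unit period grows, relative to
# the previous unit period, by at least a set threshold rate.

def is_surge(prev_count: int, curr_count: int, threshold: float) -> bool:
    if prev_count == 0:
        # Edge-case policy (an assumption): any activity after a silent
        # period counts as a surge.
        return curr_count > 0
    growth_rate = (curr_count - prev_count) / prev_count
    return growth_rate >= threshold

# 50 orders -> 200 orders is a 300% increase, above a 100% threshold.
assert is_surge(50, 200, threshold=1.0)
```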
In some embodiments, after a live video is captured and a short video is generated, the generated short video serves as first media information, and after the media information processing method according to the embodiments of the present disclosure is performed, obtained second media information can be used for presentation after live broadcasting is completed.
In some embodiments, after the live delivery is finished, the obtained audio information, image information, text information and image-text information about various commodities in the live delivery can be used for full display of commodity pages in an application program in the terminal.
Fig. 5 is a flowchart illustrating an application scenario of a media information processing method according to an exemplary embodiment, which includes the following steps, as shown in fig. 5.
In step 501, live video data is monitored, and video content is extracted according to a trigger condition to generate a short video.
In some embodiments, step 501 is optional: for a short video that does not come from a live broadcast, the short video is created by an application in the terminal and uploaded to the server.
In some embodiments, monitoring live video data and generating short videos are performed on the server side.
In some embodiments, the monitoring of the live video data and the generation of the short video are performed on the terminal side, and the short video is uploaded to the server after the terminal generates the short video.
In some embodiments, the trigger condition is, for example, a conversational expression condition, an interaction condition, or the like during the live broadcast. For example, in a live e-commerce scene, the trigger conditions include scripted dialogue, the pushing of a commodity link, a surge in orders, a surge in bullet-screen comments, and the like.
In step 502, the audio in the short video is extracted to generate audio information.
In step 503, the short video is subjected to information conversion processing to obtain image information, where the image information includes at least one of a picture set, a picture album, a dynamic picture, and an emoticon.
In step 504, the short video is processed to obtain text information.
In some embodiments, step 502, step 503, and step 504 may be performed simultaneously or at different times.
In some embodiments, step 502, step 503, and step 504 are performed by different information conversion processing systems, respectively.
In some embodiments, steps 502, 503 and 504 are performed with the aid of artificial intelligence techniques to obtain more accurate audio information, image information and text information.
In step 505, the image information and the text information are combined to obtain the teletext information.
In some embodiments, when the terminal uploads the short video to the server, the server performs information conversion processing on the short video using intelligent algorithms, deriving content such as audio information, picture sets, articles, and image-text content related to the short video, thereby enriching the content available on the server.
In some embodiments, the server decomposes the short video to obtain audio information.
In some embodiments, the server extracts key pictures from the short video and merges pictures of the same category to obtain picture sets, dynamic images, emoticons, and the like.
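The merging of same-category pictures into picture sets can be sketched as a simple grouping step (a hypothetical sketch; how categories are assigned to the extracted pictures is outside this snippet):

```python
from collections import defaultdict


def build_picture_sets(frames):
    """Group extracted key pictures by their assigned category label.

    `frames` is an iterable of (category, picture_id) pairs; pictures that
    share a category are merged into one picture set.
    """
    sets = defaultdict(list)
    for category, picture_id in frames:
        sets[category].append(picture_id)
    return dict(sets)
```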
In some embodiments, the server obtains textual information from the short video to form an article.
In some embodiments, the server synthesizes the image-text (teletext) information from the articles together with the picture sets, dynamic images, emoticons, and the like.
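The synthesis of image-text information from an article and its pictures, as in step 505, might be sketched as interleaving text paragraphs with images (the block structure and field names are assumptions for illustration):

```python
def compose_teletext(article_paragraphs, images):
    """Interleave article paragraphs with images to form image-text
    (teletext) information; leftover images are appended at the end."""
    blocks = []
    imgs = list(images)
    for para in article_paragraphs:
        blocks.append(("text", para))
        if imgs:
            blocks.append(("image", imgs.pop(0)))
    blocks.extend(("image", i) for i in imgs)
    return {"category": "teletext", "blocks": blocks}
```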
In some embodiments, for live data, the server obtains short videos by extracting highlight moments from the interaction. A highlight moment can be identified through the speech expression during the live broadcast, the interaction state of viewers, and the like. For example, live commerce video data is monitored; if a highlight-moment trigger condition occurs, video data within a set time range before and after the trigger is intercepted to generate a highlight-moment short video, or the interception range is determined by an artificial-intelligence method. Highlight-moment trigger conditions include scripted speech (such as a countdown before an order rush), the pushing of a commodity link, a surge in orders, a surge in barrage comments, and the like. A surge in orders may be judged by the growth of the order quantity per unit time: if the growth rate of the order quantity in the current unit time period, compared with the previous unit time period, reaches a set threshold, a surge is determined. A surge in barrage comments is judged analogously, from the growth rate of the number of barrage comments in the current unit time period compared with the previous one against a set threshold.
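The threshold-based surge judgment described above can be sketched directly (a minimal sketch; the default threshold of 2.0, i.e. a 200% growth rate, is an assumption, as the disclosure only speaks of "a set threshold"):

```python
def is_surge(previous_count: int, current_count: int, threshold: float = 2.0) -> bool:
    """Detect an order-quantity or barrage-comment surge.

    A surge is declared when the growth rate of the count in the current
    unit time period, relative to the previous period, reaches the set
    threshold.
    """
    if previous_count <= 0:
        # No baseline: treat any activity as a surge (assumed edge-case policy).
        return current_count > 0
    growth_rate = (current_count - previous_count) / previous_count
    return growth_rate >= threshold
```

The same function serves both the order-quantity and the barrage-comment checks, since the disclosure describes them identically.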
In some embodiments, the first media category to which the first media information belongs includes a text category, and the first media information is text information.
In some embodiments, the second media category to which the second media information belongs includes an audio category, and the second media information is audio information.
In some embodiments, the server performs an information conversion process on the text information to obtain audio information associated with the text information.
In some embodiments, the terminal performs an information conversion process on the text information to obtain audio information associated with the text information.
In some embodiments, the server acts as a supply end. The supply end converts media information using the media information processing method of the embodiments of the present disclosure, improving the ecological richness of the media information service: for example, short videos are converted into audio, text and picture sets, and text is converted into audio. This greatly enriches the content ecology of the media information service software and provides more content-format support for scenes such as information feeds, shopping and search, so that information feeds, shopping pages and search results can each be presented in various forms such as video, audio, picture sets, text and image-text.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the media information processing method provided by the embodiments of the present disclosure, information conversion processing is performed on the first media information to obtain at least one type of second media information associated with it, and the corresponding second media information is sent for viewing when needed. This solves the problem that media information in various media forms cannot be selectively acquired on demand.
Fig. 6 is a flowchart illustrating a media information presenting method according to an exemplary embodiment. Referring to fig. 6, the media information presenting method is applied to a computer device; here, the computer device being a terminal is taken as an example for description.
In step 601, a preset operation behavior of the currently presented first media information is monitored.
In step 602, a preset browsing instruction is generated and sent in response to a preset operation behavior.
In step 603, at least one third media information is received and presented.
And the at least one third media information is selected from at least one second media information in response to a preset browsing instruction, and the at least one second media information is obtained by performing information conversion processing on the first media information.
The media information presentation method provided by the embodiments of the present disclosure can, based on a preset operation behavior, receive and present other types of media information associated with the currently presented first media information, and can switch among media types based on the preset operation behavior. This solves the problem that a terminal cannot selectively acquire media information in various media forms on demand, and extends the terminal's support for diverse ways of acquiring media information.
In some examples, the at least one third media information is selected from the at least one second media information in response to a preset browsing instruction, and the at least one second media information is obtained by performing an information conversion process on the first media information, in which case the third media information is associated with the first media information.
In some examples, it is desirable to obtain relevant audio information, image information, text information, and teletext information from the video information. In this case, the first media category to which the first media information belongs includes a video category;
the second media category to which the second media information belongs includes: the audio category, the image category, the text category and the image-text category, wherein the image-text category is composed of the image category and the text category.
In some examples, an image may be a static image, a dynamic image, or an emoticon. In this case, the image category includes at least one of a picture-set sub-category, a dynamic-image sub-category, and an emoticon sub-category. Correspondingly, the image information may include at least one of a picture set, a dynamic image, and an emoticon. In some examples, a picture set may be a set of at least one static picture.
In some examples, there is a need to obtain corresponding speech information based on text information. To meet this need, in some examples, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 7 is an interaction flowchart illustrating a media information presentation method according to an exemplary embodiment. As shown in fig. 7, the media information presentation method is used in an interaction process between a terminal and a server, where the server is a computer device and the terminal may be a smart phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, or a desktop computer. The embodiment includes the following steps.
In step 701, the terminal monitors a preset operation behavior of the currently presented first media information.
An application program supporting media information (such as short videos) is installed and runs on the terminal; the application can be started on the terminal and logged in to, and the first media information presented in the application can be browsed. In some embodiments, the first media information is a short video.
In some embodiments, the preset operation behavior includes at least one of a click, a slide, a press, a spatial gesture, and the like. In some embodiments, the preset operation behavior of the currently presented first media information includes at least one of a direct operation on the first media information presentation interface, an operation on an interface element associated with the first media information, and the like. In some embodiments, the interface element associated with the first media information includes at least one interface display element such as an icon or text.
Fig. 8 is a diagram illustrating an operation performed on a terminal according to an exemplary embodiment. As shown in fig. 8, the terminal 801 is a mobile terminal device with a display screen 8011 that has a touch-sensing function, and a finger 802 provides input to the terminal 801 through contact on the display screen 8011. In some embodiments, an application is installed and runs in the terminal 801. In some embodiments, the display interface of the application includes an interface presentation area 80111 presented in the display screen 8011. In some embodiments, the first media information is displayed in the interface presentation area 80111. In some embodiments, the display interface of the application further includes a first interface operation menu 80112, located below the interface presentation area 80111, which contains operation elements for the finger 802 to operate; in some embodiments, an operation element is at least one of a label, an icon, and text, or a combination of at least two of them. In some embodiments, the display interface of the application further includes a second interface operation menu 80113, located above the interface presentation area 80111, which likewise contains operation elements for the finger 802 to operate; in some embodiments, these operation elements are again at least one of a label, an icon, and text, or a combination of at least two of them.
In some embodiments, the predetermined operational behavior of the currently presented first media information includes operational behavior of the interface presentation area 80111 presenting the first media information. In some embodiments, the operational behavior of the interface presentation area 80111 presenting the first media information includes at least one of clicking, sliding, and pressing. In some embodiments, the operational behavior of the interface presentation area 80111 that presents the first media information includes at least one of a single-finger operation, a two-finger operation, and a multi-finger operation. In some embodiments, the operational behavior of the interface presentation area 80111 presenting the first media information includes at least one of a single-finger click, a single-finger swipe, a single-finger press, a double-finger click, a double-finger swipe, a double-finger press, a multi-finger click, a multi-finger swipe, and a multi-finger press.
In some embodiments, the preset operation behavior of the currently presented first media information includes an operation on an operation element in the first interface operation menu 80112. In some embodiments, the manipulation of the operational element in the first interface operational menu 80112 includes at least one of clicking, sliding, and pressing the operational element. In some embodiments, the operation on the operation element in the first interface operation menu 80112 includes at least one of a single-finger operation, a two-finger operation, and a multi-finger operation on the operation element. In some embodiments, the manipulation of the operational element in the first interface operational menu 80112 includes at least one of a single-finger click, a single-finger swipe, a single-finger press, a double-finger click, a double-finger swipe, a double-finger press, a multi-finger click, a multi-finger swipe, and a multi-finger press of the operational element.
In some embodiments, the preset operation behavior of the currently presented first media information includes an operation on an operation element in the second interface operation menu 80113. In some embodiments, the manipulation of the operational element in the second interface operational menu 80113 comprises at least one of clicking, sliding, and pressing the operational element. In some embodiments, the operation on the operation element in the second interface operation menu 80113 includes at least one of a single-finger operation, a two-finger operation, and a multi-finger operation on the operation element. In some embodiments, the manipulation of the operational element in the second interface operational menu 80113 comprises at least one of a single-finger click, a single-finger swipe, a single-finger press, a double-finger click, a double-finger swipe, a double-finger press, a multi-finger click, a multi-finger swipe, and a multi-finger press of the operational element.
In some embodiments, the preset operation behavior of the first media information currently being presented includes a spatial gesture motion monitored by a terminal camera. In some embodiments, the spatial gesture motion includes a side-to-side swing, an up-and-down swing, or a diagonal swing of the hand relative to the display screen 8011, a movement of the hand toward the display screen 8011, a movement of the hand away from the display screen 8011, or a particular gesture of the hand.
In step 702, the terminal generates and sends a preset browsing instruction in response to a preset operation behavior.
In some embodiments, the preset browsing instructions are instructions regarding browsing other media categories associated with the first media information.
In some embodiments, the browsing instruction includes the first media information ID and a third media category, where the third media category is a media category to which the media information to be browsed belongs.
In step 703, the server receives a browsing instruction.
In step 704, the server selects at least one third media information from the at least one second media information in response to the browsing instruction.
In some embodiments, the second media information is associated with the first media information, and the server determines the second media information associated with the first media information by the first media information ID. In some embodiments, the server searches all the second media information associated with the first media information for the second media information of which the media category is the third media category in the browsing instruction, and determines the found second media information as the third media information.
In some embodiments, the second media information is obtained by the media information processing method described in the above embodiments.
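The server-side selection in step 704 amounts to filtering the second media information associated with the first media information ID by the requested third media category; a minimal sketch, with an assumed record layout:

```python
def select_third_media(catalog, browse_instruction):
    """Select third media information: among all second media information
    associated with the first media information ID, return the records whose
    media category matches the third media category in the browsing instruction.

    `catalog` maps first_media_id -> list of {"category", "content"} records.
    """
    first_id = browse_instruction["first_media_id"]
    wanted = browse_instruction["third_category"]
    candidates = catalog.get(first_id, [])
    return [m for m in candidates if m["category"] == wanted]
```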
In step 705, the server transmits the selected at least one third media information.
In step 706, the terminal receives and presents at least one third media information.
In some embodiments, the terminal presents the received third media information in the interface presentation area 80111.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 9 is a flowchart illustrating an application scenario of a media information presenting method according to an exemplary embodiment. In this application scenario, other media-category information associated with video information is obtained by operating on the video information currently presented on a terminal. The embodiment includes the following steps.
In step 901, the terminal monitors a preset operation behavior on the currently presented video information.
In step 902, the terminal generates and sends a teletext browsing instruction in response to a preset operation behavior.
In some embodiments, the teletext browsing instructions are instructions relating to browsing a category of teletext media associated with the video information.
In some embodiments, the teletext browsing instruction includes a video information ID and a teletext media category, where the teletext media category is a media category to which the teletext information to be browsed belongs.
In step 903, the server receives a teletext browsing instruction.
In step 904, the server selects the teletext information from the audio information, the image information, the text information, and the teletext information in response to the teletext browsing instruction.
In some embodiments, the audio information, the image information, the text information and the image-text information are all associated with the video information currently presented by the terminal, and the server determines them through the video information ID. The server then searches, among all of this associated information, for the image-text information whose media category is the image-text media category specified in the browsing instruction.
In step 905, the server sends the selected teletext information.
In step 906, the terminal receives and presents the teletext information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 10 is a flowchart illustrating another media information presenting method according to an exemplary embodiment. Referring to fig. 10, the media information presenting method is applied to a computer device; here, the computer device being a terminal is taken as an example for description.
In step 1001, a preset interface operation behavior is monitored.
In step 1002, a preset browsing instruction is generated and sent in response to a preset interface operation behavior.
In step 1003, at least one piece of third media information of the same category is received and presented.
Each piece of third media information is selected from at least one piece of second media information; the at least one piece of second media information is obtained by performing information conversion processing on the at least one piece of first media information.
The media information presentation method provided by the embodiment of the disclosure can acquire various types of media information for presentation based on the preset operation behavior, and switches between the displayed media types based on the preset operation behavior, thereby solving the problem that the media information in various media forms cannot be selectively acquired according to requirements.
In some examples, the number of pieces of third media information of the same category to be presented may be large, so that the terminal cannot present all of them at once. In order for the terminal to present as much third-media content as possible at the same time, the receiving and presenting of at least one piece of third media information of the same category in step 1003 includes:
receiving and presenting summary information of at least one piece of third media information in the same category;
and presenting third media information associated with any piece of summary information in response to the preset operation behavior of the any piece of summary information.
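The summary-then-content flow above can be sketched with two small helpers (the record fields, summary length, and function names are assumptions for illustration):

```python
def summarize(items, max_len=20):
    """Produce summary records for a list of same-category third media
    information, so the terminal can show many entries at once; the full
    content is fetched later by id."""
    return [{"id": it["id"], "summary": it["content"][:max_len]} for it in items]


def fetch_by_summary(items, summary):
    """Return the third media information associated with a summary record."""
    for it in items:
        if it["id"] == summary["id"]:
            return it
    return None
```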
In some examples, it is desirable to obtain relevant audio information, image information, text information, and teletext information from the video information. In this case, the first media category to which the first media information belongs includes a video category; the second media category to which the second media information belongs includes at least one of an audio category, an image category, a text category, and an image-text (teletext) category, where the image-text category is composed of the image category and the text category.
In some examples, an image may be a static image, a dynamic image, or an emoticon. In this case, the image category includes at least one of a picture-set sub-category, a dynamic-image sub-category, and an emoticon sub-category. Correspondingly, the image information may include at least one of a picture set, a dynamic image, and an emoticon. In some examples, a picture set may be a set of at least one static picture.
In some examples, there is a need to obtain corresponding speech information based on text information. To meet this need, in some examples, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 11 is an interaction flowchart illustrating another media information presentation method according to an exemplary embodiment, where, as shown in fig. 11, the media information presentation method is used in an interaction process between a terminal and a server, where the server is a computer device, and the terminal may be a smart phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, or a desktop computer, and the embodiment includes the following steps.
In step 1101, the terminal monitors the operation behavior of the preset interface.
In some embodiments, the preset interface operation behavior comprises an operation behavior of a display interface of the application program.
In step 1102, the terminal generates and sends a preset browsing instruction in response to a preset interface operation behavior.
In some embodiments, the interface operation behavior includes at least one of a click, a slide, a press, a spatial gesture, and the like. In some embodiments, the interface operation behavior includes an operation on a display interface of the application. In some embodiments, the display interface of the application includes a media information display area in the display interface and at least one of an icon, text, etc. interface display element. In some embodiments, as shown in fig. 8, the display interface of the application includes at least one of an interface presentation area 80111, a first interface operation menu 80112, and a second interface operation menu 80113.
In some embodiments, the interface operation behavior includes a click operation on a tab (tab) in a display interface of the application. In some embodiments, the tabs are configured to be presented in the first interface operations menu 80112, in some embodiments, the tabs are configured to be presented in the second interface operations menu 80113, and in some embodiments, the tabs are configured to be presented in the interface presentation area 80111.
In some embodiments, tags are configured to be associated with media categories. In some embodiments, the tag comprises: at least one of a video class tag, an audio class tag, an image class tag, and a text class tag.
In an optional embodiment, the preset browsing instruction is associated with the tag targeted by the preset interface operation behavior: clicking the video-class tag generates a video information browsing instruction; clicking the audio-class tag generates an audio information browsing instruction; clicking the image-class tag generates an image information browsing instruction; and clicking the text-class tag generates a text information browsing instruction.
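The association between tags and preset browsing instructions can be sketched as a lookup table (the tag and instruction names are illustrative assumptions):

```python
# Assumed tag-to-instruction mapping; names are illustrative only.
TAG_TO_INSTRUCTION = {
    "video": "video_information_browsing",
    "audio": "audio_information_browsing",
    "image": "image_information_browsing",
    "text": "text_information_browsing",
}


def browsing_instruction_for_tag(tag: str) -> str:
    """Map a clicked category tag to its preset browsing instruction."""
    try:
        return TAG_TO_INSTRUCTION[tag]
    except KeyError:
        raise ValueError(f"unknown tag: {tag}")
```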
In step 1103, the server receives a preset browsing instruction.
In step 1104, the server selects at least one piece of third media information in the same category in response to a preset browsing instruction.
In some embodiments, at least one same category of third media information is respectively associated with different first media information.
In step 1105, the server transmits summary information of the selected at least one piece of third media information of the same category.
In step 1106, the terminal receives and presents summary information of at least one piece of third media information of the same category.
In an optional embodiment, the terminal receives and displays summary information of the media information associated with the media category of the preset browsing instruction. In an optional embodiment, for a preset browsing instruction associated with a preset interface operation behavior on the video-class tag, the terminal receives and displays summary information of video information; for one associated with the audio-class tag, summary information of audio information; for one associated with the image-class tag, summary information of image information; and for one associated with the text-class tag, summary information of text information.
In step 1107, the terminal generates and transmits a content browsing instruction in response to a preset interface operation behavior for any piece of summary information.
In some embodiments, the content browsing instructions are associated with third media information corresponding to the summary information.
In step 1108, the server receives content browsing instructions.
In step 1109, the server responds to the content browsing instruction and selects the third media information corresponding to the summary information.
In step 1110, the server transmits the selected third media information.
In step 1111, the terminal receives and presents the third media information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 12 is a flowchart illustrating an application scenario of another media information presentation method according to an exemplary embodiment, where the application scenario is an embodiment of operating a display interface in a terminal to obtain at least one piece of media information in the same media category, and the embodiment includes the following steps.
In step 1201, the terminal monitors the operation behavior of the preset interface.
In step 1202, the terminal generates and sends a preset browsing instruction in response to a preset interface operation behavior, where the preset browsing instruction indicates to browse the image information.
In step 1203, the server receives a preset browsing instruction.
In step 1204, the server responds to a preset browsing instruction to select at least one piece of image information.
In step 1205, the server transmits summary information of the selected at least one piece of image information.
In step 1206, the terminal receives and presents summary information of at least one piece of image information.
In step 1207, the terminal generates and sends a content browsing instruction in response to a preset interface operation behavior for any piece of summary information.
In step 1208, the server receives a content browsing instruction.
In step 1209, the server responds to the content browsing instruction, and selects the image information corresponding to the summary information.
In step 1210, the server transmits the selected image information.
In step 1211, the terminal receives and presents the image information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 13 is a flowchart illustrating an application scenario in which a terminal executes a media information presentation method; as shown in fig. 13, the embodiment includes the following steps.
In step 1301, the terminal opens an application.
In some embodiments, the application includes both a short video service and a shopping service.
In step 13021, the terminal presents the pushed short video.
In step 13022, clicking a tab or long-pressing the short video switches to the audio version mode, the image version mode or the text version mode; clicking a tab or long-pressing also switches among the short video, audio version, image version and text version modes.
In step 13023, the audio version mode is entered and the audio version of the short video is played.
In some embodiments, the audio version mode may be used in scenarios such as running.
In step 13024, an image version mode is entered to quickly browse various pictures associated with the short video.
In step 13025, the text version mode is entered to quickly browse various articles related to the short video.
In step 13026, when content of interest is heard, the user may switch back to the short video, or switch among the modes.
In step 13031, the shopping module is entered.
In step 13032, clicking a tab switches the shopping mode between live commerce and shelf e-commerce.
In step 13033, in the live mode, the live commerce video is played.
In step 13034, commodity list information is displayed, including commodity picture information, text information and price information; the list information is derived from information uploaded by the merchant.
In step 13035, a product detail page is displayed, which includes short videos, pictures, and article descriptions of highlight moments in the live video, and this information is obtained by the media information processing method described in each of the above embodiments.
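The product detail page of step 13035, which mixes merchant-uploaded listing data with media generated by the conversion processing, could be assembled roughly as follows; all field names are assumptions.

```python
# Illustrative assembly of the step 13035 product detail page. The field
# names are assumptions; the converted media would come from the media
# information processing method of the above embodiments.
def build_detail_page(listing, converted_media):
    return {
        "title": listing["title"],
        "price": listing["price"],
        "highlight_video": converted_media.get("video"),
        "pictures": converted_media.get("images", []),
        "article": converted_media.get("text"),
    }

page = build_detail_page(
    {"title": "Sample item", "price": 9.9},
    {"video": "highlight.mp4", "images": ["a.jpg"], "text": "intro"},
)
```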
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
In some embodiments, the terminal is a consumer terminal. The consumer terminal uses the media information presentation method of the embodiments of the present disclosure to meet the need to convert the same content into multiple media formats under different network conditions and in eyes-free scenarios.
Referring to fig. 13, in some embodiments, the consumer side involves several scenarios.
(1) Short video scrolling scenario
The terminal can invoke mode switching by monitoring tab-click or long-press operation behaviors on the short video, where the modes include an audio mode, a text mode, a picture set mode, and the like;
by monitoring the operation behaviors that select different modes, the terminal can switch from playing the short video to playing audio, browsing a picture set, reading an article, and the like;
while playing audio or presenting the picture set, the article, or the image-text content, the terminal can switch back to short video playing through a monitored operation behavior, so as to realize mutual switching among various kinds of media information.
(2) Shopping demand scenario
While watching live commerce, the terminal can switch the presentation mode by monitoring tab-click operation behaviors, where the presentation modes include a live commerce preview list mode and a shelf e-commerce mode;
by monitoring the operation behaviors that select different presentation modes, the terminal can choose to play the live commerce stream and present an item preview list in the live room, or switch to the shelf e-commerce interface; background data (item pictures, titles, prices, and the like) are stored in the server, which pushes the data corresponding to the interface presented by the terminal, and a product detail page can also include the audio, video, picture set, text, and the like generated by the media information processing method of the embodiments disclosed above.
(3) Search scenario
In addition to searching live streams and short videos, at least one of a picture set, an article, a dynamic picture, and an emoticon can be searched.
In some embodiments, the method further includes searching the second media information to obtain the required media information therefrom; in some embodiments, the search of the second media information is performed in an existing search manner.
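The search over the second media information described above can be sketched as a category-filtered keyword match; the record layout and the matching rule are assumptions for illustration.

```python
# Minimal illustration of searching second media information by keyword,
# optionally restricted to one media category. All data are placeholders.
records = [
    {"category": "picture_set", "title": "city night picture set"},
    {"category": "article", "title": "city travel notes"},
    {"category": "emoticon", "title": "cat emoticon"},
]

def search(keyword, category=None):
    hits = [r for r in records if keyword in r["title"]]
    if category is not None:
        hits = [r for r in hits if r["category"] == category]
    return hits

city_hits = search("city")                    # both "city" records
album_hits = search("city", "picture_set")    # only the picture set
```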
Fig. 14 is a block diagram illustrating a logical structure of a media information acquisition apparatus according to an exemplary embodiment, and referring to fig. 14, the apparatus includes an acquisition module 1401, an information conversion module 1402, and a selection transmission module 1403.
An obtaining module 1401 configured to perform obtaining first media information;
an information conversion module 1402 configured to perform an information conversion process on the first media information to obtain at least one second media information, where the second media information is associated with the first media information;
a selecting and sending module 1403 configured to perform selecting at least one third media information from the at least one second media information in response to a preset browsing instruction triggered based on the first media information, and sending the at least one third media information.
The media information acquisition apparatus provided by the embodiments of the present disclosure performs information conversion processing on the first media information to obtain at least one category of second media information associated with the first media information, and sends the corresponding second media information for viewing when needed, thereby solving the problem that media information in various media forms cannot be selectively acquired as needed.
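The acquisition, conversion, and selection/sending pipeline of modules 1401 to 1403 can be sketched as one class; the method names and the string-based stand-ins for converted media are assumptions, not the claimed implementation.

```python
# Hedged sketch of modules 1401-1403; all names and the placeholder
# conversion are assumptions for illustration only.
class MediaInformationApparatus:
    def acquire(self, first_media):
        """Obtaining module 1401: obtain the first media information."""
        self.first = first_media
        return self.first

    def convert(self):
        """Information conversion module 1402: one entry per category."""
        self.second = {
            "audio": f"audio({self.first})",
            "text": f"text({self.first})",
        }
        return self.second

    def select_and_send(self, third_category):
        """Selection/sending module 1403: pick by third media category."""
        return self.second.get(third_category)

apparatus = MediaInformationApparatus()
apparatus.acquire("video-001")
apparatus.convert()
sent = apparatus.select_and_send("audio")  # -> "audio(video-001)"
```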
In one possible implementation, the obtaining module 1401 is configured to perform:
in response to the acquisition instruction, first media information is acquired, wherein the first media information is associated with the acquisition instruction.
In one possible implementation, the information conversion module 1402 is configured to perform:
responding to the information conversion processing instruction, and performing information conversion processing on the first media information to obtain at least one type of second media information;
wherein the information conversion processing instruction is associated with the first media information;
the media category of the second media information is associated with the information conversion processing instruction.
In one possible implementation, the information conversion module 1402 is further configured to perform:
according to the information identification and the second media category of the first media information specified in the information conversion processing instruction, performing information conversion processing related to the second media category on the first media information associated with the information identification of the first media information to obtain second media information corresponding to the second media category;
the second media category is different from the first media category to which the first media information belongs.
In one possible implementation, the information conversion module 1402 is further configured to perform:
Extracting media information associated with the second media category from the first media information, and determining the extracted media information as the second media information corresponding to the second media category.
In one possible implementation, the information conversion module 1402 is further configured to perform:
performing characteristic analysis on the extracted media information to obtain media information meeting characteristic requirements as second media information corresponding to a second media category;
wherein the feature requirements are associated with the information conversion processing instructions.
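The feature analysis could, for example, keep only extracted items whose feature score meets a requirement carried by the conversion instruction; the scoring field and threshold below are hypothetical.

```python
# Illustrative feature-analysis filter; the "score" field and threshold
# are assumptions standing in for whatever feature requirement the
# information conversion processing instruction carries.
def filter_by_feature(extracted, min_score):
    return [item for item in extracted if item["score"] >= min_score]

frames = [
    {"name": "frame-1", "score": 0.9},
    {"name": "frame-2", "score": 0.4},
]
second_media = filter_by_feature(frames, min_score=0.5)  # keeps frame-1
```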
In one possible implementation, the selection sending module 1403 is configured to perform:
selecting second media information with a media category being a third media category from at least one second media information as third media information according to the information identifier of the first media information and the third media category specified in the preset browsing instruction;
wherein the second media information is associated with the information identification of the first media information.
In one possible implementation, the first media category to which the first media information belongs includes a video category; the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of an image category and a text category.
In one possible embodiment, the image category includes at least one of a picture set sub-category, a dynamic picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
In one possible implementation manner, the media information acquiring apparatus of the present disclosure further includes:
The live broadcast analysis module is configured to analyze the live broadcast media information to obtain at least one piece of first media information from the live broadcast media information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
With regard to the media information acquisition apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the media information processing method, and will not be described in detail here.
It should be noted that: in the foregoing embodiments, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
Fig. 15 is a block diagram showing a logical structure of a media information presentation apparatus according to an exemplary embodiment, and referring to fig. 15, the apparatus includes a monitoring module 1501, an instruction sending module 1502, and a reception presentation module 1503.
A monitoring module 1501, configured to perform monitoring of a preset operation behavior on the currently presented first media information;
the instruction sending module 1502 is configured to execute generating and sending a preset browsing instruction in response to a preset operation behavior;
a receiving and presenting module 1503 configured to perform receiving and presenting at least one third media information;
the at least one third media information is selected from the at least one second media information in response to a preset browsing instruction;
the at least one second media information is obtained by performing information conversion processing on the first media information.
The media information presentation device provided by the embodiments of the present disclosure can send other categories of media information associated with the presented first media information for presentation based on the preset operation behavior, and can switch among the presented media categories based on the preset operation behavior, thereby solving the problem that media information in various media forms cannot be selectively acquired as needed and extending the terminal's support for various media information acquisition requirements.
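The monitoring, instruction sending, and receiving flow of modules 1501 to 1503 can be sketched as follows; the behavior-to-category mapping, the tuple-shaped instruction, and the in-memory stand-in for the server are all assumptions.

```python
# Hedged sketch of modules 1501-1503. The behavior-to-category mapping
# and the dictionary standing in for the server are assumptions.
server_side = {("video-001", "audio"): "audio-of-video-001"}

def on_operation(first_media_id, behavior):
    """Modules 1501-1502: map a preset operation behavior to a
    preset browsing instruction (here a simple tuple)."""
    target_category = {"long_press": "audio"}.get(behavior)
    if target_category is None:
        return None
    return (first_media_id, target_category)

def receive_and_present(instruction):
    """Module 1503: receive and present the third media information."""
    return server_side.get(instruction)

instruction = on_operation("video-001", "long_press")
presented = receive_and_present(instruction)  # -> "audio-of-video-001"
```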
In one possible implementation, the third media information is associated with the first media information.
In one possible implementation, the first media category to which the first media information belongs includes a video category; the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of an image category and a text category.
In one possible implementation, the image category includes at least one of a picture set sub-category, a dynamic picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
With regard to the media information presentation apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the media information presentation method, and will not be elaborated here.
It should be noted that: in the foregoing embodiments, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
Referring also to fig. 15, another media information presentation apparatus according to an exemplary embodiment is shown in a logical block diagram, and includes a monitoring module 1501, an instruction sending module 1502, and a receiving presentation module 1503.
A monitoring module 1501 configured to perform monitoring of a preset interface operation behavior;
the instruction sending module 1502 is configured to perform generating and sending a preset browsing instruction in response to the preset interface operation behavior;
a receiving and presenting module 1503 configured to perform receiving and presenting at least one piece of third media information of the same category;
each piece of third media information is selected from at least one piece of second media information;
the second media information is obtained by performing information conversion processing on at least one piece of the first media information.
The media information presentation device provided by the embodiments of the present disclosure can acquire media information of multiple categories for presentation based on the preset interface operation behavior, and can switch among the presented media categories based on the preset interface operation behavior, thereby solving the problem that the terminal cannot selectively acquire media information in various media forms as needed.
In a possible implementation, based on the apparatus components of fig. 15, the receiving and presenting module 1503 includes:
the first receiving and presenting sub-module is configured to receive and present summary information of at least one piece of third media information in the same category;
and the second receiving and presenting sub-module is configured to perform, in response to a preset interface operation on any piece of summary information, presenting the third media information associated with that piece of summary information.
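The two-stage flow of these sub-modules, summaries first and then the full item chosen by an interface operation, can be sketched as follows; the data layout and identifiers are assumptions.

```python
# Illustrative two-stage presentation: summaries of same-category third
# media information first, then the full item behind one summary.
third_media = {
    "a1": {"summary": "Article one", "body": "full text one"},
    "a2": {"summary": "Article two", "body": "full text two"},
}

def present_summaries():
    """First sub-module: summary information of the same category."""
    return [info["summary"] for info in third_media.values()]

def present_detail(item_id):
    """Second sub-module: the media information behind one summary."""
    return third_media[item_id]["body"]

summaries = present_summaries()     # ["Article one", "Article two"]
detail = present_detail("a2")       # "full text two"
```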
In one possible implementation, the first media category to which the first media information belongs includes a video category; the second media category to which the second media information belongs includes: at least one of an audio category, an image category, a text category, and an image-text category; wherein the image-text category is composed of an image category and a text category.
In one possible embodiment, the image category includes at least one of a picture set sub-category, a dynamic picture sub-category, and an emoticon sub-category.
In one possible implementation, the first media category to which the first media information belongs includes a text category; the second media category to which the second media information belongs includes an audio category.
With regard to the media information presentation apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the media information presentation method, and will not be elaborated here.
It should be noted that: in the foregoing embodiments, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
Fig. 16 shows a block diagram of a terminal, which is an exemplary illustration of a computer device, according to an exemplary embodiment of the present disclosure. The terminal 1600 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1601 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
In some embodiments, a non-transitory computer readable storage medium in memory 1602 is used to store at least one instruction for execution by processor 1601 to implement the media information presentation methods provided by various embodiments of the present disclosure.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a touch screen display 1605, a camera assembly 1606, audio circuitry 1607, a positioning assembly 1608, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuitry, which is not limited by this disclosure.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1605, providing the front panel of the terminal 1600; in other embodiments, there may be at least two displays 1605, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in still other embodiments, the display 1605 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1600. Furthermore, the display 1605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 1605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 can also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of terminal 1600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The positioning component 1608 is configured to locate the current geographic location of the terminal 1600 for navigation or LBS (Location Based Service). The positioning component 1608 may be a positioning component based on the United States' GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Union's Galileo system.
Power supply 1609 is used to provide power to the various components of terminal 1600. Power supply 1609 may be alternating current, direct current, disposable or rechargeable. When power supply 1609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
Acceleration sensor 1611 may detect acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1601 may control the touch display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1612 can detect the body orientation and rotation angle of the terminal 1600, and the gyro sensor 1612 can cooperate with the acceleration sensor 1611 to acquire the user's 3D actions on the terminal 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
Pressure sensors 1613 may be disposed on a side bezel of terminal 1600 and/or underlying touch display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, a user's holding signal of the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the touch display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1614 is configured to collect a user's fingerprint, and the processor 1601 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 identifies the user's identity based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical key or vendor logo is provided on the terminal 1600, the fingerprint sensor 1614 may be integrated with the physical key or vendor logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the touch display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the touch display 1605 is turned down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
A proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the touch display 1605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the touch display 1605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the above-described arrangements are not limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 17 is a schematic structural diagram of a server according to an embodiment of the present disclosure, where the server 1700 may generate a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 1701 and one or more memories 1702, where the memory 1702 stores at least one program code, and the at least one program code is loaded and executed by the processors 1701 to implement the media information processing method according to the embodiments. Of course, the server 1700 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server 1700 may also include other components for implementing device functions, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium comprising at least one instruction, such as a memory comprising at least one instruction, is also provided, the at least one instruction being executable by a processor in a computer device to perform the media information processing method and/or the media information presentation method in the above embodiments.
Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, and the non-transitory computer-readable storage medium may include a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like, for example.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions that can be executed by a processor of a computer device to implement the media information processing method and/or the media information presentation method provided by the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A media information processing method, comprising:
acquiring first media information;
performing information conversion processing on the first media information to obtain at least one type of second media information;
and responding to a preset browsing instruction triggered based on the first media information, selecting at least one third media information from at least one second media information, and sending the at least one third media information.
2. The media information processing method according to claim 1, wherein the performing information conversion processing on the first media information to obtain at least one piece of second media information comprises:
in response to an information conversion processing instruction, performing information conversion processing on the first media information to obtain the at least one piece of second media information;
wherein the information conversion processing instruction is associated with the first media information, and the media category of the second media information is associated with the information conversion processing instruction.
3. The media information processing method according to claim 1, wherein the selecting at least one piece of third media information from the at least one piece of second media information in response to a preset browsing instruction triggered based on the first media information comprises:
selecting, as the third media information and according to the information identifier of the first media information and the third media category specified in the preset browsing instruction, the second media information whose media category is the third media category;
wherein the second media information is associated with the information identifier of the first media information.
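The processing flow of claims 1 to 3 can be sketched as follows. This is a minimal illustration, not the patented implementation; all names (`MediaInfo`, `convert`, `select_third`) are hypothetical stand-ins, and the conversion step is a placeholder for whatever media transformation an application would perform.

```python
# Hypothetical sketch of claims 1-3: derive several pieces of second media
# information from first media information, then answer a browsing
# instruction by selecting the piece whose category matches.

from dataclasses import dataclass

@dataclass
class MediaInfo:
    media_id: str   # information identifier of the originating first media
    category: str   # media category, e.g. "video", "audio", "text"
    payload: str

# Second media information, keyed by the identifier of the first media
# information it was derived from (the association required by claim 3).
_derived: dict[str, list[MediaInfo]] = {}

def convert(first: MediaInfo, target_categories: list[str]) -> list[MediaInfo]:
    """Information conversion processing: derive one piece of second media
    information per requested category from the first media information."""
    second = [MediaInfo(first.media_id, c, f"{first.payload} as {c}")
              for c in target_categories]
    _derived[first.media_id] = second
    return second

def select_third(first_id: str, third_category: str) -> list[MediaInfo]:
    """Handle a preset browsing instruction: select, as third media
    information, the second media whose category matches the one specified."""
    return [m for m in _derived.get(first_id, []) if m.category == third_category]

first = MediaInfo("m1", "video", "clip")
convert(first, ["audio", "text"])
print([m.category for m in select_third("m1", "audio")])  # ['audio']
```

The dictionary keyed by the first media's identifier mirrors the association stated in claim 3; a real system would presumably back this with persistent storage rather than an in-memory map.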
4. A media information presentation method, comprising:
monitoring a preset operation behavior on currently presented first media information;
in response to the preset operation behavior, generating and sending a preset browsing instruction; and
receiving and presenting at least one piece of third media information;
wherein the at least one piece of third media information is selected from at least one piece of second media information in response to the preset browsing instruction, and the at least one piece of second media information is obtained by performing information conversion processing on the first media information.
5. A media information presentation method, comprising:
monitoring a preset interface operation behavior;
in response to the preset interface operation behavior, generating and sending a preset browsing instruction; and
receiving and presenting at least one piece of third media information of the same category;
wherein each piece of third media information is selected from at least one piece of second media information, and the at least one piece of second media information is obtained by performing information conversion processing on at least one piece of first media information.
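The client-side flow of claims 4 and 5 can be illustrated with the sketch below. The transport layer and UI are stand-ins (`send`, `present` and the stub server are hypothetical), since the claims specify only the monitor → instruct → receive-and-present sequence, not any concrete interface.

```python
# Hypothetical sketch of claims 4-5: monitor a preset operation on the
# presented first media, send a browsing instruction, then receive and
# present the returned third media information.

def on_preset_operation(first_media_id: str, third_category: str, send):
    """Triggered by the monitored preset operation behavior: generate a
    preset browsing instruction and send it via the given transport."""
    instruction = {"media_id": first_media_id, "category": third_category}
    return send(instruction)

def present(items):
    """Receive and present at least one piece of third media information."""
    return [f"presenting {m['category']}:{m['payload']}" for m in items]

# Stub server answering the instruction with matching third media
# (playing the role of the processing side described in claims 1-3).
def stub_send(instruction):
    catalog = [{"media_id": "m1", "category": "audio", "payload": "track"},
               {"media_id": "m1", "category": "text", "payload": "summary"}]
    return [m for m in catalog
            if m["media_id"] == instruction["media_id"]
            and m["category"] == instruction["category"]]

items = on_preset_operation("m1", "audio", stub_send)
print(present(items))  # ['presenting audio:track']
```

In claim 5 the instruction would carry only the requested category (no single first-media identifier), so the server would gather same-category third media derived from several pieces of first media information.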
6. A media information processing apparatus, comprising:
an acquisition module configured to acquire first media information;
an information conversion module configured to perform information conversion processing on the first media information to obtain at least one piece of second media information; and
a selection and sending module configured to, in response to a preset browsing instruction triggered based on the first media information, select at least one piece of third media information from the at least one piece of second media information and send the at least one piece of third media information.
7. A media information presentation device, comprising:
a monitoring module configured to monitor a preset operation behavior on currently presented first media information;
an instruction sending module configured to, in response to the preset operation behavior, generate and send a preset browsing instruction; and
a receiving and presenting module configured to receive and present at least one piece of third media information;
wherein the at least one piece of third media information is selected from at least one piece of second media information in response to the preset browsing instruction, and the at least one piece of second media information is obtained by performing information conversion processing on the first media information.
8. A media information presentation device, comprising:
a monitoring module configured to monitor a preset interface operation behavior;
an instruction sending module configured to, in response to the preset interface operation behavior, generate and send a preset browsing instruction; and
a receiving and presenting module configured to receive and present at least one piece of third media information of the same category;
wherein each piece of third media information is selected from at least one piece of second media information, and the at least one piece of second media information is obtained by performing information conversion processing on at least one piece of first media information.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the media information processing method of any one of claims 1 to 3 and/or the media information presentation method of claim 4 or 5.
10. A computer-readable storage medium storing at least one instruction which, when executed by a processor of an electronic device, enables the electronic device to implement the media information processing method of any one of claims 1 to 3 and/or the media information presentation method of claim 4 or 5.
CN202111568529.XA 2021-12-21 2021-12-21 Media information processing method, media information presenting method and device Pending CN114268801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111568529.XA CN114268801A (en) 2021-12-21 2021-12-21 Media information processing method, media information presenting method and device


Publications (1)

Publication Number Publication Date
CN114268801A true CN114268801A (en) 2022-04-01

Family

ID=80828458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111568529.XA Pending CN114268801A (en) 2021-12-21 2021-12-21 Media information processing method, media information presenting method and device

Country Status (1)

Country Link
CN (1) CN114268801A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824799A (en) * 2016-03-14 2016-08-03 厦门幻世网络科技有限公司 Information processing method, equipment and terminal equipment
EP3110157A2 (en) * 2015-06-23 2016-12-28 Facebook, Inc. Streaming media presentation system
CN107609012A (en) * 2017-07-31 2018-01-19 珠海市魅族科技有限公司 Multimedia file treating method and apparatus, computer installation, readable storage medium storing program for executing
CN107943811A (en) * 2016-12-22 2018-04-20 腾讯科技(北京)有限公司 The dissemination method and device of content
CN111586490A (en) * 2020-04-28 2020-08-25 上海商汤临港智能科技有限公司 Multimedia interaction method, device, equipment and storage medium
CN111813969A (en) * 2019-11-08 2020-10-23 厦门雅基软件有限公司 Multimedia data processing method and device, electronic equipment and computer storage medium
CN112417180A (en) * 2019-08-23 2021-02-26 腾讯科技(深圳)有限公司 Method, apparatus, device and medium for generating album video


Similar Documents

Publication Publication Date Title
CN111079012B (en) Live broadcast room recommendation method and device, storage medium and terminal
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN112052897B (en) Multimedia data shooting method, device, terminal, server and storage medium
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN111327916B (en) Live broadcast management method, device and equipment based on geographic object and storage medium
CN111026992A (en) Multimedia resource preview method, device, terminal, server and storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN114302160B (en) Information display method, device, computer equipment and medium
CN112257006A (en) Page information configuration method, device, equipment and computer readable storage medium
CN112131422A (en) Expression picture generation method, device, equipment and medium
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN113609358B (en) Content sharing method, device, electronic equipment and storage medium
CN111382355A (en) Live broadcast management method, device and equipment based on geographic object and storage medium
CN113469779A (en) Information display method and device
CN113032590A (en) Special effect display method and device, computer equipment and computer readable storage medium
CN112235609B (en) Content item data playing method and device, computer equipment and storage medium
WO2023029237A1 (en) Video preview method and terminal
CN113377976B (en) Resource searching method and device, computer equipment and storage medium
CN112492331B (en) Live broadcast method, device, system and storage medium
CN112202958B (en) Screenshot method and device and electronic equipment
CN115905374A (en) Application function display method and device, terminal and storage medium
CN114268801A (en) Media information processing method, media information presenting method and device
CN115086774B (en) Resource display method and device, electronic equipment and storage medium
CN114780828A (en) Resource recommendation method and device, computer equipment and medium
CN113704621A (en) Object information recommendation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220401