CN113051233A - Processing method and device - Google Patents

Processing method and device

Info

Publication number
CN113051233A
Authority
CN
China
Prior art keywords
content
output
target
outputting
information
Prior art date
Legal status
Pending
Application number
CN202110339833.0A
Other languages
Chinese (zh)
Inventor
胡泽凡
邝宇豪
李翔
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202110339833.0A
Publication of CN113051233A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/16: File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/168: Details of user interfaces specifically adapted to file systems, e.g. browsing and visualisation, 2d or 3d GUIs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/14: Details of searching files based on file metadata
    • G06F 16/148: File search processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a processing method and a processing device. When target output content associated with content to be output exists, the target output content is output first; when it does not exist, target output content is generated from evaluation information for the content that has been output. In this way, a part of the content to be output that is not merely its beginning serves as the target output content, or the target output content is generated from the evaluation information, so that the target output content represents the main content of the content to be output. The main content of the content to be output is thus made clear in advance, and the user experience is improved.

Description

Processing method and device
Technical Field
The present application relates to the field of processing, and in particular, to a processing method and apparatus.
Background
When a user browses a plurality of video files, each of the video files is displayed as a thumbnail, and the user judges from the thumbnail whether the corresponding video file is the one the user wants to watch.
However, at present the thumbnail of a video file is usually the first frame image of the entire video file, and the first frame is usually not a key frame of the video. The user therefore cannot determine the key content of the current video file from its first frame image, which degrades the user experience.
Disclosure of Invention
In view of the above, the present application provides a processing method and apparatus, and the specific scheme is as follows:
a method of processing, comprising:
determining content to be output;
if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
and/or if the target output content related to the content to be output is not obtained, obtaining evaluation information aiming at the output content in the process of outputting the content to be output, and determining the target content at least according to the evaluation information so as to process the target content into the target output content representing the content to be output.
Further, the obtaining of the target output content associated with the content to be output at least includes one of:
if the target position of the content to be output is determined to have the target content, determining the target content as the target output content associated with the content to be output;
or,
if the associated file exists in the target path, determining to obtain the target output content;
or,
and if the partial content in the content to be output is determined to have the target identification, determining the content with the target identification as the target output content associated with the content to be output.
Further, the outputting the target output content includes:
obtaining attribute information of the content to be output, determining an output parameter of the target output content at least according to the attribute information, and outputting the target output content according to the output parameter;
or,
obtaining current environment information, determining an output parameter of the target output content at least according to the current environment information, and outputting the target output content according to the output parameter;
or,
obtaining attribute information and current environment information of the content to be output, determining output parameters of the target output content at least according to the attribute information and the current environment information, and outputting the target output content according to the output parameters;
or,
obtaining configuration information and/or historical use information of the electronic equipment, and determining output parameters of the target output content at least according to the configuration information and/or the historical use information so as to output the target output content according to the output parameters.
Further, the outputting the target output content includes:
if the content to be output is video content, outputting the target output content at a target frame rate, wherein the target frame rate is the same as or different from the output frame rate of the content to be output;
or,
if the content to be output is audio content, outputting the target output content in an audio and/or text mode;
or,
if the content to be output is image content, outputting the target output content in a picture or moving picture mode;
or,
and if the content to be output is text content, outputting the target output content in an image and/or audio mode.
Further, the outputting the content to be output includes:
obtaining feedback information for the target output content in a process of outputting the target output content;
if the feedback information represents that the content to be output meets the intention of a target receiver, automatically outputting the content to be output;
or,
and if the feedback information comprises operation information for jumping from the target output content to the content to be output, outputting the content to be output.
Further, the obtaining evaluation information for the output content in the process of outputting the content to be output and determining the target content at least according to the evaluation information includes:
receiving state information of a target receiver aiming at the output content in the process of outputting the content to be output is obtained, and the target content is determined at least according to the receiving state information;
or,
obtaining historical evaluation information and/or historical output parameters of the content which is output and/or the content which is not output in the process of outputting the content to be output, and determining target content at least according to the historical evaluation information and/or the historical output parameters.
Further, the method also comprises the following steps:
updating the target output content during or after the output of the content to be output.
Further, the updating the target output content includes:
and updating the target output content at least according to the receiving state information of the target receiver aiming at the output content and/or the output parameter of the output content in the process of outputting the content to be output.
Further, the method also comprises the following steps:
and sharing the updated target output content and/or the updated evaluation information to the target position.
A processing apparatus, comprising:
the determining module is used for determining the content to be output;
the first output module is used for outputting the target output content before outputting the content to be output under the condition of obtaining the target output content related to the content to be output, wherein the target output content is a part of the content to be output;
and the second output module is used for acquiring evaluation information aiming at output content in the process of outputting the content to be output under the condition that target output content related to the content to be output is not acquired, determining the target content at least according to the evaluation information, and processing the target content at least into target output content representing the content to be output.
An electronic device comprises a memory and a processor, wherein the memory stores a processing program, and the processing program can realize the steps of the processing method when being executed by the processor.
A storage medium storing at least one set of instructions;
the set of instructions is for being called and executing at least the processing method of any of the above.
As can be seen from the foregoing technical solutions, in the embodiments disclosed in the present application, content to be output is determined. If target output content associated with the content to be output is obtained, the target output content is output before the content to be output, where the target output content is a part of the content to be output. If target output content associated with the content to be output is not obtained, evaluation information for the output content in the process of outputting the content to be output is obtained, and target content is determined at least according to the evaluation information, so that at least the target content is processed into target output content representing the content to be output. With this scheme, when target output content of the content to be output is available, it can be output first; when it is not available, it can be generated from the evaluation information for the content that has been output. A part of the content to be output that is not merely its beginning thus serves as the target output content, or the target output content is generated from the evaluation information, so that the target output content represents the main content of the content to be output. The main content of the content to be output is made clear in advance, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a processing method disclosed in an embodiment of the present application;
FIG. 2 is a flow chart of a processing method disclosed in an embodiment of the present application;
FIG. 3 is a flow chart of a processing method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart of a processing method disclosed in an embodiment of the present application;
FIG. 5 is a flow chart of a processing method disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application discloses a processing method, a flow chart of which is shown in fig. 1, comprising the following steps:
step S11, determining the content to be output;
step S12, if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
step S13, if the target output content associated with the content to be output is not obtained, obtaining the evaluation information aiming at the output content in the process of outputting the content to be output, and determining the target content at least according to the evaluation information so as to process the target content into the target output content representing the content to be output.
The content to be output includes at least a video file, and may also be a picture file, a text file, a web page, or the like.
When the content to be output needs to be output or displayed, it is first determined whether the content to be output has associated target output content.
The target output content associated with the content to be output may be a part of the content to be output, specifically an abstract file or a thumbnail file of the content to be output. That is, before the content to be output is output, it is presented in the form of the target output content, through the abstract file or the thumbnail file, so that the user can visually determine the main content of the content to be output in advance.
Specifically, if target output content associated with the content to be output exists, the target output content is output before the content to be output, so that the user first learns a part of the content to be output through the target output content;
if no target output content associated with the content to be output exists, that is, the target output content cannot be obtained directly before the content to be output is output, evaluation information for the output content in the process of outputting the content to be output can be obtained, the target content is determined at least according to the evaluation information, and at least the target content is processed into target output content representing the content to be output.
In that case the content to be output is output directly, and evaluation information for the output content is obtained while the current device outputs the content to be output. Here the output content may be content of the content to be output that has already been output, content that is being output, or content that has not yet been output;
the evaluation information may be: in the process of outputting the content to be output by the device, the user performs operations on the device based on the content output by the device, such as: since the device can specify the evaluation information, the popularity information, the reception state information, and the like for the output content by evaluating the content currently output by the device, by performing a fast-forward operation on the content currently output by the device, or by repeating a playback operation, and the like, the operation or the information for the output content obtained based on the operation can be used as the evaluation information for the output content.
After the evaluation information is obtained, the target content is determined according to the evaluation information so that at least the target content is processed into target output content representing the content to be output. The target content may be determined from the content to be output, from the output content, or from both.
That is, after the evaluation information is obtained, if the evaluation information is an operation performed on the device, the content the device was outputting when the operation was performed is determined as the target content; in this case the target content may be the content being output at the moment of the operation, the content output within a predetermined range before that moment, or content not yet output. If the evaluation information is information about the output content derived from an operation performed on the device, the output content related to that information is determined as the target content, which may be content that has been output, content that has not been output, content that is being output, or both content that has been output and content that has not been output.
The target content is then processed into content that meets the requirements of the target output content, and is output as the target output content. Meeting the requirements may mean, for example, that the file size and the file format meet the requirements of the target output content.
For example, if the user repeatedly watches a certain number of frames while a video is being played, those frames can be determined as the target content of the played video; after the target content is processed into content whose frame count or duration meets the requirement of the target output content, the processed content can be determined as the target output content, as in the sketch below.
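As a minimal, illustrative sketch of this step, the snippet below turns evaluation information, here the frame ranges a user replayed, into candidate target output content whose length meets an assumed requirement. The function name, the `max_frames` limit, and the event format are assumptions for illustration, not definitions from this application.

```python
# Illustrative sketch only: derive target output content from replay-based
# evaluation information. Names and thresholds are assumed, not from the claims.
from collections import Counter

def build_target_output(replayed_ranges, max_frames=150):
    """Pick the most frequently replayed frames and trim them to max_frames."""
    counts = Counter()
    for start, end in replayed_ranges:        # each replay covers frames [start, end)
        counts.update(range(start, end))
    if not counts:
        return []                             # no evaluation information yet
    # Keep the frames with the highest replay counts, then restore playback order.
    top = sorted(counts, key=counts.get, reverse=True)[:max_frames]
    return sorted(top)

# Example: the user replayed frames 300-420 twice and 900-950 once.
clip = build_target_output([(300, 420), (300, 420), (900, 950)])
```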
After the target output content is obtained through the above scheme, the content to be output is presented through the determined target output content whenever it is not being output, so that what is displayed better matches what the user pays attention to.
Further, the evaluation information may be evaluation information that another device obtains for the output content in the process of outputting the content to be output.
When the current device needs to output the content to be output but no associated target output content is stored on the device, the target output content associated with the content to be output that is stored in the cloud can be obtained and directly used as the target output content of the content to be output on the current device; or evaluation information for the content to be output that is stored in the cloud is obtained and analyzed to determine the target content, which is then processed into the target output content of the content to be output. In this way the target output content can still be output before the current device outputs the content to be output, and the user can learn the key content of the content to be output in advance, before obtaining it in full.
The processing method disclosed in this embodiment determines content to be output; if target output content associated with the content to be output is obtained, the target output content is output before the content to be output, the target output content being a part of the content to be output; if it is not obtained, evaluation information for the output content in the process of outputting the content to be output is acquired, and target content is determined at least according to the evaluation information, so that the target content is processed into target output content representing the content to be output. With this scheme, when target output content of the content to be output exists it is output first, and when it does not exist it is generated from the evaluation information for the content that has been output. A part of the content to be output other than merely its beginning, or content generated from the evaluation information, therefore serves as the target output content, so that the target output content represents the main content of the content to be output, that main content is made clear in advance, and the user experience is improved.
The embodiment discloses a processing method, a flowchart of which is shown in fig. 2, and the processing method includes:
step S21, determining the content to be output;
step S22, if the target position of the content to be output is determined to have the target content, the target content is determined as the target output content related to the content to be output, and the target output content is output before the content to be output is output, wherein the target output content is a part of the content to be output;
step S23, if the target output content associated with the content to be output is not obtained, obtaining the evaluation information aiming at the output content in the process of outputting the content to be output, and determining the target content at least according to the evaluation information so as to process the target content into the target output content representing the content to be output.
If the target content exists at a target position of the content to be output, that is, the content to be output contains the target content, the target content is a part of the content to be output; it is directly cut out from the target position of the content to be output and used as the target output content.
The target position may be a start position, an end position, or a specific position other than the start position and the end position of the content to be output.
For example, if the content to be output is a video file, the target position may be a specific position such as the opening or the ending of the video file, the m1-th to n1-th frames of the video file (with n1 greater than or equal to m1), or a collection of non-consecutive frames in the video file, in which case there are multiple target positions;
if the content to be output is an audio file, the target position may be a specific position such as the prelude, the climax, or the ending of the audio file, or the m2-th to n2-th frames of the audio file, with n2 greater than or equal to m2;
if the content to be output is an album file, the target position may be a specific position such as a thumbnail, the first image, the last image, or an image at an indicated position in the file, or a collection of several images; in that case, if the images are consecutive the target position may be one or more positions, and if they are non-consecutive there are multiple target positions;
if the content to be output is a text file, the target position may be a specific position such as an illustration, a title, a topic, or an abstract in the content to be output, or one or more paragraphs or sentences of the text file, in which case there may be one or more target positions. A sketch of cutting target content from such positions is given below.
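The sketch below illustrates, under assumed data structures, how target content might be cut from a target position for the different content types above. The `content` object fields and the position formats are hypothetical and only for illustration.

```python
# Hypothetical helper: cut target content from a target position of the content
# to be output. The content fields and position formats are illustrative only.
def extract_target_content(content, content_type, position):
    if content_type == "video":
        m1, n1 = position                       # frames m1..n1, with n1 >= m1
        return content.frames[m1:n1 + 1]
    if content_type == "audio":
        m2, n2 = position                       # audio frames m2..n2, with n2 >= m2
        return content.samples[m2:n2 + 1]
    if content_type == "album":
        return [content.images[i] for i in position]      # one or more image indices
    if content_type == "text":
        return [content.paragraphs[i] for i in position]  # title/abstract/paragraph indices
    raise ValueError(f"unsupported content type: {content_type}")
```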
Further, after the target output content is obtained, the target output content needs to be output, which may specifically be:
obtaining attribute information of the content to be output, and determining output parameters of the target output content at least according to the attribute information so as to output the target output content according to the output parameters.
The attribute information of the content to be output may be the type of the content to be output, or a playing parameter of the content to be output such as the playing frame rate, duration, or playing speed. Correspondingly, the output parameters may be the output type or output mode, display brightness, volume, speed, and the like, where the output type may include moving picture, video, picture, text, dynamic text, and so on, and the target output content may be of any of these types.
For example, if the content to be output is a video file, the output parameters of the target output content may be: play the target output content as the video type and at the playing frame rate of the content to be output;
if the content to be output is a video file, the output parameters of the target output content may be: determining the time length of the target output content according to the playing time length of the content to be output, and playing the target output content with the time length at the playing speed of the content to be output;
if the content to be output is an audio, the target output content may be an audio type, a text type, or the like.
In addition, the output target output content may be:
and obtaining the current environment information, determining the output parameters of the target output content at least according to the current environment information, and outputting the target output content according to the output parameters.
The environment information may include: sound, brightness, location, etc., then the corresponding output parameters may include: volume, whether to display, display mode, etc.
Specifically, the following may be mentioned: and determining target output content according to the content to be output, and further determining output parameters of the target output content according to the current environment information, so that the target output content is output according to the output parameters.
For example: if the volume of the sound in the current environment is generally low, outputting the target output content at the low volume when outputting the target output content;
if the current environment indicates a relatively quiet place such as a library, then when the target output content contains audio information, the audio information may be withheld when the target output content is output and only the information other than audio, such as images, output; or the audio information may be converted into text information for output; or the audio information may be output only when an external earphone is available and withheld when it is not.
In addition, the output target output content may be:
obtaining attribute information and current environment information of the content to be output, determining output parameters of the target output content at least according to the attribute information and the current environment information, and outputting the target output content according to the output parameters.
The type of the target output content is determined according to the attribute information of the content to be output, the volume, display mode, and the like of the target output content are determined according to the current environment information, and the target output content is then output accordingly.
For example, if the content to be output is video content, its type is determined to be video, so the type of the target output content is also video. If the current environment information indicates a library, then when the video-type target output content is output, the audio in it is converted into a text file, and only the image file and the text file in the video are output, without synchronously outputting the audio file. A sketch combining attribute information and environment information into output parameters follows below.
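A hedged sketch of combining attribute information and current environment information into output parameters is given below; the environment fields, thresholds, and parameter names are assumptions for illustration, not part of the claims.

```python
# Illustrative only: derive output parameters from attribute information of the
# content to be output and from current environment information. Field names are assumed.
def decide_output_parameters(attributes, environment):
    params = {
        "type": attributes.get("type", "video"),          # output type follows the source type
        "frame_rate": attributes.get("frame_rate", 30),
    }
    if environment.get("location") == "library" or environment.get("quiet", False):
        params["audio"] = "as_text"                       # convert audio to text in quiet places
    else:
        # Match the output volume to the ambient volume level.
        params["volume"] = environment.get("ambient_volume", 0.5)
    return params

params = decide_output_parameters({"type": "video", "frame_rate": 24},
                                  {"location": "library"})
```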
in addition, the following may be also possible: and if the content to be output is video content, outputting the target output content at a target frame rate, wherein the target frame rate is the same as or different from the output frame rate of the content to be output.
A target frame rate may be set in advance, and as long as the target output content is a video content, a picture, or a moving picture file, the target frame rate may be used for outputting, and the target frame rate may be the same as or different from the frame rate of the content to be output.
For example: and if the content to be output is video content, determining that the target output content is also video content, directly selecting a target frame rate for output, wherein the target frame rate can be greater than the frame rate of the content to be output, so that a user can quickly determine the main output content of the content to be output through the target output content, and quickly determine whether the content to be output meets the receiving intention of a target receiver.
And if the content to be output is audio content, outputting the target output content in an audio and/or text mode.
If the content to be output is audio, then after the target output content is determined it may be output as audio, as text, or in both ways. Which way is chosen may be decided by the environment information, by the location information, by the user's historical usage information, or by the user's own selection.
And if the content to be output is image content, outputting the target output content in a picture or motion picture mode.
If the content to be output is an image, a single picture or multiple pictures may be determined as the target output content. When output, a single picture is output in the form of a picture, and multiple pictures may be output in the form of pictures or in the form of a moving picture.
If the content to be output is text content, the type of the target output content is determined to be the text type; if the current environment is determined to be dim, it may be decided not to output the target output content, or to display its characters in a dimmed font.
Of course, if the content to be output is text content, the target output content may also be output in an image and/or audio manner.
Specifically, when the content to be output is text content, the obtained target output content may be an image obtained by photographing or taking a screenshot of the text content, or specific content recognized from the text content, with the recognized content output in audio form so that the user does not have to read it word by word.
In addition, the output target output content may be:
the method comprises the steps of obtaining configuration information and/or historical use information of the electronic equipment, determining output parameters of target output content at least according to the configuration information and/or the historical use information, and outputting the target output content according to the output parameters.
The configuration information includes at least hardware configuration information and/or software configuration information. The software configuration information may be an application capable of outputting the target output content, or the resolution, memory, and other configuration required for outputting the target output content; the hardware configuration information may be the number of loudspeakers, a secondary screen, and the like. The historical usage information may include the user's usage-habit information, output parameter information of similar content, and so on.
For example: determining the type of the content to be output, acquiring output parameter information of content similar to the content to be output in the historical use information, and outputting target output content by the acquired output parameter information of the content similar to the content to be output;
or, after the target output content is determined, it is determined whether the hardware configuration information and software configuration information of the current electronic device meet the output requirements of the target output content; if not, the output parameters of the target output content are adjusted so that they match the hardware configuration information and historical usage information of the current electronic device;
or, after the target output content is determined, the output parameter information of content similar to the content to be output is first determined from the historical usage information, and it is then determined whether the hardware configuration information and software configuration information of the current electronic device can output the target output content with those output parameters. If they can, the target output content is output directly with the output parameters of the similar content. If they cannot, the output parameters of the similar content are adjusted according to the hardware configuration information and software configuration information of the current electronic device so that the adjusted parameters match the current device, and the target output content is then output with the adjusted output parameters, as in the sketch below.
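The sketch below illustrates, with assumed field names, how historical output parameters might be reconciled with the hardware and software configuration of the current electronic device before output; it is not a definitive implementation.

```python
# Illustrative sketch: adjust historical output parameters to fit the current
# device's hardware and software configuration. All field names are assumptions.
def fit_parameters_to_device(historical_params, hw_config, sw_config):
    params = dict(historical_params)
    # Clamp the resolution to what the current hardware and software can render.
    max_res = min(hw_config.get("max_resolution", 1080), sw_config.get("max_resolution", 1080))
    if params.get("resolution", 0) > max_res:
        params["resolution"] = max_res
    # Fall back to mono output when only one loudspeaker is present.
    if hw_config.get("speakers", 1) < 2:
        params["channels"] = 1
    return params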
The processing method disclosed in this embodiment determines a content to be output, outputs a target output content before outputting the content to be output if the target output content associated with the content to be output is obtained, where the target output content is a part of the content to be output, acquires evaluation information for the output content in the process of outputting the content to be output if the target output content associated with the content to be output is not obtained, and determines the target content at least according to the evaluation information, so as to process the target content into the target output content representing the content to be output. According to the scheme, when the target output content of the content to be output is available, the target output content can be output firstly, and when the target output content is unavailable, the target output content can be generated according to the evaluation information of the output content for outputting, so that part of the content to be output, but not the beginning part, is used as the target output content, or the target output content is generated according to the evaluation information, so that the target output content is the content capable of representing the main output content of the content to be output, the main content of the content to be output is clarified, and the user experience is improved.
Further, obtaining target output content associated with the content to be output may further include:
if the associated file exists in the target path, determining to obtain target output content; or, if partial contents in the contents to be output are determined to have the target identification, determining the contents with the target identification as target output contents associated with the contents to be output.
Wherein, the target path may include: the local path, the cloud path, the non-cloud path, and the like.
If the associated file of the content to be output exists locally on the electronic device, the associated file is directly determined as the target output content and acquired from the local device;
if the electronic device does not locally store the associated file of the content to be output, a search is made in the cloud path to which the content to be output belongs, that is, the location in the cloud where the content to be output is stored, and the associated file stored there together with the content to be output is determined as the target output content;
if the electronic device does not locally store the associated file and the associated file is not found in the cloud path to which the content to be output belongs, the associated file is searched for in a non-affiliated cloud path, that is, in cloud locations where the content to be output is not stored, and is used as the target output content of the content to be output.
Of course, it may also be that an associated file of the content to be output is stored locally on the electronic device but the content of that file cannot accurately express the content to be output; in that case a search can still be made in the cloud path of the content to be output, so that the target output content is content that can express the content to be output. A sketch of this lookup chain follows below.
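A sketch of this local-then-cloud lookup chain is shown below, assuming dict-like stores with a `get` method; the store objects and names are hypothetical.

```python
# Illustrative lookup chain for an associated file: local storage first, then the
# cloud path the content belongs to, then non-affiliated cloud paths.
def find_target_output(content_id, local_store, own_cloud, other_clouds):
    associated = local_store.get(content_id)
    if associated is None:
        associated = own_cloud.get(content_id)      # cloud path the content belongs to
    if associated is None:
        for cloud in other_clouds:                  # non-affiliated cloud paths
            associated = cloud.get(content_id)
            if associated is not None:
                break
    return associated                               # None if no associated file exists
```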
In addition, a part of the content having a target identifier may mean that a target identifier is set on a specific content part, where the specific content may be a key frame, a key speech segment, a key chapter, a key image, and the like.
The key frame, key speech segment, key chapter, and key image may be set based on evaluation information, specifically based on user evaluations of the content, content popularity, display parameters, volume level, and emotional intensity.
For example: and determining a key chapter according to the evaluation of the content, and setting a target identifier at the position of the key chapter, so that the key chapter with the target identifier is directly selected as target output content when the target output content is determined.
Further, the target output content associated with the content to be output is obtained in at least one of the above forms. That is, it may be: if target content exists at the target position of the content to be output, determining the target content as the target output content associated with the content to be output; or: if an associated file exists in the target path, determining that the target output content is obtained; or: if part of the content to be output is determined to have a target identifier, determining the content with the target identifier as the target output content associated with the content to be output; or any two or three of the above modes, which is not limited here.
In addition, when the target output content is obtained in any of the above manners, the following manners may be adopted when outputting the target output content:
obtaining attribute information of the content to be output, determining an output parameter of the target output content at least according to the attribute information, and outputting the target output content according to the output parameter;
or,
obtaining current environment information, determining an output parameter of the target output content at least according to the current environment information, and outputting the target output content according to the output parameter;
or,
obtaining attribute information and current environment information of the content to be output, determining output parameters of target output content at least according to the attribute information and the current environment information, and outputting the target output content according to the output parameters;
or,
the method comprises the steps of obtaining configuration information and/or historical use information of the electronic equipment, determining output parameters of target output content at least according to the configuration information and/or the historical use information, and outputting the target output content according to the output parameters.
The specific way of outputting the target output content is consistent with the above embodiment, and is not described herein again.
With the processing method disclosed in this embodiment, no matter in which way the target output content associated with the content to be output is obtained and in which way it is output, the target output content can be output before the content to be output once the content to be output has been determined. Content that can represent the content to be output is thereby output in advance, the main content of the content to be output is made clear beforehand, and the user experience is improved.
The embodiment discloses a processing method, a flowchart of which is shown in fig. 3, and the processing method includes:
step S31, determining the content to be output;
step S32, if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
step S33, obtaining feedback information for the target output content in the process of outputting the target output content;
step S34, if the feedback information represents that the content to be output meets the intention of the target receiver, the content to be output is automatically output;
step S35, if the target output content associated with the content to be output is not obtained, obtaining the evaluation information aiming at the output content in the process of outputting the content to be output, and determining the target content at least according to the evaluation information so as to process the target content into the target output content representing the content to be output.
If the target output content is obtained, it may be output directly before the content to be output; that is, the content to be output may be output directly after the target output content, or the user may manually decide after the target output content whether to output the content to be output, or the device may automatically decide, after outputting the target output content, whether to output the content to be output.
Specifically, it is determined whether feedback information for the target output content is obtained while the target output content is being output. If no feedback information is obtained, the content to be output is output directly after the target output content, or the user manually decides whether to output it. If feedback information for the target output content is obtained, it may first be determined whether the feedback information indicates that the content to be output meets the intention of the target recipient.
During the output of the target output content, the user performs operations on it, from which feedback information for the target output content is obtained. The operations the user performs on the target output content may include fast forward, slow play, pause, playback, repeated play, and the like, and may further include the user's attention value and attention duration for different portions of the target output content.
From the user's feedback information for the target output content it can be determined whether the content to be output needs to be output automatically, that is, whether the content to be output represented by the target output content is the content the user wants to obtain. If it is, the content to be output continues to be output; if the feedback information directly shows that it is not, the content to be output need not be output, which saves the user's time and improves the user experience.
For example, if during the output of the target output content the user's attention stays on the target output content, or the duration of the user's attention to it reaches a preset duration, it can be determined that the feedback information indicates that the content to be output meets the user's intention, that is, the user intends to watch the content to be output, and the content to be output is then output automatically;
or,
during the output of the target output content, the user pauses it and, from the pause until playback resumes, keeps their attention on the target output content; this indicates that the feedback information shows the content to be output meets the user's intention, that is, the user intends to watch it, and the content to be output is then output automatically;
or,
during the output of the target output content, the user fast-forwards the target output content a preset number of times; it can then be determined that the feedback information indicates that the content to be output does not meet the user's intention, the user does not intend to watch it, and the content to be output need not be output automatically after the target output content has been output;
or, during the output of the target output content, the user takes a screenshot of at least one part of the target output content; it can then be determined that the feedback information indicates that the content to be output meets the user's intention, that is, the user intends to watch it, and the content to be output is then output automatically. A sketch of such a decision follows below.
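A rough sketch of such a decision, with assumed feedback fields and thresholds, is given below; it is only one possible way to interpret the feedback information.

```python
# Illustrative only: decide whether to output the content to be output automatically
# from feedback gathered while the target output content is playing.
def should_auto_output(feedback):
    if feedback.get("fast_forward_count", 0) >= 3:    # repeated fast-forwarding: not interested
        return False
    if feedback.get("screenshot_taken", False):       # a screenshot suggests interest
        return True
    if feedback.get("attention_seconds", 0) >= 10:    # sustained attention on the preview
        return True
    return False
```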
In addition, it may also be that, if the feedback information includes operation information for the user to jump from the target output content to the content to be output, the content to be output is output.
That is, during the output of the target output content the user performs one or more operations on it, including at least one preset operation, where the preset operation is an operation for jumping directly from the target output content to the content to be output. When the user performs the preset operation on some part of the target output content, the target output content is no longer output, either at the moment the preset operation is detected or within a preset period after it is detected, and the content to be output is output directly instead, completing the jump from the target output content to the content to be output.
Specifically, when the feedback information includes operation information for jumping from the target output content to the content to be output, the jump may go directly to the beginning of the content to be output. That is, when the preset operation is detected, whether the content to be output is output at the moment of detection or within a preset period afterwards, output starts from the beginning of the content to be output; as long as the operation jumps to the content to be output, the whole of the content to be output is output;
alternatively, when the feedback information includes operation information for jumping from the target output content to the content to be output, the portion of the target output content that has been output at the moment the preset operation is performed is determined, and the portion of the content to be output that matches it is determined. When jumping from the target output content to the content to be output, output starts directly from that matching portion, that is, only that portion and/or the content after it is output rather than the complete content to be output. The jump triggered by the preset operation is thus more targeted, and content the target recipient will not receive is not output. A sketch of this jump handling follows below.
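The sketch below illustrates the jump handling, assuming the target output content records, for each of its frames, the source frame it was cut from; the mapping and names are hypothetical.

```python
# Illustrative only: choose where to start outputting the content to be output
# when the user jumps out of the target output content.
def handle_jump(preview_frame_to_source, current_preview_frame, jump_to_matching_part=True):
    """Return the frame of the content to be output at which output should start."""
    if not jump_to_matching_part:
        return 0                                  # start from the beginning of the content to be output
    # Start from the source frame matching the part of the preview already output.
    return preview_frame_to_source.get(current_preview_frame, 0)

# Example: the user jumps while preview frame 40 (cut from source frame 1200) is showing.
start_frame = handle_jump({40: 1200}, current_preview_frame=40)
```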
The processing method disclosed in this embodiment determines content to be output; if target output content associated with the content to be output is obtained, the target output content is output before the content to be output, the target output content being a part of the content to be output; if it is not obtained, evaluation information for the output content in the process of outputting the content to be output is acquired, and target content is determined at least according to the evaluation information, so that the target content is processed into target output content representing the content to be output. With this scheme, when target output content of the content to be output exists it is output first, and when it does not exist it is generated from the evaluation information for the content that has been output. A part of the content to be output other than merely its beginning, or content generated from the evaluation information, therefore serves as the target output content, so that the target output content represents the main content of the content to be output, that main content is made clear in advance, and the user experience is improved.
The present embodiment discloses a processing method, a flowchart of which is shown in fig. 4, and includes:
step S41, determining the content to be output;
step S42, if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
step S43, if the target output content associated with the content to be output is not obtained, obtaining the receiving status information of the target receiver for the output content in the process of outputting the content to be output, and determining the target content at least according to the receiving status information, so as to process at least the target content into the target output content representing the content to be output.
When the content to be output is determined, if no target output content associated with it is obtained, whether from the local path or from the cloud path, no target output content is output before the content to be output; the content to be output is output directly.
In addition, during the output of the content to be output, the receiving state of the target receiver needs to be analyzed and determined, so that target content is determined according to the receiving state and target output content associated with the content to be output is generated; the target output content can then be output the next time before the content to be output is output.
Specifically, during the output of the content to be output, the target receiver, that is, the user or the device receiving the content to be output, has receiving state information, such as: whether the target receiver receives the content to be output in full; whether it is received at normal speed; the target receiver's attention value while receiving it; the number of times and the duration for which the target receiver pays attention to parts of the content to be output; and whether the target receiver pauses, plays repeatedly, plays back, speeds up, or skips during reception.
Based on one or more of these receiving states, the final receiving state information of the target receiver for the content to be output is obtained. From this, the part of the content to be output to which the target receiver pays most attention is determined and taken as the target content, and the target content is processed into the target output content. The target output content serves as an abstract file or thumbnail that can represent the content to be output and reflects the target receiver's attention to it.
For example: the content to be output is a video content, which does not have associated target output content, so that during the process of outputting the video content, the user watches the video content, and during the watching process, the user may perform fast forward operation, pause operation, slow play operation, playback operation, screenshot operation, user's sight line leaving the output area of the video content, user's sight line leaving the area capable of watching the video content, and the like, which all affect the final receiving state information of the content to be output by the end user.
If the user fast-forwards while watching the video, the frames covered by the fast-forward operation are not content the user cares about and will not appear in the target output content; if the user plays the segment between points A and B multiple times, that segment can be determined as content the user cares about and may appear in the target output content.
When determining the target output content based on the receiving state information of the target receiver for the output content, all operations performed by the user on the output content can be converted into heat information for the different parts of the output content; the part of the complete content to be output with the highest heat information is then selected as the target output content, or the parts whose heat information exceeds a preset value are selected as the target output content.
For example, the electronic device plays a video. During playback, each frame of video content that is played normally receives heat information of 1; each frame in a skipped segment receives heat information of 0; if the segment between points A and B is played multiple times, the heat information of each frame between A and B increases by 1 with each play; if a fast-forward operation is performed between points C and D, the heat information of each frame between C and D increases by 0.5; and if playback pauses on a certain frame, the heat information of that frame increases by 1. Once the complete content to be output has been output, the heat information of each frame is counted, and the part with the highest heat information value is determined as the target output content.
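The frame-level bookkeeping in this example can be sketched in a few lines of Python. This is only an illustrative implementation under assumed inputs: the playback layer is assumed to report which frame ranges were played normally, which were covered by fast-forward, and where pauses occurred, and the weights (1 and 0.5) follow the example above while the summary window length is chosen arbitrarily.

```python
from typing import Dict, List

PLAY_WEIGHT = 1.0          # heat added to each normally played frame
FAST_FORWARD_WEIGHT = 0.5  # heat added to frames seen only while fast-forwarding
PAUSE_WEIGHT = 1.0         # extra heat for the frame the user paused on

def accumulate_heat(total_frames: int,
                    plays: List[range],          # frame ranges played at normal speed
                    fast_forwards: List[range],  # frame ranges covered by fast-forward
                    pauses: List[int]) -> Dict[int, float]:
    """Skipped frames never appear in any range, so their heat stays 0."""
    heat = {i: 0.0 for i in range(total_frames)}
    for r in plays:        # a replayed segment appears once per play, adding 1 each time
        for i in r:
            heat[i] += PLAY_WEIGHT
    for r in fast_forwards:
        for i in r:
            heat[i] += FAST_FORWARD_WEIGHT
    for i in pauses:
        heat[i] += PAUSE_WEIGHT
    return heat

def hottest_window(heat: Dict[int, float], window: int = 250) -> range:
    """Pick the contiguous window with the highest total heat as the candidate target output content."""
    n = len(heat)
    best_start, best_score = 0, float("-inf")
    for start in range(max(1, n - window + 1)):
        score = sum(heat.get(start + k, 0.0) for k in range(window))
        if score > best_score:
            best_start, best_score = start, score
    return range(best_start, best_start + window)

# Example: a 3000-frame video; frames 1500-1800 were only fast-forwarded,
# the segment 400-700 (the "A-B" part) was replayed twice more, and the user paused on frame 520.
heat = accumulate_heat(
    total_frames=3000,
    plays=[range(0, 1500), range(1800, 3000), range(400, 700), range(400, 700)],
    fast_forwards=[range(1500, 1800)],
    pauses=[520],
)
print(hottest_window(heat))  # a window inside the replayed A-B segment
```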
In addition, the process of determining the target content may further include:
obtaining historical evaluation information and/or historical output parameters of the output content and/or the non-output content in the process of outputting the content to be output, and determining the target content at least according to the historical evaluation information and/or the historical output parameters.
After the content to be output is determined, if there is no target output content associated with it, the history records need to be searched for historical evaluation information and/or historical output parameters generated during previous outputs of the content.
The searched record may be a cloud record of previous outputs of the content to be output retrieved from the cloud, or a locally stored record of previous outputs of the content to be output.
The historical evaluation information may be comment information, bullet-screen information, heat information and the like for different parts of the content to be output; the historical output parameters may be parameters such as the number of outputs, output duration, playback speed and output mode for different parts of the content to be output.
From the historical evaluation information and/or historical output parameters in the history records, the attention information of different users for different parts of the content to be output is determined, and the target output content is determined accordingly. In this way, the principal part of the content to be output is obtained through a preliminary analysis of different users' attention to its different parts, and the target output content can be used to determine whether the content to be output matches the user's receiving intention.
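One way to fold such history records into a decision is to score each segment of the content by a weighted combination of its historical signals. The sketch below is an assumption-laden illustration: the segment granularity, the weights and the field names are not specified by this disclosure and are chosen only to make the idea concrete.

```python
from typing import Dict

def attention_from_history(comments: Dict[int, int],     # segment index -> comment count
                           danmaku: Dict[int, int],       # segment index -> bullet-screen count
                           play_counts: Dict[int, int],   # segment index -> times output
                           w_comment: float = 2.0,
                           w_danmaku: float = 1.0,
                           w_play: float = 0.5) -> Dict[int, float]:
    """Combine historical evaluation information and output parameters into per-segment attention."""
    segments = set(comments) | set(danmaku) | set(play_counts)
    return {
        s: w_comment * comments.get(s, 0)
           + w_danmaku * danmaku.get(s, 0)
           + w_play * play_counts.get(s, 0)
        for s in segments
    }

scores = attention_from_history(
    comments={3: 12, 7: 2},
    danmaku={3: 40, 4: 15},
    play_counts={3: 900, 4: 450, 7: 300},
)
target_segment = max(scores, key=scores.get)  # candidate target content
print(target_segment, scores[target_segment])
```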
Specifically, when no target output content associated with the content to be output exists on the local device, related records are searched on the local device or in the cloud to generate the target output content. After the content to be output has been output following the target output content generated from the history records, the current target output content is determined according to the user's output evaluation information and output parameters for the content to be output, so that the target output content generated from the history records can be replaced or updated.
The processing method disclosed in this embodiment determines content to be output; if target output content associated with the content to be output is obtained, it outputs the target output content before outputting the content to be output, where the target output content is a part of the content to be output; if no associated target output content is obtained, it acquires evaluation information for the output content in the process of outputting the content to be output and determines the target content at least according to that evaluation information, so as to process the target content into target output content representing the content to be output. With this scheme, when target output content for the content to be output exists, it can be output first, and when it does not exist, it can be generated from the evaluation information for the output content. A representative part of the content to be output, rather than its beginning, therefore serves as the target output content, or the target output content is generated from the evaluation information, so that the target output content represents the main content of the content to be output, clarifies that main content, and improves the user experience.
The present embodiment discloses a processing method, a flowchart of which is shown in fig. 5, and the processing method includes:
step S51, determining the content to be output;
step S52, if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
step S53, if the target output content associated with the content to be output is not obtained, obtaining evaluation information aiming at the output content in the process of outputting the content to be output, determining the target content at least according to the evaluation information, and at least processing the target content into the target output content representing the content to be output;
step S54, updating the target output content during or after outputting the content to be output.
The target output content here is the content into which the target content, determined from evaluation information for already-output or not-yet-output content, was processed during a previous output of the content to be output. During or after the current output of the content to be output, evaluation information for the output or non-output content continues to be generated, and this newly generated evaluation information may differ from the evaluation information on which the existing target output content was based, which can change the target output content that should ultimately be obtained. Therefore, during or after each output of the content to be output, the evaluation information for the output or non-output content is recorded and the target output content is updated, so that the finally obtained target output content matches the current user's receiving intention while remaining consistent with the historical receiving intention.
For example, the content to be output is a video, and the directly obtained target output content takes the 10th to 20th frames of the video as its main content. The content to be output is output after the target output content. During or after that output, the video receiver produces evaluation information based on the output content, which may be a comment on a certain frame or an operation such as fast-forwarding or pausing. The evaluation information generated by the current output can affect the target output content; after the target output content is updated based on this current evaluation information, the finally obtained target output content may take the 13th to 22nd frames of the video as its main content.
Further, the updating of the target output content may be: updating the target output content at least according to the receiving state information of the target receiver for the output content and/or the output parameters used to output the content to be output, in the process of outputting the content to be output.
The receiving state information of the target receiver for the output content includes whether the target receiver completely receives the content to be output during output, the target receiver's attention value during output, and the number of times and the duration for which the target receiver pays attention to particular parts of the content during output. The output parameters for outputting the content to be output include whether the target receiver receives the content at normal speed, and whether operations such as pausing, repeated playing, playback, acceleration or skipping occur during reception.
For example, the historical heat information of each frame of the video to be output is first determined from the history records. If a play operation is performed during playback, the heat information of each played frame increases by 1; if a skip operation is performed, the heat information of each skipped frame increases by 0; if the segment between points A and B is played multiple times, the heat information of each frame between A and B increases by 1 with each play; if a fast-forward operation is performed between points C and D, the heat information of each frame between C and D increases by 0.5; and if playback pauses on a certain frame, the heat information of that frame increases by 1. Once the complete content to be output has been output, the updated heat information of each frame is counted on top of the historical heat information, and the part with the highest heat information value is re-determined as the target output content.
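Under the same assumptions as the earlier heat sketch, the update described here amounts to adding the heat gathered in the current output session onto the historical per-frame heat and then re-selecting the hottest part; a minimal illustration follows.

```python
from typing import Dict

def update_heat(historical: Dict[int, float],
                current_session: Dict[int, float]) -> Dict[int, float]:
    """Add the current session's per-frame heat on top of the historical heat."""
    merged = dict(historical)
    for frame, value in current_session.items():
        merged[frame] = merged.get(frame, 0.0) + value
    return merged

# The hottest window of `merged` would then be re-selected as the new target output
# content, e.g. with a sliding-window search like the one sketched earlier.
merged = update_heat({0: 2.0, 1: 2.0, 2: 0.5}, {0: 1.0, 2: 1.0})
print(merged)  # {0: 3.0, 1: 2.0, 2: 1.5}
```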
In addition, the update of the target output content may be: updating the target output content based on the receiving state information and/or the output parameter information gathered in the process of outputting the target output content itself.
For example, the target output content consists of 20 frames, and two of them are skipped while the target output content is being output. After the output finishes, the heat information of the two skipped frames can each be reduced by 1 relative to their historical heat information, or the heat information of each of the 18 frames other than the two skipped frames can be increased by 1, thereby updating the evaluation information of the target output content. The target output content can then be updated based on this updated evaluation information; for example, if the heat information of the two skipped frames falls below a preset heat value, they are deleted from the target output content, so that the heat information of every frame remaining in the target output content stays above the preset value.
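The pruning step in this example can be expressed directly: any frame of the current target output content whose updated heat falls below the preset value is dropped. The threshold and frame indices below are illustrative assumptions.

```python
from typing import Dict, List

PRESET_HEAT = 1.0  # assumed preset heat value

def prune_target(target_frames: List[int],
                 heat: Dict[int, float],
                 threshold: float = PRESET_HEAT) -> List[int]:
    """Keep only frames whose heat stays at or above the preset value."""
    return [f for f in target_frames if heat.get(f, 0.0) >= threshold]

# A 20-frame target output content in which two skipped frames dropped to heat 0.5.
heat = {f: 2.0 for f in range(100, 120)}
heat[104] = 0.5
heat[110] = 0.5
print(prune_target(list(range(100, 120)), heat))  # frames 104 and 110 are removed
```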
Further, the method can also comprise the following steps:
and sharing the updated target output content and/or the updated evaluation information to the target position.
The target position may be a specific website, another terminal or another user, so that each user or the cloud can obtain the attention information of different users for the same content to be output; this makes it convenient for users or the cloud to analyze different types of users, or different types of content to be output, based on the differences in that attention information.
In addition, the target position may be a specific position of the content to be output, and the updated target output content may be shared to that specific position, so that the target output content associated with the content to be output is content that has been updated according to the target receiver's attention information for the content to be output or for the target output content.
The processing method disclosed in this embodiment determines content to be output; if target output content associated with the content to be output is obtained, it outputs the target output content before outputting the content to be output, where the target output content is a part of the content to be output; if no associated target output content is obtained, it acquires evaluation information for the output content in the process of outputting the content to be output and determines the target content at least according to that evaluation information, so as to process the target content into target output content representing the content to be output. With this scheme, when target output content for the content to be output exists, it can be output first, and when it does not exist, it can be generated from the evaluation information for the output content. A representative part of the content to be output, rather than its beginning, therefore serves as the target output content, or the target output content is generated from the evaluation information, so that the target output content represents the main content of the content to be output, clarifies that main content, and improves the user experience.
The embodiment discloses a processing apparatus, a schematic structural diagram of which is shown in fig. 6, including:
a determination module 61, a first output module 62 and a second output module 63.
The determining module 61 is configured to determine content to be output;
the first output module 62 is configured to, in a case that a target output content associated with the content to be output is obtained, output the target output content before outputting the content to be output, where the target output content is a part of the content to be output;
the second output module 63 is configured to, when target output content associated with the content to be output is not obtained, acquire evaluation information for the output content in the process of outputting the content to be output, determine the target content at least according to the evaluation information, and process at least the target content into target output content representing the content to be output.
Further, the first output module obtaining the target output content associated with the content to be output includes at least one of the following:
if the target content exists at the target position of the content to be output, the first output module determines the target content as the target output content associated with the content to be output;
if the associated file exists in the target path, the first output module determines to obtain target output content;
and if the partial content in the content to be output is determined to have the target identification, determining the content with the target identification as the target output content associated with the content to be output.
Further, the first output module outputs the target output content, including:
the first output module obtains attribute information of the content to be output, determines output parameters of the target output content at least according to the attribute information, and outputs the target output content according to the output parameters;
or,
the first output module obtains current environment information, determines output parameters of target output content at least according to the current environment information, and outputs the target output content according to the output parameters;
or,
the first output module obtains attribute information and current environment information of the content to be output, determines output parameters of target output content at least according to the attribute information and the current environment information, and outputs the target output content according to the output parameters;
or,
the first output module obtains configuration information and/or historical use information of the electronic equipment, determines output parameters of target output content at least according to the configuration information and/or the historical use information, and outputs the target output content according to the output parameters.
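As a rough illustration of how the alternatives listed above might combine, the function below derives output parameters for the target output content from a content attribute, an environment reading and a device configuration value. Every parameter name and threshold here is an assumption made for the sketch, not something fixed by this disclosure.

```python
def choose_output_parameters(duration_s: float,        # attribute information of the content
                             ambient_noise_db: float,  # current environment information
                             screen_width_px: int):    # device configuration information
    """Return illustrative output parameters for the target output content."""
    return {
        # longer source content gets a longer summary, capped at 30 seconds
        "summary_length_s": min(30.0, max(5.0, duration_s * 0.02)),
        # a noisy environment favours muted playback with subtitles
        "muted": ambient_noise_db > 70.0,
        "resolution": "1080p" if screen_width_px >= 1920 else "720p",
    }

params = choose_output_parameters(duration_s=1800.0,
                                  ambient_noise_db=45.0,
                                  screen_width_px=2560)
print(params)
```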
Further, the first output module outputs the target output content, including:
if the content to be output is video content, outputting the target output content at a target frame rate, wherein the target frame rate is the same as or different from the output frame rate of the content to be output;
if the content to be output is audio content, outputting the target output content in an audio and/or text mode;
if the content to be output is image content, outputting the target output content in a picture or moving picture mode;
and if the content to be output is text content, outputting the target output content in an image and/or audio mode.
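The type-dependent forms listed above amount to a simple dispatch on the type of the content to be output; a minimal sketch follows, with the concrete form strings chosen only for illustration.

```python
def output_form(content_type: str) -> str:
    """Map the type of the content to be output to an output form for the target output content."""
    forms = {
        "video": "clip at a target frame rate",   # may equal or differ from the source frame rate
        "audio": "short audio and/or text summary",
        "image": "thumbnail picture or animated picture",
        "text":  "image and/or audio summary",
    }
    return forms.get(content_type, "unsupported type")

print(output_form("video"))
```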
Further, the first output module outputs the content to be output, including:
the first output module obtains feedback information aiming at the target output content in the process of outputting the target output content;
if the feedback information represents that the content to be output meets the intention of the target receiver, the content to be output is automatically output;
or,
and outputting the content to be output if the feedback information includes operation information for jumping from the target output content to the content to be output.
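A compact way to read the two branches above: the full content is output either when the feedback gathered while the target output content plays indicates the receiver's intention is met, or when the feedback contains an explicit jump operation. The feedback fields and the 0.9 threshold in the sketch are assumptions.

```python
def should_output_full_content(feedback: dict) -> bool:
    """Decide whether to output the content to be output based on feedback on the target output content."""
    if feedback.get("jump_to_full_content"):       # explicit jump operation in the feedback
        return True
    # e.g. the intention is considered met when the summary was watched nearly to the end
    return feedback.get("watched_ratio", 0.0) >= 0.9

print(should_output_full_content({"watched_ratio": 0.95}))  # True
```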
Further, the second output module obtains evaluation information for the output content in the process of outputting the content to be output, and determines the target content at least according to the evaluation information, including:
the second output module obtains receiving state information of a target receiver aiming at the output content in the process of outputting the content to be output, and determines the target content at least according to the receiving state information;
or,
the second output module acquires historical evaluation information and/or historical output parameters of the output content and/or the non-output content in the process of outputting the content to be output, and determines the target content at least according to the historical evaluation information and/or the historical output parameters.
Further, the processing device may further include:
and the updating module is used for updating the target output content in the process of outputting the content to be output or after outputting the content to be output.
Further, the updating module updates the target output content, including:
the updating module updates the target output content at least according to the receiving state information of the target receiver for the output content and/or the output parameters for outputting the content to be output, in the process of outputting the content to be output.
Further, the processing device may further include:
and the sharing module is used for sharing the updated target output content and/or the updated evaluation information to the target position.
The processing apparatus disclosed in this embodiment is implemented based on the processing method disclosed in the above embodiment, and a specific implementation manner thereof is not described herein again.
The processing device disclosed in this embodiment determines content to be output; if target output content associated with the content to be output is obtained, it outputs the target output content before outputting the content to be output, where the target output content is a part of the content to be output; if no associated target output content is obtained, it acquires evaluation information for the output content in the process of outputting the content to be output and determines the target content at least according to that evaluation information, so as to process the target content into target output content representing the content to be output. With this scheme, when target output content for the content to be output exists, it can be output first, and when it does not exist, it can be generated from the evaluation information for the output content. A representative part of the content to be output, rather than its beginning, therefore serves as the target output content, or the target output content is generated from the evaluation information, so that the target output content represents the main content of the content to be output, clarifies that main content, and improves the user experience.
This embodiment discloses an electronic device including a memory and a processor, wherein the memory stores a processing program which, when executed by the processor, implements the steps of the processing method described above.
This embodiment discloses a storage medium storing at least one set of instructions to be invoked and executed to perform at least the processing method of any of the embodiments above.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing, comprising:
determining content to be output;
if the target output content associated with the content to be output is obtained, outputting the target output content before outputting the content to be output, wherein the target output content is a part of the content to be output;
and/or if the target output content related to the content to be output is not obtained, obtaining evaluation information aiming at the output content in the process of outputting the content to be output, and determining the target content at least according to the evaluation information so as to process the target content into the target output content representing the content to be output.
2. The method of claim 1, wherein the obtaining of the target output content associated with the content to be output comprises at least one of:
if the target position of the content to be output is determined to have the target content, determining the target content as the target output content associated with the content to be output;
or,
if the associated file exists in the target path, determining to obtain the target output content;
or,
and if the partial content in the content to be output is determined to have the target identification, determining the content with the target identification as the target output content associated with the content to be output.
3. The method of claim 1, wherein the outputting the target output content comprises:
obtaining attribute information of the content to be output, determining an output parameter of the target output content at least according to the attribute information, and outputting the target output content according to the output parameter;
or,
obtaining current environment information, determining an output parameter of the target output content at least according to the current environment information, and outputting the target output content according to the output parameter;
or,
obtaining attribute information and current environment information of the content to be output, determining output parameters of the target output content at least according to the attribute information and the current environment information, and outputting the target output content according to the output parameters;
or,
obtaining configuration information and/or historical use information of the electronic equipment, and determining output parameters of the target output content at least according to the configuration information and/or the historical use information so as to output the target output content according to the output parameters.
4. The method of claim 1 or 3, wherein the outputting the target output content comprises:
if the content to be output is video content, outputting the target output content at a target frame rate, wherein the target frame rate is the same as or different from the output frame rate of the content to be output;
or,
if the content to be output is audio content, outputting the target output content in an audio and/or text mode;
or,
if the content to be output is image content, outputting the target output content in a picture or moving picture mode;
or,
and if the content to be output is text content, outputting the target output content in an image and/or audio mode.
5. The method of claim 1, wherein the outputting the content to be output comprises:
obtaining feedback information for the target output content in a process of outputting the target output content;
if the feedback information represents that the content to be output meets the intention of a target receiver, automatically outputting the content to be output;
or,
and if the feedback information comprises operation information for jumping from the target output content to the content to be output, outputting the content to be output.
6. The method according to claim 1, wherein the obtaining of the evaluation information for the output content in the process of outputting the content to be output and the determining of the target content according to at least the evaluation information comprise:
receiving state information of a target receiver aiming at the output content in the process of outputting the content to be output is obtained, and the target content is determined at least according to the receiving state information;
or,
obtaining historical evaluation information and/or historical output parameters of the content which is output and/or the content which is not output in the process of outputting the content to be output, and determining target content at least according to the historical evaluation information and/or the historical output parameters.
7. The method of claim 1 or 6, further comprising:
updating the target output content during or after the output of the content to be output.
8. The method of claim 7, wherein the updating the target output content comprises:
and updating the target output content at least according to the receiving state information of the target receiver aiming at the output content and/or the output parameter of the output content in the process of outputting the content to be output.
9. The method of claim 7, further comprising:
and sharing the updated target output content and/or the updated evaluation information to the target position.
10. A processing apparatus, comprising:
the determining module is used for determining the content to be output;
the first output module is used for outputting the target output content before outputting the content to be output under the condition of obtaining the target output content related to the content to be output, wherein the target output content is a part of the content to be output;
and the second output module is used for acquiring evaluation information aiming at output content in the process of outputting the content to be output under the condition that target output content related to the content to be output is not acquired, determining the target content at least according to the evaluation information, and processing the target content at least into target output content representing the content to be output.
CN202110339833.0A 2021-03-30 2021-03-30 Processing method and device Pending CN113051233A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110339833.0A CN113051233A (en) 2021-03-30 2021-03-30 Processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110339833.0A CN113051233A (en) 2021-03-30 2021-03-30 Processing method and device

Publications (1)

Publication Number Publication Date
CN113051233A true CN113051233A (en) 2021-06-29

Family

ID=76516455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110339833.0A Pending CN113051233A (en) 2021-03-30 2021-03-30 Processing method and device

Country Status (1)

Country Link
CN (1) CN113051233A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104936032A (en) * 2015-06-03 2015-09-23 北京百度网讯科技有限公司 Method and device for playing network video
CN106488300A (en) * 2016-10-27 2017-03-08 广东小天才科技有限公司 Video content viewing method and device
CN106713964A (en) * 2016-12-05 2017-05-24 乐视控股(北京)有限公司 Method of generating video abstract viewpoint graph and apparatus thereof
CN109587578A (en) * 2018-12-21 2019-04-05 麒麟合盛网络技术股份有限公司 The processing method and processing device of video clip
CN110798747A (en) * 2019-09-27 2020-02-14 咪咕视讯科技有限公司 Video playing method, electronic equipment and storage medium
CN110650375A (en) * 2019-10-18 2020-01-03 腾讯科技(深圳)有限公司 Video processing method, device, equipment and storage medium
CN111694984A (en) * 2020-06-12 2020-09-22 百度在线网络技术(北京)有限公司 Video searching method and device, electronic equipment and readable storage medium
CN112231516A (en) * 2020-09-29 2021-01-15 北京三快在线科技有限公司 Training method of video abstract generation model, video abstract generation method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627363A (en) * 2021-08-13 2021-11-09 百度在线网络技术(北京)有限公司 Video file processing method, device, equipment and storage medium
CN113627363B (en) * 2021-08-13 2023-08-15 百度在线网络技术(北京)有限公司 Video file processing method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination