CN109977239B - Information processing method and electronic equipment


Info

Publication number: CN109977239B (application CN201910254420.5A)
Authority: CN (China)
Prior art keywords: information, local data, format, multimedia data, data
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109977239A
Inventor: 刘刊
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority to CN201910254420.5A
Publication of CN109977239A
Application granted
Publication of CN109977239B


Abstract

The application provides an information processing method, which comprises the following steps: obtaining input information in a first format; processing the input information and searching multimedia data in a second format for local data matching the input information, wherein the first format and the second format are different; and determining the local data. In this scheme, the input information used for searching is in the first format while the multimedia data is in the second format, so local data in the multimedia data can be searched on the basis of input information whose format differs from that of the multimedia content. Adjustment based on the content of the multimedia data is thereby realized: the user does not need to remember the specific time of each segment in the multimedia content, the search speed for the multimedia data is improved, and the user experience is improved.

Description

Information processing method and electronic equipment
Technical Field
The present application relates to the field of electronic devices, and more particularly, to an information processing method and an electronic device.
Background
During playback of multimedia content, a user may choose to adjust the playing progress of the content according to his or her needs.
In the prior art, the playing progress of multimedia content is generally adjusted on a time basis, for example fast-forwarding 10 seconds or rewinding 30 minutes.
With this adjustment method, the user needs to remember the timing of the multimedia content in order to adjust the playing progress based on that timing and jump to the content segment he or she wants to play.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
a method of processing, comprising:
obtaining input information in a first format;
processing the input information, and searching local data matched with the input information in multimedia data in a second format, wherein the first format and the second format are different;
the local data is determined.
In the above method, preferably, processing the input information and searching the multimedia data in the second format for local data matching the input information includes:
according to a preset first processing rule, converting the input information in a first format into the input information in a second format;
and searching the local data matching the input information in the multimedia data by utilizing the input information in the second format.
In the above method, preferably, before obtaining the input information in the first format, the method further includes:
According to a preset second processing rule, processing the multimedia data in the second format to obtain a local data set in the first format, wherein the local data set comprises at least two orderly local data;
processing the input information and searching the multimedia data in the second format for local data matching the input information, including:
and searching local data matched with the input information in the first format in the local data set in the first format according to the input information in the first format.
In the above method, preferably, the determining the local data includes:
determining local data corresponding to the input information in a local data set;
and determining identification information representing the relative position according to the relative position of the local data in the multimedia data.
In the above method, preferably, the processing the multimedia data in the second format according to a preset second processing rule to obtain a local data set in the first format specifically includes at least one of the following:
traversing audio information in the multimedia data to obtain key sounds contained in the audio information, wherein each key sound corresponds to local data in the multimedia data; based on a key sound in a first format and a first relative position of corresponding local data in multimedia data, establishing a corresponding relation between the key sound and the first relative position;
Or alternatively
Traversing key frames of video information in multimedia data, wherein each key frame corresponds to local data in the multimedia data, and sequentially identifying scenes corresponding to images in each key frame; according to a scene corresponding to an image in a key frame and a second relative position of corresponding local data of the key frame in the multimedia data, taking the scene as key information, and establishing a corresponding relation between the key information and the second relative position;
or alternatively
Traversing the subtitle file corresponding to the multimedia data to obtain key information contained in the subtitle file, wherein each key information corresponds to a local data in the multimedia data; and establishing a corresponding relation between the key information and a third relative position based on the key information and the third relative position of the corresponding local data in the multimedia data.
In the above method, preferably, after determining the local data, the method further includes:
and adjusting the output position of the multimedia data to the local data according to the determined identification information so as to enable the multimedia data to be continuously output based on the local data.
In the above method, preferably, after determining the identification information characterizing the relative position according to the relative position of the local data in the multimedia data, before adjusting the output position of the multimedia data to the local data according to the determined identification information, the method further includes:
Generating prompt information according to the determined at least two identification information and displaying the prompt information;
receiving feedback information, wherein the feedback information is that a user selects a first relative position in the display prompt information;
and according to the feedback information, analyzing and determining that the local data corresponding to the first relative position is the local data to be output.
In the above method, preferably, the obtaining the input information in the first format includes:
acquiring voice information, wherein the voice information is voice search information input by a user;
or alternatively
Acquiring voice information, wherein the voice information is voice search information input by a user;
and converting the voice information into text information according to a preset conversion rule.
An electronic device, comprising:
the collector is used for obtaining the input information in the first format;
a processor for processing the input information, searching for local data matching the input information in multimedia data of a second format, wherein the first format and the second format are different; the local data is determined.
An electronic device, comprising:
the acquisition module is used for acquiring the input information in the first format;
the processing module is used for processing the input information and searching local data matched with the input information in multimedia data in a second format, wherein the first format and the second format are different; the local data is determined.
As can be seen from the above technical solution, compared with the prior art, the present application provides an information processing method, comprising: obtaining input information in a first format; processing the input information and searching multimedia data in a second format for local data matching the input information, wherein the first format and the second format are different; and determining the local data. In this scheme, the input information used for searching is in the first format while the multimedia data is in the second format, so local data in the multimedia data can be searched on the basis of input information whose format differs from that of the multimedia content. Adjustment based on the content of the multimedia data is thereby realized: the user does not need to remember the specific time of each segment in the multimedia content, the search speed for the multimedia data is improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an embodiment 1 of an information processing method according to the present application;
FIG. 2 is a flowchart of an embodiment 2 of an information processing method according to the present application;
FIG. 3 is a flowchart of an embodiment 3 of an information processing method according to the present application;
FIG. 4 is a flowchart of an embodiment 4 of an information processing method according to the present application;
fig. 5 is a schematic diagram of audio information of key sound and multimedia data in embodiment 4 of an information processing method according to the present application;
FIG. 6 is a flowchart of an embodiment 5 of an information processing method according to the present application;
fig. 7 is a schematic diagram of video information of a key frame and multimedia data in embodiment 5 of an information processing method according to the present application;
FIG. 8 is a flowchart of an embodiment 6 of an information processing method according to the present application;
FIG. 9 is a flowchart of an embodiment 7 of an information processing method according to the present application;
FIG. 10 is a flowchart of an embodiment 8 of an information processing method according to the present application;
FIG. 11 is a flowchart of an embodiment 9 of an information processing method according to the present application;
fig. 12 is a schematic diagram showing prompt information in a display area of an electronic device in embodiment 9 of an information processing method provided by the present application;
Fig. 13 is a schematic structural diagram of an embodiment 1 of an electronic device according to the present application;
fig. 14 is a schematic structural diagram of an embodiment 2 of an electronic device according to the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in fig. 1, a flowchart of an embodiment 1 of an information processing method according to the present application is provided, and the method is applied to an electronic device, and includes the following steps:
step S101: obtaining input information in a first format;
the input information is information input by a user through an input device of the electronic equipment, and the input information is used for searching local data in the multimedia data.
Wherein the input information and the multimedia data in the electronic device are in different formats.
In particular, the input information may be audio, text or even video.
Specifically, the step S101 includes at least one of the following means: acquiring voice information, wherein the voice information is voice search information input by a user; or, acquiring voice information, wherein the voice information is voice search information input by a user; and converting the voice information into text information according to a preset conversion rule.
Specifically, in the above manner, local data in the multimedia data may be searched directly based on the voice information; it is also possible to convert the voice information into text information so that local data in the multimedia data is searched based on the text information.
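As an illustrative sketch only, and not the claimed implementation, the two acquisition modes of step S101 could be modelled as follows; the helpers record_voice and speech_to_text are hypothetical stand-ins for a microphone driver and a speech-recognition engine.

```python
from dataclasses import dataclass

@dataclass
class InputInfo:
    fmt: str         # the "first format", e.g. "audio" or "text"
    payload: object  # raw waveform bytes or a text string

def record_voice() -> bytes:
    """Hypothetical helper: capture voice search information entered by the user."""
    raise NotImplementedError("platform-specific microphone capture")

def speech_to_text(waveform: bytes) -> str:
    """Hypothetical helper: convert voice information to text per a preset conversion rule."""
    raise NotImplementedError("plug in any speech-recognition engine")

def obtain_input(convert_to_text: bool) -> InputInfo:
    """Step S101 sketch: mode 1 returns the voice information itself; mode 2 additionally
    converts the voice information into text information."""
    waveform = record_voice()
    if convert_to_text:
        return InputInfo(fmt="text", payload=speech_to_text(waveform))
    return InputInfo(fmt="audio", payload=waveform)
```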
Step S102: processing the input information, and searching local data matched with the input information in the multimedia data in a second format;
wherein the first format and the second format are different.
Specifically, the input information is analyzed such that local data in the multimedia data is searched based on the input information.
Wherein the multimedia data in the second format is in a completely different format from the input information.
For example, when the input information is audio, the multimedia data may be text, video, or the like; the input information is text, and the multimedia data can be audio, video and the like; when the input information is video, the multimedia data may be text, audio, etc.
In the present application, the formats of the input information and the multimedia data are not particularly limited, as long as the two formats are different from each other.
Step S103: the local data is determined.
And searching the multimedia data according to the input information to obtain the local data matched with the multimedia data.
In a specific implementation, the local data is one or more parts, such as one or more fragments, of the multimedia data.
In a specific implementation, when the multimedia data is in an audio or video format, the relative position of the local data in the multimedia data may also be determined when the local data is determined.
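For concreteness, a piece of local data together with its relative position might be represented as in the following minimal sketch; the field names are illustrative assumptions, not terms taken from the application.

```python
from dataclasses import dataclass

@dataclass
class LocalData:
    key_info: str   # key information describing the segment, in the first format (e.g. text)
    start_s: float  # relative position: offset of the segment from the start, in seconds
    end_s: float    # end of the segment, in seconds

# Example: a segment in which a plane takes off, located at the 50th minute of the media.
segment = LocalData(key_info="the aircraft takes off", start_s=50 * 60, end_s=52 * 60)
```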
In summary, the information processing method provided in this embodiment comprises: obtaining input information in a first format; processing the input information and searching multimedia data in a second format for local data matching the input information, wherein the first format and the second format are different; and determining the local data. In this scheme, the input information used for searching is in the first format while the multimedia data is in the second format, so local data in the multimedia data can be searched on the basis of input information whose format differs from that of the multimedia content. Adjustment based on the content of the multimedia data is thereby realized: the user does not need to remember the specific time of each segment in the multimedia content, the search speed for the multimedia data is improved, and the user experience is improved.
As shown in fig. 2, a flowchart of an embodiment 2 of an information processing method provided by the present application includes the following steps:
step S201: obtaining input information in a first format;
the step S201 is identical to the step S101 in embodiment 1, and is not described in detail in this embodiment.
Step S202: according to a preset first processing rule, converting the input information in a first format into the input information in a second format;
wherein, since the input information is different from the multimedia data format, a format conversion is required in order to search the multimedia data according to the input information.
In this embodiment, the input information is converted into the same format as the multimedia data.
Specifically, according to a first processing rule, the input information in the first format is converted into the input information in the second format.
As a specific example, the first format is audio, and when the second format is text, the input information of the audio is converted into the input information of the text.
As a specific example, when the first format is audio and the second format is video, since the video is composed of a large number of images, the input information of the audio is converted into a short video composed of a plurality of frames of images, so that the search can be performed according to the short video in the subsequent step.
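A rough sketch of such a format dispatch under a preset first processing rule; the converter functions are hypothetical placeholders for whatever speech-recognition, speech-synthesis, or audio-to-video tooling is actually used, and are not part of the application.

```python
from typing import Any, Callable, Dict, Tuple

def speech_to_text(waveform: bytes) -> str:
    """Hypothetical: recognise speech in audio input (audio -> text)."""
    raise NotImplementedError

def audio_to_short_video(waveform: bytes) -> list:
    """Hypothetical: render audio input as a short video, i.e. a list of image frames."""
    raise NotImplementedError

def text_to_speech(text: str) -> bytes:
    """Hypothetical: synthesise text input as audio."""
    raise NotImplementedError

# Preset "first processing rule": one converter per (first format, second format) pair.
CONVERTERS: Dict[Tuple[str, str], Callable[[Any], Any]] = {
    ("audio", "text"): speech_to_text,
    ("audio", "video"): audio_to_short_video,
    ("text", "audio"): text_to_speech,
}

def apply_first_processing_rule(payload: Any, first_fmt: str, second_fmt: str) -> Any:
    """Convert input information from the first format into the second format."""
    if (first_fmt, second_fmt) not in CONVERTERS:
        raise ValueError(f"no preset rule for {first_fmt} -> {second_fmt}")
    return CONVERTERS[(first_fmt, second_fmt)](payload)
```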
Step S203: searching the multimedia data for local data matching the input information by using the input information in the second format;
the input information is in a second format, and based on the input information in the second format, the multimedia data is directly searched for matching local data.
In particular implementations, the matches may be identical or have a similarity greater than a threshold.
As a specific example, when the second format is text, then the matching may be that the similarity between text content is greater than 90%;
as a specific example, when the second format is video, then the matching may be that the similarity between the content of the video is greater than 80%.
It should be noted that, the matched threshold may be set according to practical situations, and the threshold is not limited in the present application.
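A minimal sketch of this threshold-based matching, assuming the local data has already been rendered as text (the second format in the example above); it uses Python's standard difflib for the similarity measure, which is merely one possible choice.

```python
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two strings."""
    return SequenceMatcher(None, a, b).ratio()

def find_matches(query: str, segments: dict, threshold: float = 0.9) -> list:
    """segments maps a segment's text (second format = text) to its offset in seconds;
    returns (text, offset) pairs whose similarity to the query exceeds the threshold."""
    return [(text, offset) for text, offset in segments.items()
            if text_similarity(query, text) > threshold]

# Usage: the query has already been converted into the second format by the first processing rule.
index = {"the male lead boards the plane": 50 * 60.0,
         "the male lead is injured at home": 71 * 60.0}
print(find_matches("the male lead boards a plane", index, threshold=0.8))
```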
Step S204: the local data is determined.
The step S204 is identical to the step S103 in embodiment 1, and will not be described in detail in this embodiment.
In summary, in the information processing method provided in this embodiment, processing the input information and searching the multimedia data in the second format for local data matching the input information includes: converting the input information in the first format into input information in the second format according to a preset first processing rule; and searching the multimedia data for local data matching the input information by using the input information in the second format. With this scheme, by converting the input information into the same format as the multimedia data, matching local data can be searched for in the multimedia data based on the input information.
As shown in fig. 3, a flowchart of an embodiment 3 of an information processing method provided by the present application includes the following steps:
step S301: according to a preset second processing rule, processing the multimedia data in the second format to obtain a local data set in the first format;
wherein the local data set comprises at least two local data in order.
Wherein, since the input information is different from the multimedia data format, a format conversion is required in order to search the multimedia data according to the input information.
In this embodiment, the multimedia data is converted into the first format in advance.
The local data set may be a local data set obtained by dividing the multimedia data at fixed time intervals, or may be a local data set obtained by dividing the multimedia data according to the content progress.
In a specific implementation, the multimedia data in the second format may be divided into a plurality of local data according to a time interval, then key information of each local data is extracted, the key information is assembled in the form of the first format, and each key information corresponds to one local data.
In a specific implementation, the key information in the multimedia data in the second format may be extracted, the part to which the key information belongs is used as local data, and then the key information is assembled in the form of the first format, where each key information corresponds to one local data.
It should be noted that, in the local data set in the first format, a relative position of each local data in the multimedia data is also recorded.
As a specific example, the first format is text, and when the second format is video, the video is converted to text.
As a specific example, the first format is text and the second format is audio, then the audio is converted to text.
Of course, in the implementation, the local data set may also be obtained by processing according to feedback information received by the terminal.
The method specifically comprises the following steps: receiving feedback information, wherein the feedback information is a keyword which is acquired by a terminal and is input by a user aiming at a first playing position of the multimedia data; based on the feedback information, establishing a corresponding relation between the keyword and the local data of the first playing position, and adding the corresponding relation and the local data into the local data set.
In a specific implementation, the method of processing according to feedback information may be applied in combination with processing multimedia data to obtain a local data set in this embodiment.
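The following sketch illustrates one way such a local data set could be built by dividing the multimedia data at fixed time intervals and then extended with user feedback; cut and extract_key_info are hypothetical helpers, and the 60-second interval is only an assumption.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    key_info: str   # key information in the first format (here: text)
    start_s: float  # relative position of the local data in the multimedia data, in seconds

def cut(media, start_s: float, end_s: float):
    """Hypothetical: return the portion of the media between the two offsets."""
    raise NotImplementedError

def extract_key_info(segment) -> str:
    """Hypothetical: summarise one piece of local data as first-format key information."""
    raise NotImplementedError

def build_local_data_set(media, duration_s: float, interval_s: float = 60.0) -> list:
    """Second processing rule (sketch): divide the media at fixed time intervals and
    extract key information for each resulting piece of local data."""
    entries = []
    start = 0.0
    while start < duration_s:
        segment = cut(media, start, min(start + interval_s, duration_s))
        entries.append(IndexEntry(key_info=extract_key_info(segment), start_s=start))
        start += interval_s
    return entries

def add_user_keyword(local_data_set: list, keyword: str, play_position_s: float) -> None:
    """Feedback path: record a keyword entered by the user for the first playing position."""
    local_data_set.append(IndexEntry(key_info=keyword, start_s=play_position_s))
```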
Step S302: obtaining input information in a first format;
the step S302 is identical to the step S101 in embodiment 1, and will not be described in detail in this embodiment.
Step S303: searching local data matched with the input information in the local data set of the first format according to the input information of the first format;
the input information and the local data set have the same format, so the local data set can be searched directly based on the input information to obtain the local data matching the input information.
Note that the matching may be completely identical, or the similarity may be greater than a certain threshold.
As a specific example, when the first format is text, then the matching may be that the similarity between text content is greater than 86%;
as a specific example, when the first format is video, then the matching may be that the similarity between the content of the video images is greater than 70%.
It should be noted that, the matched threshold may be set according to practical situations, and the threshold is not limited in the present application.
Step S304: the local data is determined.
The step S304 is identical to the step S103 in embodiment 1, and will not be described in detail in this embodiment.
In summary, in the information processing method provided in this embodiment, before obtaining the input information in the first format, the method further includes: according to a preset second processing rule, processing the multimedia data in the second format to obtain a local data set in the first format, wherein the local data set comprises at least two orderly local data; processing the input information and searching the multimedia data in the second format for local data matching the input information, including: and searching local data matched with the input information in the first format in the local data set in the first format according to the input information in the first format. By adopting the scheme, the local data set of the first format is obtained by processing the multimedia data of the second format in advance, and the local data set of the first format can be searched directly according to the input information of the first format to obtain matched local data.
As shown in fig. 4, a flowchart of an embodiment 4 of an information processing method provided by the present application includes the following steps:
Step S401: traversing audio information in the multimedia data to obtain key sounds contained in the audio information;
wherein each key tone corresponds to a local data in the multimedia data.
When the multimedia data contains audio information, key information in the audio information of the multimedia data is extracted.
Wherein the key information is a key sound, and the key sound can be one syllable, two syllables or even a plurality of syllables.
In a specific implementation, the audio data of the multimedia data may include a plurality of key tones, and the number of syllables included in each key tone may be different.
It should be noted that, each key tone corresponds to a local data in the multimedia data, and the key tone characterizes the content of the corresponding local data.
As a specific example, the audio information of the multimedia data includes a section of speech such as "the aircraft is about to take off; passengers who have not checked in, please check in as soon as possible", together with background sound; when traversing the audio information of the multimedia data, corresponding key sounds such as "aircraft" and "check in" may be extracted.
In a specific implementation, the extracted key sound may be a human voice in the audio information, the roar of a car engine, an animal call, and the like; the specific content of the key sound is not limited in the present application.
Step S402: based on a key sound in a first format and a first relative position of corresponding local data in multimedia data, establishing a corresponding relation between the key sound and the first relative position;
the key sound is a component of the multimedia data; the key sound is converted into the first format, and the first relative position of the corresponding local data in the multimedia data is determined.
In particular, the first relative position characterizes the relative playing time of the local data during playback, which may be, for example, the 50th minute from the start time of the multimedia data.
And establishing a corresponding relation between the key sound in the first format and the first relative position.
In a specific implementation, the multimedia data has a plurality of key sounds, and then corresponding relations between the key sounds and corresponding relative positions of the key sounds are sequentially established.
Fig. 5 is a schematic diagram of audio information of key tones and multimedia data, in which the audio information 501 includes five key tones, respectively, key tones 1-5, where each key tone corresponds to a local data 502-506.
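A minimal sketch of building the key sound to first relative position correspondence, assuming a hypothetical detect_key_sounds helper that stands in for the audio traversal and key-sound extraction described above.

```python
from dataclasses import dataclass

@dataclass
class KeySound:
    text: str        # the key sound rendered in the first format (e.g. recognised text)
    offset_s: float  # where the key sound occurs in the audio information, in seconds

def detect_key_sounds(audio_track) -> list:
    """Hypothetical: traverse the audio information and return the key sounds it contains
    (speech keywords, engine roar, animal calls, ...), each with its offset."""
    raise NotImplementedError

def index_by_key_sound(audio_track) -> dict:
    """Build the correspondence: key sound (first format) -> first relative position,
    e.g. {"aircraft": 3000.0, "check in": 3010.0} for the announcement example above."""
    return {ks.text: ks.offset_s for ks in detect_key_sounds(audio_track)}
```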
Step S403: obtaining input information in a first format;
step S404: searching local data matched with the input information in the local data set of the first format according to the input information of the first format;
Step S405: the local data is determined.
The steps S403-405 are identical to the steps S302-304 in embodiment 3, and will not be described in detail in this embodiment.
In summary, the information processing method provided in this embodiment includes: traversing audio information in the multimedia data to obtain key sounds contained in the audio information, wherein each key sound corresponds to local data in the multimedia data; and establishing a corresponding relation between the key sound and the first relative position based on the key sound in the first format and the first relative position of the corresponding local data in the multimedia data. By adopting the scheme, the audio information of the multimedia data is analyzed, the key sound in the first format is determined, the first relative position of the local data corresponding to the key sound in the multimedia data is determined based on the key sound, the corresponding relation between the key sound and the first relative position can be established, and the corresponding content of each key sound in the multimedia data can be determined based on the corresponding relation.
As shown in fig. 6, a flowchart of an embodiment 5 of an information processing method provided by the present application, the method includes the following steps:
step S601: traversing key frames of video information in the multimedia data, and sequentially identifying scenes corresponding to images in each key frame;
Wherein each key frame corresponds to a local data in the multimedia data.
When the multimedia data contains video information, the key information in the video information can be determined by traversing the key frames in the video information.
Specifically, each key frame is identified according to a preset image identification rule, and a corresponding scene in the image is obtained.
In a specific implementation, a key frame occurs once every preset number of frames in the video information, so the content of each key frame image can be analyzed to obtain the corresponding scene.
Step S602: according to a scene corresponding to an image in a key frame and a second relative position of corresponding local data of the key frame in the multimedia data, taking the scene as key information, and establishing a corresponding relation between the key information and the second relative position;
according to the position of the key frame in the multimedia data, the corresponding local data can be determined through a preset acquisition rule, and the second relative position of the corresponding local data in the multimedia data is obtained.
In a specific implementation, the local data corresponding to the key frame may be acquired according to a preset acquisition rule, for example, an image of a preset frame after the key frame.
Specifically, the scene identified from the key frame is taken as key information, and the correspondence between the key information and the second relative position is established.
In a specific implementation, the multimedia data has a plurality of key frames, and then corresponding relations between each key information and the corresponding second relative positions are sequentially established.
Fig. 7 is a schematic diagram of video information of a key frame and multimedia data, where the video information 701 includes 6 key frames, i.e., key frames 1-6 frames, respectively, and each key frame corresponds to one local data 702-707.
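A minimal sketch of the key-frame traversal and scene indexing, assuming one key frame per preset number of frames and a hypothetical recognize_scene helper standing in for the preset image identification rule.

```python
from dataclasses import dataclass

@dataclass
class KeyFrame:
    index: int       # frame number within the video information
    offset_s: float  # second relative position of the corresponding local data, in seconds

def iter_key_frames(video, every_n_frames: int, fps: float):
    """Sketch: treat one frame out of every `every_n_frames` as a key frame."""
    for i, frame in enumerate(video):
        if i % every_n_frames == 0:
            yield KeyFrame(index=i, offset_s=i / fps), frame

def recognize_scene(image) -> str:
    """Hypothetical: preset image-recognition rule that names the scene in a key frame."""
    raise NotImplementedError

def index_by_scene(video, every_n_frames: int = 250, fps: float = 25.0) -> dict:
    """Build the correspondence: scene (key information) -> second relative position."""
    return {recognize_scene(frame): kf.offset_s
            for kf, frame in iter_key_frames(video, every_n_frames, fps)}
```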
Step S603: obtaining input information in a first format;
step S604: searching local data matched with the input information in the local data set of the first format according to the input information of the first format;
step S605: the local data is determined.
The steps S603-605 are identical to the steps S302-304 in the embodiment 3, and are not described in detail in this embodiment.
In summary, the information processing method provided in this embodiment includes: traversing key frames of video information in multimedia data, wherein each key frame corresponds to local data in the multimedia data, and sequentially identifying scenes corresponding to images in each key frame; and establishing a corresponding relation between the key information and the second relative position by taking the scene as the key information according to the scene corresponding to the image in the key frame and the second relative position of the corresponding local data of the key frame in the multimedia data. By adopting the scheme, the scene corresponding to the key frame in the video information is analyzed, the second relative position of the local data corresponding to the key frame in the multimedia data is determined, the corresponding relation between the scene and the local data is established, and each scene in the video information of the multimedia data can be determined based on the corresponding relation.
As shown in fig. 8, a flowchart of an embodiment 6 of an information processing method provided by the present application, the method includes the following steps:
step S801: traversing the subtitle file corresponding to the multimedia data to obtain key information contained in the subtitle file;
wherein each key information corresponds to a local data in the multimedia data.
The subtitle file contains textual content corresponding to the multimedia data, and specifically may contain a title, a cast and crew list, lyrics, dialogue, captions, character introductions, place names, years, and the like.
Specifically, the key information in the subtitle file may include a word, a phrase, a sentence, and the like.
Specifically, each key information corresponds to a local data of the multimedia data.
Step S802: and establishing a corresponding relation between the key information and a third relative position based on the key information and the third relative position of the corresponding local data in the multimedia data.
The subtitle file and the multimedia data have a correspondence relationship, and generally correspond to each other with a time axis of playback as a reference.
Specifically, according to the corresponding relation between the subtitle file and the multimedia data, local data corresponding to the key information in the subtitle file is determined in the multimedia data, and the local data is at a third relative position of the multimedia data.
Specifically, the correspondence between the key information and the third relative position is established.
In a specific implementation, the subtitle file contains a plurality of pieces of key information, and the correspondences between each piece of key information and its corresponding third relative position are established in turn.
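The subtitle traversal can be sketched against the common SRT layout (an index line, a HH:MM:SS,mmm --> HH:MM:SS,mmm line, then text); whether the subtitle files contemplated by the application actually use this layout is an assumption.

```python
import re

TIME_RE = re.compile(r"(\d+):(\d+):(\d+),(\d+)\s*-->")

def to_seconds(h: str, m: str, s: str, ms: str) -> float:
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def index_subtitles(srt_text: str) -> dict:
    """Map each subtitle line (key information) to the third relative position of the
    corresponding local data, using the shared playback time axis as the reference."""
    index: dict = {}
    current_offset = None
    for line in srt_text.splitlines():
        match = TIME_RE.match(line.strip())
        if match:
            current_offset = to_seconds(*match.groups())
        elif line.strip() and not line.strip().isdigit() and current_offset is not None:
            index[line.strip()] = current_offset
    return index

example = """1
00:50:00,000 --> 00:50:04,000
The male lead boards the plane
"""
print(index_subtitles(example))  # {'The male lead boards the plane': 3000.0}
```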
Step S803: obtaining input information in a first format;
step S804: searching local data matched with the input information in the local data set of the first format according to the input information of the first format;
step S805: the local data is determined.
The steps S803-805 are identical to the steps S302-304 in the embodiment 3, and are not described in detail in this embodiment.
In summary, the information processing method provided in this embodiment includes: traversing the subtitle file corresponding to the multimedia data to obtain the key information contained in the subtitle file, wherein each piece of key information corresponds to a piece of local data in the multimedia data; and establishing a correspondence between the key information and a third relative position based on the key information and the third relative position of the corresponding local data in the multimedia data. With this scheme, the key information in the subtitle file is obtained and the corresponding local data in the multimedia data is determined, so the correspondence between the key information and the third relative position of the local data in the multimedia data can be established, and the content corresponding to each piece of key information in the multimedia data can be determined based on this correspondence.
As shown in fig. 9, a flowchart of an embodiment 7 of an information processing method provided by the present application, the method includes the following steps:
step S901: obtaining input information in a first format;
step S902: processing the input information, and searching local data matched with the input information in the multimedia data in a second format;
the steps S901-902 are identical to the steps S101-102 in embodiment 1, and are not described in detail in this embodiment.
Step S903: determining local data corresponding to the input information in a local data set;
the multimedia data is made up of a plurality of pieces of local data, and combining these pieces of local data yields the complete multimedia data.
Specifically, local data corresponding to the input information is determined in the local data set based on the input information.
Step S904: and determining identification information representing the relative position according to the relative position of the local data in the multimedia data.
In a specific implementation, the local data set also records the relative position of each local data in the multimedia data.
After determining the local data corresponding to the input information, the relative position of the local data in the multimedia data can be obtained.
Accordingly, corresponding identification information is determined in the multimedia data based on the relative position.
The identification information may be an identification added in advance in the multimedia data, which may be a content identifying the local data at the addition position thereof.
In a specific implementation, the multimedia data also carries identification information representing its content. The identification information corresponds to standard local data divided in advance according to preset rules; the position of each piece of standard local data and its corresponding identification information are known, and the content corresponding to each piece of standard local data is a complete scene/plot.
Specifically, the relative position of the local data determined from the input information is located in the multimedia data. Taking this relative position as a range, it may overlap with the range corresponding to certain preset identification information, so the identification information whose range overlaps can be selected according to the local data determined from the input information.
In a specific implementation, the relative position may be a position determined according to a play time of the multimedia data.
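A minimal sketch of selecting the identification information whose range overlaps the relative position of the matched local data; the range representation and the example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Identification:
    label: str      # identification information describing a complete scene / plot
    start_s: float  # known start of the standard local data it was added to, in seconds
    end_s: float    # known end of the standard local data, in seconds

def overlaps(a_start: float, a_end: float, b_start: float, b_end: float) -> bool:
    return a_start < b_end and b_start < a_end

def identify(local_start_s: float, local_end_s: float, identifications: list) -> list:
    """Select the pre-added identification information whose range overlaps the
    relative position (taken as a range) of the local data determined from the input."""
    return [ident for ident in identifications
            if overlaps(local_start_s, local_end_s, ident.start_s, ident.end_s)]

# Illustrative values only: a match around minute 50 falls inside the "take-off" scene.
presets = [Identification("take-off scene", 49 * 60, 53 * 60),
           Identification("injury scene", 70 * 60, 74 * 60)]
print([i.label for i in identify(50 * 60, 51 * 60, presets)])  # ['take-off scene']
```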
In summary, in the information processing method provided in this embodiment, the determining the local data includes: determining local data corresponding to the input information in a local data set; and determining identification information representing the relative position according to the relative position of the local data in the multimedia data. By adopting the scheme, the relative position of the content corresponding to the input information in the multimedia data can be determined by selecting the local data corresponding to the input information and determining the relative position of the local data in the multimedia data.
As shown in fig. 10, a flowchart of an embodiment 8 of an information processing method provided by the present application, the method includes the following steps:
step S1001: obtaining input information in a first format;
step S1002: processing the input information, and searching local data matched with the input information in the multimedia data in a second format;
step S1003: determining local data corresponding to the input information in a local data set;
step S1004: determining identification information characterizing the relative position according to the relative position of the local data in the multimedia data;
the steps S1001-1004 are identical to the steps S901-904 in embodiment 7, and are not described in detail in this embodiment.
Step S1005: and adjusting the output position of the multimedia data to the local data according to the determined identification information so as to enable the multimedia data to be continuously output based on the local data.
After the identification information representing the relative position of the local data in the multimedia data is known, the output position of the multimedia data is adjusted according to the identification information.
The output position specifically refers to the current playing position of the multimedia data.
Specifically, after the output position of the multimedia data is adjusted to the local data corresponding to the input information, the multimedia data is output from the local data, so as to realize the process of adjusting the progress of the multimedia data.
For example, during playback of the multimedia data, the current playing time has reached the 10th minute of the multimedia data. A piece of local data is determined based on the input information "the male lead boards the plane"; this local data corresponds to a first identifier, the relative position in the multimedia data of the local data corresponding to the first identifier is the 50th minute, and the output position of the multimedia data is adjusted to the 50th-minute position according to the identification information.
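A minimal sketch of adjusting the output position according to the determined identification information; the Player class is a toy stand-in for the actual multimedia output pipeline.

```python
class Player:
    """Toy stand-in for the multimedia output; only the playing position matters here."""

    def __init__(self) -> None:
        self.position_s = 0.0

    def seek(self, position_s: float) -> None:
        # Adjust the output position so playback continues from the local data.
        self.position_s = position_s

def jump_to_local_data(player: Player, identified_offset_s: float) -> None:
    """Adjust the output position of the multimedia data according to the determined
    identification information (its relative position)."""
    player.seek(identified_offset_s)

player = Player()
player.position_s = 10 * 60.0          # currently at the 10th minute
jump_to_local_data(player, 50 * 60.0)  # jump to the 50th minute, as in the example above
print(player.position_s / 60)          # 50.0
```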
In summary, the information processing method provided in this embodiment further includes: and adjusting the output position of the multimedia data to the local data according to the determined identification information so as to enable the multimedia data to be continuously output based on the local data. By adopting the scheme, the output position of the multimedia data is adjusted based on the relative position of the local data so as to realize the adjustment of the playing progress of the multimedia data in response to the input information.
As shown in fig. 11, a flowchart of an embodiment 9 of an information processing method provided by the present application, the method includes the following steps:
Step S1101: obtaining input information in a first format;
step S1102: processing the input information, and searching local data matched with the input information in the multimedia data in a second format;
step S1103: determining local data corresponding to the input information in a local data set;
step S1104: determining identification information characterizing the relative position according to the relative position of the local data in the multimedia data;
the steps S1101-1104 are identical to the steps S1001-1004 in embodiment 8, and are not described in detail in this embodiment.
Step S1105: generating prompt information according to the determined at least two identification information and displaying the prompt information;
in a specific implementation, since a multimedia data may have a plurality of similar scenes/episodes, a plurality of local data may be searched in the multimedia data according to the input information, and then a plurality of identification information may be determined.
Correspondingly, prompt information is generated based on the plurality of identification information and displayed in a display area of the electronic equipment so as to prompt a user.
Step S1106: receiving feedback information;
the feedback information indicates that the user has selected a first relative position from the displayed prompt information.
Specifically, after the user sees the prompt information, one of the prompt information can be selected as required, and the selected information is used as feedback information to the electronic equipment.
The feedback information may be in the same first format as the input information, or may be in another format different from the input information, which is not limited in the present application.
Step S1107: according to the feedback information, analyzing and determining that the local data corresponding to the first relative position is the local data to be output;
and based on the feedback information, analyzing to obtain the first relative position, and selecting local data corresponding to the first relative position as the local data to be output, so that the output position of the multimedia data is adjusted to the local data to be output in a subsequent step.
Fig. 12 is a schematic diagram showing prompt information in a display area of an electronic device. The display area 1201 shows the playing progress of a video on a time axis 1202. The user inputs the search content "the male lead is injured" by voice; two corresponding pieces of local data 1203-1204 are obtained through the search, and the identification information corresponding to the two pieces of local data, "the male lead is injured in the wild" and "the male lead is injured at home", is marked in bold on the time axis. The user can select the output position 1203 of "the male lead is injured in the wild" as required, so that the playing progress of the video is adjusted to the output position corresponding to "the male lead is injured in the wild".
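A minimal sketch of displaying the prompt information and receiving the user's feedback; the console-based interaction is an illustrative assumption and not how the display area of Fig. 12 is implemented.

```python
def choose_local_data(candidates: dict) -> float:
    """Display prompt information for all matched identifications and let the user pick
    the first relative position whose local data should be output."""
    labels = list(candidates)
    for i, label in enumerate(labels, start=1):
        print(f"{i}. {label}  (at {candidates[label] / 60:.0f} min)")
    choice = int(input("Select a segment number: "))  # feedback information from the user
    return candidates[labels[choice - 1]]

# Mirrors Fig. 12: two matches for "the male lead is injured"; the chosen offset would then
# be passed to the seek step from the previous embodiment.
matches = {"the male lead is injured in the wild": 35 * 60.0,
           "the male lead is injured at home": 71 * 60.0}
# offset = choose_local_data(matches)   # commented out: requires interactive input
```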
Step S1108: and adjusting the output position of the multimedia data to the local data so as to continue outputting based on the local data.
The step S1108 corresponds to the step S1005 in embodiment 8, and is not described in detail in this embodiment.
In summary, the information processing method provided in this embodiment further includes: generating prompt information according to the determined at least two identification information and displaying the prompt information; receiving feedback information, wherein the feedback information is that a user selects a first relative position in the display prompt information; and according to the feedback information, analyzing and determining that the local data corresponding to the first relative position is the local data to be output. By adopting the scheme, when the multimedia data has a plurality of local data corresponding to the input information, the first relative position can be selected according to feedback information input by the user again, the corresponding local data is determined to be the local data to be output, and accurate positioning according to the selection of the user can be realized.
Corresponding to the embodiment of the information processing method provided by the application, the application also provides an embodiment of the electronic equipment applying the information processing method.
As shown in fig. 13, a schematic structural diagram of an embodiment 1 of an electronic device according to the present application includes the following structures: collector 1301 and processor 1302;
The collector 1301 is configured to obtain input information in a first format;
in a specific implementation, the collector is a component capable of acquiring input, such as a camera, a microphone, a keyboard, or a mouse.
Wherein, the processor 1302 is configured to process the input information, and search for local data matching the input information in multimedia data in a second format, where the first format and the second format are different; the local data is determined.
In particular, the processor is a structure having data processing capabilities, such as a CPU (central processing unit).
Preferably, processing the input information and searching the multimedia data in the second format for local data matching the input information includes:
according to a preset first processing rule, converting the input information in a first format into the input information in a second format;
and searching the local data matching the input information in the multimedia data by utilizing the input information in the second format.
Preferably, before the obtaining the input information in the first format, the method further includes:
according to a preset second processing rule, processing the multimedia data in the second format to obtain a local data set in the first format, wherein the local data set comprises at least two orderly local data;
Processing the input information and searching the multimedia data in the second format for local data matching the input information, including:
and searching local data matched with the input information in the first format in the local data set in the first format according to the input information in the first format.
Preferably, the determining the local data includes:
determining local data corresponding to the input information in a local data set;
and determining identification information representing the relative position according to the relative position of the local data in the multimedia data.
Preferably, the processing the multimedia data in the second format according to a preset second processing rule to obtain a local data set in the first format specifically includes at least one of the following:
traversing audio information in the multimedia data to obtain key sounds contained in the audio information, wherein each key sound corresponds to local data in the multimedia data; based on a key sound in a first format and a first relative position of corresponding local data in multimedia data, establishing a corresponding relation between the key sound and the first relative position;
or alternatively
Traversing key frames of video information in multimedia data, wherein each key frame corresponds to local data in the multimedia data, and sequentially identifying scenes corresponding to images in each key frame; according to a scene corresponding to an image in a key frame and a second relative position of corresponding local data of the key frame in the multimedia data, taking the scene as key information, and establishing a corresponding relation between the key information and the second relative position;
or alternatively
Traversing the subtitle file corresponding to the multimedia data to obtain key information contained in the subtitle file, wherein each key information corresponds to a local data in the multimedia data; and establishing a corresponding relation between the key information and a third relative position based on the key information and the third relative position of the corresponding local data in the multimedia data.
Preferably, after determining the local data, the method further includes:
and adjusting the output position of the multimedia data to the local data according to the determined identification information so as to enable the multimedia data to be continuously output based on the local data.
Preferably, after determining the identification information characterizing the relative position according to the relative position of the local data in the multimedia data, before adjusting the output position of the multimedia data to the local data according to the determined identification information, the method further includes:
Generating prompt information according to the determined at least two identification information and displaying the prompt information;
receiving feedback information, wherein the feedback information is that a user selects a first relative position in the display prompt information;
and according to the feedback information, analyzing and determining that the local data corresponding to the first relative position is the local data to be output.
Preferably, the obtaining the input information in the first format includes:
acquiring voice information, wherein the voice information is voice search information input by a user;
or alternatively
Acquiring voice information, wherein the voice information is voice search information input by a user;
and converting the voice information into text information according to a preset conversion rule.
In summary, in the electronic device provided in this embodiment, the input information used for searching is in the first format and the multimedia data is in the second format. With this scheme, local data in the multimedia data can be searched on the basis of input information whose format differs from that of the multimedia content, so that adjustment based on the content of the multimedia data is realized: the user does not need to remember the specific time of each segment in the multimedia content, the search speed for the multimedia data is improved, and the user experience is improved.
As shown in fig. 14, a schematic structural diagram of an embodiment 2 of an electronic device according to the present application includes the following structures: an acquisition module 1401 and a processing module 1402;
wherein, the obtaining module 1401 is configured to obtain input information in a first format;
the processing module 1402 is configured to process the input information, and search for local data matching the input information in multimedia data in a second format, where the first format and the second format are different; the local data is determined.
Preferably, processing the input information and searching the multimedia data in the second format for local data matching the input information includes:
according to a preset first processing rule, converting the input information in a first format into the input information in a second format;
and searching the local data matching the input information in the multimedia data by utilizing the input information in the second format.
Preferably, before the obtaining the input information in the first format, the method further includes:
according to a preset second processing rule, processing the multimedia data in the second format to obtain a local data set in the first format, wherein the local data set comprises at least two orderly local data;
Processing the input information and searching the multimedia data in the second format for local data matching the input information, including:
and searching local data matched with the input information in the first format in the local data set in the first format according to the input information in the first format.
Preferably, the determining the local data includes:
determining local data corresponding to the input information in a local data set;
and determining identification information representing the relative position according to the relative position of the local data in the multimedia data.
Preferably, the processing the multimedia data in the second format according to a preset second processing rule to obtain a local data set in the first format specifically includes at least one of the following:
traversing audio information in the multimedia data to obtain key sounds contained in the audio information, wherein each key sound corresponds to local data in the multimedia data; based on a key sound in a first format and a first relative position of corresponding local data in multimedia data, establishing a corresponding relation between the key sound and the first relative position;
or alternatively
Traversing key frames of video information in multimedia data, wherein each key frame corresponds to local data in the multimedia data, and sequentially identifying scenes corresponding to images in each key frame; according to a scene corresponding to an image in a key frame and a second relative position of corresponding local data of the key frame in the multimedia data, taking the scene as key information, and establishing a corresponding relation between the key information and the second relative position;
or alternatively
Traversing the subtitle file corresponding to the multimedia data to obtain key information contained in the subtitle file, wherein each key information corresponds to a local data in the multimedia data; and establishing a corresponding relation between the key information and a third relative position based on the key information and the third relative position of the corresponding local data in the multimedia data.
Preferably, after determining the local data, the method further includes:
and adjusting the output position of the multimedia data to the local data according to the determined identification information so as to enable the multimedia data to be continuously output based on the local data.
Preferably, after determining the identification information characterizing the relative position according to the relative position of the local data in the multimedia data, before adjusting the output position of the multimedia data to the local data according to the determined identification information, the method further includes:
Generating prompt information according to the determined at least two identification information and displaying the prompt information;
receiving feedback information, wherein the feedback information is that a user selects a first relative position in the display prompt information;
and according to the feedback information, analyzing and determining that the local data corresponding to the first relative position is the local data to be output.
Preferably, obtaining the input information in the first format includes:
acquiring voice information, wherein the voice information is voice search information input by a user;
or
acquiring voice information, wherein the voice information is voice search information input by a user;
and converting the voice information into text information according to a preset conversion rule.
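For the voice branch, the preset conversion rule amounts to some speech-to-text step. The sketch below only shows the data flow and uses a hypothetical `asr_engine` callable in place of a real recognition service; no particular ASR library is implied by the patent.

```python
from typing import Callable

def obtain_input_information(audio_bytes: bytes,
                             asr_engine: Callable[[bytes], str]) -> str:
    """Obtain first-format (text) input information from the user's voice search.

    `asr_engine` stands in for whatever speech-to-text service the device uses
    (the preset conversion rule); it is not specified by the method itself.
    """
    voice_text = asr_engine(audio_bytes)
    return voice_text.strip().lower()

# Usage with a dummy engine, only to show the data flow:
fake_engine = lambda _audio: "Car chase"
query = obtain_input_information(b"\x00\x01", fake_engine)
print(query)   # "car chase"
```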
In summary, in the electronic device provided in this embodiment, the input information used for searching is in the first format while the multimedia data is in the second format. With this scheme, local data in the multimedia data can be searched based on input information in a format different from that of the multimedia content, so that adjustment based on the content of the multimedia data is realized. The user does not need to remember the specific time of each segment in the multimedia content, the speed of searching and adjusting the multimedia data is improved, and the user experience is improved.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. Since the device provided in an embodiment corresponds to the method provided in that embodiment, its description is brief; for relevant details, refer to the description of the method.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method of processing, comprising:
according to a preset second processing rule, processing multimedia data in a second format to obtain a local data set in a first format, wherein the local data set comprises at least two ordered pieces of local data, the local data set is obtained by dividing the multimedia data according to content progress, and each piece of local data corresponds to a piece of key information;
obtaining input information in the first format;
processing the input information, searching the multimedia data in the second format for local data matching the input information, and marking identification information and key information corresponding to the local data on a time axis corresponding to the multimedia data, wherein the first format and the second format are different, and the identification information is used to represent the relative position of the local data in the multimedia data;
determining the local data;
wherein processing the input information and searching the multimedia data in the second format for local data matching the input information includes:
searching, according to the input information in the first format and based on the key information, the local data set in the first format for local data matching the input information.
2. The method of claim 1, wherein processing the input information and searching the multimedia data in the second format for local data matching the input information comprises:
converting the input information in the first format into input information in the second format according to a preset first processing rule;
and searching the multimedia data for the local data matching the input information by using the input information in the second format.
3. The method of claim 1, wherein determining the local data comprises:
determining, in the local data set, the local data corresponding to the input information;
and determining identification information representing the relative position according to the relative position of the local data in the multimedia data.
4. The method of claim 1, wherein processing the multimedia data in the second format according to the preset second processing rule to obtain the local data set in the first format comprises at least one of the following:
traversing the audio information in the multimedia data to obtain the key sounds contained in the audio information, wherein each key sound corresponds to a piece of local data in the multimedia data; and establishing, based on a key sound in the first format and a first relative position of the corresponding local data in the multimedia data, a correspondence between the key sound and the first relative position;
or
traversing the key frames of the video information in the multimedia data, wherein each key frame corresponds to a piece of local data in the multimedia data, and sequentially identifying the scene corresponding to the image in each key frame; and taking the scene as key information and establishing, according to the scene corresponding to the image in a key frame and a second relative position of that key frame's local data in the multimedia data, a correspondence between the key information and the second relative position;
or
traversing the subtitle file corresponding to the multimedia data to obtain the key information contained in the subtitle file, wherein each piece of key information corresponds to a piece of local data in the multimedia data; and establishing, based on the key information and a third relative position of the corresponding local data in the multimedia data, a correspondence between the key information and the third relative position.
5. The method of claim 3, further comprising, after determining the local data:
adjusting the output position of the multimedia data to the local data according to the determined identification information, so that the multimedia data continues to be output from the local data.
6. The method of claim 5, further comprising, after determining the identification information representing the relative position according to the relative position of the local data in the multimedia data, and before adjusting the output position of the multimedia data to the local data according to the determined identification information:
generating prompt information according to the determined at least two pieces of identification information, and displaying the prompt information;
receiving feedback information, wherein the feedback information indicates that the user has selected a first relative position from the displayed prompt information;
and determining, by analyzing the feedback information, that the local data corresponding to the first relative position is the local data to be output.
7. The method of claim 1, wherein obtaining the input information in the first format comprises:
acquiring voice information, wherein the voice information is voice search information input by a user;
or
acquiring voice information, wherein the voice information is voice search information input by a user;
and converting the voice information into text information according to a preset conversion rule.
8. An electronic device, comprising:
a processor, configured to process multimedia data in a second format according to a preset second processing rule to obtain a local data set in a first format, wherein the local data set comprises at least two ordered pieces of local data, the local data set is obtained by dividing the multimedia data according to content progress, and each piece of local data corresponds to a piece of key information;
a collector, configured to obtain input information in the first format;
wherein the processor is further configured to: process the input information; search the multimedia data in the second format, based on the key information, for local data matching the input information; mark identification information and key information corresponding to the local data on a time axis corresponding to the multimedia data, wherein the first format and the second format are different; and determine the local data, wherein the identification information is used to represent the relative position of the local data in the multimedia data;
wherein the processor is specifically configured to:
search, according to the input information in the first format, the local data set in the first format for local data matching the input information.
9. An electronic device, comprising:
a processing module, configured to process multimedia data in a second format according to a preset second processing rule to obtain a local data set in a first format, wherein the local data set comprises at least two ordered pieces of local data, the local data set is obtained by dividing the multimedia data according to content progress, and each piece of local data corresponds to a piece of key information;
an acquisition module, configured to obtain input information in the first format;
wherein the processing module is further configured to: process the input information; search the multimedia data in the second format, based on the key information, for local data matching the input information; mark identification information and key information corresponding to the local data on a time axis corresponding to the multimedia data, wherein the first format and the second format are different; and determine the local data, wherein the identification information is used to represent the relative position of the local data in the multimedia data;
wherein the processing module is specifically configured to:
search, according to the input information in the first format, the local data set in the first format for local data matching the input information.
CN201910254420.5A 2019-03-31 2019-03-31 Information processing method and electronic equipment Active CN109977239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910254420.5A CN109977239B (en) 2019-03-31 2019-03-31 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910254420.5A CN109977239B (en) 2019-03-31 2019-03-31 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN109977239A CN109977239A (en) 2019-07-05
CN109977239B true CN109977239B (en) 2023-08-18

Family

ID=67081981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910254420.5A Active CN109977239B (en) 2019-03-31 2019-03-31 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109977239B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853253A (en) * 2009-03-30 2010-10-06 三星电子株式会社 Equipment and method for managing multimedia contents in mobile terminal
CN102207966A (en) * 2011-06-01 2011-10-05 华南理工大学 Video content quick retrieving method based on object tag
KR20130019509A (en) * 2011-08-17 2013-02-27 주식회사에어플러그 Method for displaying section information of contents, contents reproducing apparatus and contents providing system
CN103645836A (en) * 2013-11-15 2014-03-19 联想(北京)有限公司 Information processing method and electronic device
CN104424228A (en) * 2013-08-26 2015-03-18 联想(北京)有限公司 Method for inquiring multimedia data in multimedia file and electronic device
CN104618807A (en) * 2014-03-31 2015-05-13 腾讯科技(北京)有限公司 Multimedia playing method, device and system
CN104731944A (en) * 2015-03-31 2015-06-24 努比亚技术有限公司 Video searching method and device
CN106095804A (en) * 2016-05-30 2016-11-09 维沃移动通信有限公司 The processing method of a kind of video segment, localization method and terminal
CN106658199A (en) * 2016-12-28 2017-05-10 网易传媒科技(北京)有限公司 Video content display method and apparatus
WO2017114388A1 (en) * 2015-12-30 2017-07-06 腾讯科技(深圳)有限公司 Video search method and device
CN107181849A (en) * 2017-04-19 2017-09-19 北京小米移动软件有限公司 The way of recording and device
CN107193841A (en) * 2016-03-15 2017-09-22 北京三星通信技术研究有限公司 Media file accelerates the method and apparatus played, transmit and stored
CN107506385A (en) * 2017-07-25 2017-12-22 努比亚技术有限公司 A kind of video file retrieval method, equipment and computer-readable recording medium
CN108024143A (en) * 2017-11-03 2018-05-11 国政通科技股份有限公司 A kind of intelligent video data handling procedure and device
CN109246472A (en) * 2018-08-01 2019-01-18 平安科技(深圳)有限公司 Video broadcasting method, device, terminal device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677241B2 (en) * 2007-09-10 2014-03-18 Vantrix Corporation Method and system for multimedia messaging service (MMS) to video adaptation
US10372758B2 (en) * 2011-12-22 2019-08-06 Tivo Solutions Inc. User interface for viewing targeted segments of multimedia content based on time-based metadata search criteria
US9066135B2 (en) * 2012-12-18 2015-06-23 Sony Corporation System and method for generating a second screen experience using video subtitle data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Content-Based Video Retrieval Technology for Digital Asset Management in Printing Enterprises; Chen Yujie; China Master's Theses Full-text Database, Information Science and Technology; I138-535 *

Also Published As

Publication number Publication date
CN109977239A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
US9798934B2 (en) Method and apparatus for providing combined-summary in imaging apparatus
CN110970014B (en) Voice conversion, file generation, broadcasting and voice processing method, equipment and medium
US10225625B2 (en) Caption extraction and analysis
EP2978232A1 (en) Method and device for adjusting playback progress of video file
US20080243473A1 (en) Language translation of visual and audio input
KR20120038000A (en) Method and system for determining the topic of a conversation and obtaining and presenting related content
CN1581951A (en) Information processing apparatus and method
CN110781328A (en) Video generation method, system, device and storage medium based on voice recognition
US20100257212A1 (en) Metatagging of captions
JP5296598B2 (en) Voice information extraction device
JP6202815B2 (en) Character recognition device, character recognition method, and character recognition program
CN113035199A (en) Audio processing method, device, equipment and readable storage medium
JP2006279898A (en) Information processing apparatus and its method
CN109977239B (en) Information processing method and electronic equipment
KR101618777B1 (en) A server and method for extracting text after uploading a file to synchronize between video and audio
JP2004289530A (en) Recording and reproducing apparatus
US20140297285A1 (en) Automatic page content reading-aloud method and device thereof
KR101783872B1 (en) Video Search System and Method thereof
JP2020140326A (en) Content generation system and content generation method
CN114842858A (en) Audio processing method and device, electronic equipment and storage medium
JP5033653B2 (en) Video recording / reproducing apparatus and video reproducing apparatus
JP4654438B2 (en) Educational content generation device
JP6433765B2 (en) Spoken dialogue system and spoken dialogue method
CN111627417B (en) Voice playing method and device and electronic equipment
JP2006054517A (en) Information presenting apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant