CN112995705A - Method and device for video processing and electronic equipment - Google Patents

Method and device for video processing and electronic equipment

Info

Publication number
CN112995705A
Authority
CN
China
Prior art keywords
audio
target
video
target audio
user
Prior art date
Legal status
Pending
Application number
CN202110122900.3A
Other languages
Chinese (zh)
Inventor
张林娟
刘永龙
范红娟
张晓娜
张丽丽
Current Assignee
Haier Smart Home Co Ltd
Qingdao Haier Multimedia Co Ltd
Original Assignee
Haier Smart Home Co Ltd
Qingdao Haier Multimedia Co Ltd
Priority date
Filing date
Publication date
Application filed by Haier Smart Home Co Ltd, Qingdao Haier Multimedia Co Ltd filed Critical Haier Smart Home Co Ltd
Priority to CN202110122900.3A
Publication of CN112995705A
Legal status: Pending

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
              • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                • H04N21/232 Content retrieval operation locally within server, e.g. reading video streams from disk arrays
                • H04N21/242 Synchronization processes, e.g. processing of PCR [Program Clock References]
              • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
                  • H04N21/26603 Channel or content management for automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques
            • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
                  • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
                • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
            • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
              • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
                • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The application relates to the technical field of smart household appliances and discloses a method for video processing. The method is applied to an electronic device in which a video library and an audio library are pre-stored: the video library comprises a plurality of videos, each configured with first tag information representing its video type; the audio library comprises a plurality of audios, each configured with second tag information representing its audio type; and the electronic device also stores an association relationship between the first tag information and the second tag information. The method comprises: receiving a video playing request sent by a user, and determining, according to the association relationship, the tag information of the target audio associated with the tag information of the target video; matching, in the audio library, a target audio corresponding to the tag information of the target audio; and playing the target video on the electronic device while synchronously playing the target audio. This scheme meets the user's need for personalized audio matching. The application also discloses an apparatus for video processing and an electronic device.

Description

Method and device for video processing and electronic equipment
Technical Field
The present application relates to the field of smart home appliance technologies, and relates, for example, to a method and an apparatus for video processing and to an electronic device.
Background
Fitness plays a growing role in people's daily lives, and fitness applications are installed on a wide range of electronic products to give users a better experience. In recent years, with the rapid development of video processing technology, providing different fitness content with different background music has become an important step in the video processing pipeline.
In existing fitness videos within fitness applications, the background music is usually a single track played from beginning to end, or the user must select the background music manually, which makes the process of matching music to a video inconvenient. How to conveniently add appropriate background music to a video and improve adaptability during video processing has therefore become an important research direction.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and it is not intended to identify key or critical elements or to delineate the scope of the embodiments; rather, it serves as a prelude to the more detailed description presented later.
The embodiment of the disclosure provides a method and a device for video processing and electronic equipment, so as to provide a more convenient video processing method.
In some embodiments, the method comprises: pre-storing a video library and an audio library, wherein the video library comprises a plurality of videos, each configured with first tag information representing its video type, the audio library comprises a plurality of audios, each configured with second tag information representing its audio type, and the electronic device also stores an association relationship between the first tag information and the second tag information; receiving a video playing request sent by a user that carries the tag information of a target video, and determining the tag information of the target audio associated with the tag information of the target video according to the association relationship between the first tag information and the second tag information; matching, in the audio library, a target audio corresponding to the tag information of the target audio; and playing, on the electronic device, the target video obtained from the video library while synchronously playing the target audio.
In one embodiment, the method comprises: if a plurality of target audios are matched in the audio library according to the tag information of the target audio, determining a first target audio from the plurality of target audios, synchronously playing the first target audio, and acquiring emotion information of the user while the first target audio is playing; and, if the emotion information indicates that the user likes the first target audio, continuing to play the first target audio.
In one embodiment, determining the first target audio from the plurality of target audios comprises: randomly selecting one of the plurality of target audios as the first target audio.
In one embodiment, the method comprises: if audio preference information of the user is pre-stored in the electronic device, determining the playing order of the plurality of target audios according to the audio preference information of the user; and determining the target audio played first in that order as the first target audio.
In one embodiment, the method comprises: if the emotion information indicates that the user does not like the first target audio, stopping playing the first target audio; and determining, by random selection, a second target audio from the target audios other than the first target audio, and playing it.
In one embodiment, the method comprises: if the emotion information indicates that the user does not like the first target audio, stopping playing the first target audio; determining the playing order of the plurality of target audios according to the audio preference information of the user; and determining, according to that order, the target audio played first among the target audios other than the first target audio as the second target audio, and playing it.
In some embodiments, the method comprises: establishing an association relationship between the target video and the second target audio; and synchronously playing the second target audio associated with the target video when the user requests to play the target video again.
In some embodiments, the apparatus comprises: a pre-storing module configured to pre-store a video library and an audio library in the electronic device, wherein the video library comprises a plurality of videos, each configured with first tag information representing its video type, the audio library comprises a plurality of audios, each configured with second tag information representing its audio type, and the electronic device also stores an association relationship between the first tag information and the second tag information; a determining module configured to receive a video playing request sent by a user that carries the tag information of a target video, and to determine the tag information of the target audio associated with the tag information of the target video according to the association relationship between the first tag information and the second tag information; a matching module configured to match, in the audio library, one or more target audios corresponding to the tag information of the target audio; and a playing module configured to play, on the electronic device, the target video obtained from the video library while synchronously playing the one or more target audios.
In some embodiments, the electronic device comprises: a processor and a memory storing program instructions, the processor being configured to, upon execution of the program instructions, perform the method for video processing as previously described.
In some embodiments, the electronic device comprises: the foregoing apparatus for video processing.
The method, apparatus, and electronic device for video processing provided by the embodiments of the present disclosure can achieve the following technical effects: by pre-storing the video library and the audio library in the cloud or on the electronic device, one or more target audios whose tag information is associated with the tag information of the target video can be matched in the audio library according to the user's video playing intention, so that the target audio is played on the electronic device in synchronization with the target video.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting; elements bearing the same reference numerals in the drawings denote like elements.
fig. 1 is a schematic diagram of a method for video processing according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an apparatus for video processing according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, is given with reference to the embodiments, some of which are illustrated in the appended drawings. In the following description, numerous details are set forth for purposes of explanation in order to provide a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices are shown in simplified form to simplify the drawings.
The terms "first," "second," and the like in the description, claims, and drawings of the embodiments of the present disclosure are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances so that the embodiments described herein can be practiced in orders other than those illustrated. Furthermore, the terms "comprising" and "having," and any variations thereof, are intended to cover non-exclusive inclusion.
The term "plurality" means two or more unless otherwise specified.
In the embodiments of the present disclosure, the character "/" indicates that the preceding and following objects are in an "or" relationship. For example, A/B represents: A or B.
The term "and/or" describes an association between objects and indicates that three relationships may exist. For example, A and/or B represents: A alone, B alone, or both A and B.
In practical applications, the electronic device may be a smart television on which the video library and the audio library are pre-stored, with different types of videos and audios stored in each. When the user wants to play a yoga video, the user sends a playing request for the yoga video to the smart television, and the tag information of the target audio can be determined to be, for example, yoga audio, bamboo forest audio, and meditation audio. The plurality of target audios are ranked according to the user's audio preference information, thereby determining a first target audio; once determined, the target video and the first target audio can be played synchronously. If it turns out that the user does not like the first target audio, a second target audio is determined in the audio library according to the user's audio preference information, and the second target audio is played while the target video continues to play.
Fig. 1 is a schematic diagram of a method for video processing according to an embodiment of the present disclosure. With reference to fig. 1, the method comprises the following steps.
S11, a video library and an audio library are pre-stored, wherein the video library comprises a plurality of videos, each configured with first tag information representing its video type, the audio library comprises a plurality of audios, each configured with second tag information representing its audio type, and the electronic device further stores the association relationship between the first tag information and the second tag information.
S12, a video playing request sent by a user and carrying the tag information of a target video is received, and the tag information of the target audio associated with the tag information of the target video is determined according to the association relationship between the first tag information and the second tag information.
S13, the target audio corresponding to the tag information of the target audio is matched in the audio library.
S14, the target video obtained from the video library is played on the electronic device, and the target audio is played synchronously.
In step S11, a video library and an audio library are pre-stored in the electronic device or in the cloud. The video library stores a plurality of videos, each configured with first tag information representing its video type; the audio library stores a plurality of audios, each configured with second tag information representing its audio type. The electronic device also stores the association relationship between the first tag information and the second tag information.
Specifically, the electronic device may be any device with a video playback function, such as a smart television. A video in the video library may be a sports video; in one example, the first tag information characterizing the video type may be yoga, and the second tag information characterizing the audio type may be yoga music. In this scheme, an association relationship between the first tag information and the second tag information can be established, for example between yoga and yoga music, and stored in the electronic device. In an optimized scheme, one piece of first tag information may be associated with several pieces of second tag information: for example, if the audio tag information also includes bamboo forest music, associations can be established between yoga and both yoga music and bamboo forest music. With this scheme, videos and audios can be matched freely according to the association relationship between the first and second tag information, meeting the user's personalized needs.
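The pre-stored libraries and the one-to-many tag association can be sketched as follows. This is an illustrative assumption in Python (the patent does not specify an implementation); all file names and tag values are invented for the example.

```python
# Illustrative sketch of the pre-stored libraries and the tag association
# described above; file names and tag values are invented examples.

# Video library: each video carries first tag information (its video type).
video_library = {
    "yoga_01.mp4": "yoga",
    "running_01.mp4": "running",
}

# Audio library: each audio carries second tag information (its audio type).
audio_library = {
    "calm_flow.mp3": "yoga music",
    "bamboo_breeze.mp3": "bamboo forest music",
    "fast_beat.mp3": "running music",
}

# Association relationship: one first tag may map to several second tags.
tag_association = {
    "yoga": ["yoga music", "bamboo forest music"],
    "running": ["running music"],
}

def audio_tags_for_video_tag(video_tag):
    """Return the second tag information associated with a first tag."""
    return tag_association.get(video_tag, [])
```

A request for a yoga video would thus resolve to the tags yoga music and bamboo forest music before any audio is looked up.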
In step S12, the video playing request sent by the user carries the tag information of the target video, and the tag information of the target audio is determined according to the association relationship between the first tag information and the second tag information.
In this scheme, the user may send the video playing request by operating a remote control device paired with the video playing device; for example, if the video playing device is a television, the user sends the request through key-value information from the remote control. In another example, the user may issue a voice-form video playing request to the video playing device, the request carrying the tag information of the target video; for example, the user says "play the yoga video" within the voice pickup range of the device. With this scheme, the video playing device can accurately capture the user's video playing intention, determine the tag information of the target video from that intention, and then determine the tag information of the target audio according to the association relationship between the first tag information and the second tag information.
In step S13, the target audio is matched in the audio library according to the tag information of the target audio.
In this scheme, the same audio tag information may correspond to multiple audios; for example, if the tag information of the target audio is yoga music, one or more pieces of yoga music may be matched in the audio library as target audios. With this scheme, the target audio can be determined in the audio library from its tag information, which improves the accuracy of the matching process and lets the user pair target audio with target video in a personalized way.
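Step S13 amounts to filtering the audio library by tag; a hypothetical Python sketch (the library contents are invented for illustration):

```python
# Hypothetical sketch of step S13: collect every audio whose second tag
# information equals the target audio tag. Library contents are invented.
audio_library = {
    "calm_flow.mp3": "yoga music",
    "sun_salute.mp3": "yoga music",
    "bamboo_breeze.mp3": "bamboo forest music",
}

def match_target_audios(library, target_tag):
    """Return all audios in the library whose tag matches target_tag."""
    return [name for name, tag in library.items() if tag == target_tag]
```

For the tag yoga music this yields both yoga tracks, illustrating how one tag can match several candidate target audios.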
In step S14, the target video obtained from the video library is played on the electronic device, and the target audio is played synchronously.
With the video processing method provided by the embodiments of the present disclosure, the video library and the audio library are pre-stored in the electronic device or in the cloud, and one or more target audios whose tag information is associated with the tag information of the target video can be matched in the audio library according to the user's video playing intention, so that the target audio is played on the electronic device in synchronization with the target video.
Optionally, a first target audio is determined from the plurality of target audios. In this scheme, if a plurality of target audios are matched in the audio library according to the tag information of the target audio, a first target audio is determined among them and played. Emotion information of the user toward the first target audio being played is then acquired, and when the emotion information shows that the user likes it, the first target audio continues to play.
In this scheme, after the video playing device starts playing the first target audio, the user's emotion information can be acquired by collecting the user's voice or voiceprint information. For example, after the emotion information is collected, if analysis shows that it represents fondness for the first target audio, playback continues. The acquired emotion information may be classified, for instance by recognizing the user's tone of voice; emotions such as happiness can be taken to represent that the user likes the first target audio. In a preferred scheme, if the user likes the first target audio and the target video still has remaining duration after the first target audio finishes, the first target audio is played in a loop. For example, if the total playing time of the target video is three minutes and, by the time the first target audio has played through once, one minute of the target video remains, the first target audio is looped. With this scheme, the first target audio can be played according to the user's emotion information, meeting the user's personalized needs.
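The looping example above (a three-minute video with one minute left after the audio plays once) amounts to covering the video's duration with whole plays of the liked audio. A sketch, assuming durations are given in seconds (the patent does not state units or an algorithm):

```python
import math

def loops_needed(video_seconds, audio_seconds):
    """How many times a liked first target audio must play, looping,
    so that audio covers the whole target video."""
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return math.ceil(video_seconds / audio_seconds)
```

In the example from the text, a 180-second video with a 120-second audio leaves 60 seconds after one play, so the audio plays twice in total: `loops_needed(180, 120)` returns 2.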
Optionally, in order to determine the first target audio from the multiple target audios, one target audio may be randomly selected from them and determined as the first target audio.
In this scheme, if a plurality of target audios are determined according to the tag information of the target audio, one of them may be chosen at random as the first target audio. This provides personalized video-audio pairings for different users, meets the matching needs of multiple users, and increases the flexibility of the scheme.
Optionally, when audio preference information of the user is pre-stored in the electronic device, the first target audio is determined as follows: the playing order of the multiple target audios is determined according to the user's audio preference information, and the target audio played first in that order is determined as the first target audio.
In this scheme, the user's audio preference information may be an ordering of the audio tags the user likes. For example, if the target audios associated with a yoga video include bamboo forest music, meditation music, and running water music, and the user's audio preference order is bamboo forest music > meditation music > running water music, then bamboo forest music can be determined as the first target audio. This scheme matches videos and audios according to the user's audio preferences, meeting the user's tastes and improving satisfaction.
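The preference-ordered selection just described can be sketched as a sort by the user's stored tag ranking; this is a hypothetical implementation in which any candidate absent from the preference list is placed last:

```python
def first_target_audio(candidates, preference):
    """Pick the first target audio: rank the matched target audios by the
    user's preference order; anything absent from the list goes last."""
    def rank(tag):
        return preference.index(tag) if tag in preference else len(preference)
    return min(candidates, key=rank)
```

With the example from the text, candidates matched for a yoga video and the preference order bamboo forest music > meditation music > running water music yield bamboo forest music as the first target audio.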
Optionally, in order to replace the audio when the user does not like the first target audio: if the acquired emotion information indicates that the user does not like the first target audio, playing of the first target audio is stopped, a second target audio is determined by random selection from the target audios other than the first target audio, and the second target audio is played.
In this scheme, if the acquired emotion information shows that the user does not like the first target audio, one of the other target audios can be played at random. This provides personalized video-audio pairings for different users, meets the matching needs of multiple users, and increases the flexibility of the scheme.
Optionally, in order to replace the audio when the user does not like the first target audio and the user's audio preference information is pre-stored in the electronic device: if the acquired emotion information indicates that the user does not like the first target audio, playing of the first target audio is stopped; the playing order of the multiple target audios is determined according to the user's audio preference information; and, according to that order, the target audio played first among the target audios other than the first target audio is determined as the second target audio and played.
In this scheme, the second target audio may be determined in at least two ways:
In the first way, if the user does not like the first target audio and the user's audio preference is bamboo forest music > running water music, the playing order of the remaining target audios can be determined from this preference information, and the target audio ranked second among the bamboo forest music is determined as the second target audio.
In the second way, if the user does not like the first target audio, whose tag information is bamboo forest music, and the user's audio preference is bamboo forest music > running water music > raindrop music, then, after bamboo forest music is excluded, the first piece of running water music can be determined as the second target audio according to the playing order.
With this scheme, the second target audio can be determined in various ways. In an optimized scheme, the user's emotion information toward the second target audio can be acquired in turn: the second target audio is looped if the user likes it, or a third target audio is determined from the user's audio preference information if the user does not, and so on. Videos and audios can thus be matched according to the user's audio preferences, meeting the user's tastes and improving satisfaction.
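Both ways of choosing a replacement (random fallback, or preference-ordered selection after excluding the disliked first target audio) can be sketched in one function; an illustrative assumption, not the patent's own implementation:

```python
import random

def second_target_audio(candidates, disliked_first, preference=None, rng=None):
    """Choose a replacement after the user dislikes the first target audio.

    With preference info, take the best-ranked remaining audio (the second
    way in the text); otherwise pick a remaining audio at random (the first
    way). Returns None when no other candidate exists."""
    remaining = [a for a in candidates if a != disliked_first]
    if not remaining:
        return None
    if preference:
        def rank(tag):
            return preference.index(tag) if tag in preference else len(preference)
        return min(remaining, key=rank)
    return (rng or random.Random()).choice(remaining)
```

Passing an explicit `rng` keeps the random path reproducible for testing; a real device would simply use the default generator.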
Optionally, the target audio is obtained directly when the user plays the target video again. In this scheme, when it is determined that the user intends to play the target video again, the target video is played and the second target audio associated with it is played synchronously.
In this scheme, the user's video playing intention can be obtained through key-value information or voice information sent by the user. For example, the user may select a video to play through the remote control keys of the playing device; if the selected video is the target video, it is determined that the user intends to play the target video again. In an optimized scheme, a voice instruction issued by the user can also be obtained: when a voice instruction to play the target video is received, the user is determined to have that playing intention. The second target audio is then determined through the pre-stored association relationship between the target video and the second target audio, and played. This scheme speeds up the matching of the target video and the second target audio, matches videos and audios according to the user's preferences, meets the user's tastes, and improves satisfaction.
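Reusing the second target audio on a replay request presupposes storing the video-to-audio association once it is settled. A minimal sketch, assuming an in-memory store (a real device would persist this alongside the libraries):

```python
class PlaybackAssociations:
    """In-memory record of which audio was finally paired with each video,
    so that a repeat playing request reuses it without re-matching."""

    def __init__(self):
        self._audio_by_video = {}

    def remember(self, video, audio):
        # Establish the association between target video and second target audio.
        self._audio_by_video[video] = audio

    def audio_for(self, video):
        # On a replay request, return the associated audio, or None if the
        # video has never been paired and normal matching must run.
        return self._audio_by_video.get(video)
```

When `audio_for` returns None, the device falls back to the tag-matching flow of steps S12 and S13; otherwise the stored audio plays immediately alongside the video.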
Fig. 2 is a schematic diagram of an apparatus for video processing according to an embodiment of the present disclosure. With reference to Fig. 2, the apparatus includes a pre-storing module 21, a determining module 22, a matching module 23, and a playing module 24. The pre-storing module 21 is configured to pre-store a video library and an audio library in the electronic device, where the video library includes a plurality of videos, each configured with first tag information representing its video type, and the audio library includes a plurality of audios, each configured with second tag information representing its audio type; an association relationship between the first tag information and the second tag information is also stored in the electronic device. The determining module 22 is configured to receive a video playing request sent by a user carrying the tag information of a target video, and to determine the tag information of the target audio associated with it according to the association relationship between the first tag information and the second tag information. The matching module 23 is configured to match, in the audio library, one or more target audios corresponding to the tag information of the target audio. The playing module 24 is configured to play the target video retrieved from the video library on the electronic device and to synchronously play the one or more target audios.
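The determining and matching steps of this pipeline can be sketched in a few lines, under assumed data structures: a mapping from video ids to first tag information, an audio library keyed by second tag information, and an association table between the two tag spaces. The concrete tags and ids below are invented for illustration.

```python
# Minimal sketch of the Fig. 2 determine-then-match pipeline.
# Data shapes and sample values are assumptions, not from the patent.

VIDEO_TAGS = {"v_cooking": "food"}           # video id -> first tag information
AUDIO_LIBRARY = {                            # second tag information -> audio ids
    "light_music": ["a1", "a2"],
    "rock": ["a3"],
}
TAG_ASSOCIATION = {"food": "light_music"}    # first tag -> associated second tag

def match_target_audios(video_id):
    """Determine the associated audio tag, then match audios in the library."""
    first_tag = VIDEO_TAGS[video_id]                 # determining module
    second_tag = TAG_ASSOCIATION[first_tag]
    return AUDIO_LIBRARY.get(second_tag, [])         # matching module
```

A request for `"v_cooking"` would thus yield the audios tagged `"light_music"` as the one or more target audios to play synchronously.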
With the apparatus for video processing provided by the embodiments of the present disclosure, the video library and the audio library are pre-stored in the electronic device or in a cloud server. According to the user's video playing intention and the tag information of the target video, the tag information of the associated target audio can be determined, and one or more target audios corresponding to it can be matched in the audio library, so that the target audio can be played synchronously on the electronic device.
Fig. 3 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure. With reference to Fig. 3, the electronic device includes a processor (processor) 100 and a memory (memory) 101. Optionally, the device may also include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call logic instructions in the memory 101 to perform the method of the above-described embodiments.
In addition, the logic instructions in the memory 101 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
The memory 101, as a computer-readable storage medium, may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By executing the program instructions/modules stored in the memory 101, the processor 100 executes functional applications and performs data processing, i.e., implements the method for video processing in the above-described embodiments.
The memory 101 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
The embodiment of the disclosure provides an electronic device, which includes the above device for video processing.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for video processing.
Embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for video processing.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code; it may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising", when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method, or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same and similar parts of the respective embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, if they correspond to the method sections disclosed herein, reference may be made to the descriptions of those method sections.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A method for video processing, applied to an electronic device, the method comprising:
pre-storing a video library and an audio library, wherein the video library comprises a plurality of videos, each video is configured with first tag information representing the type of the video, the audio library comprises a plurality of audios, each audio is configured with second tag information representing the type of the audio, and the electronic equipment further stores the association relationship between the first tag information and the second tag information;
receiving a video playing request which is sent by a user and carries the label information of a target video, and determining the label information of the target audio which is associated with the label information of the target video according to the association relationship between the first label information and the second label information;
matching a target audio corresponding to the label information of the target audio in the audio library;
and playing the target video acquired from the video library on the electronic equipment, and synchronously playing the target audio.
2. The method of claim 1, wherein if a plurality of target audios are matched in the audio library according to the tag information of the target audio, the method further comprises:
after a first target audio is determined from the multiple target audios and is synchronously played, obtaining emotion information of the user listening to the first target audio;
and if the emotion information indicates that the user likes the first target audio, continuing to play the first target audio.
3. The method of claim 2, wherein the determining the first target audio among the plurality of target audio comprises:
randomly selecting one target audio from the plurality of target audios and determining it as the first target audio.
4. The method of claim 2, wherein if the electronic device further pre-stores audio preference information of the user, the determining the first target audio from the plurality of target audios comprises:
determining the playing sequence of the target audios according to the audio preference information of the user;
and determining the target audio played firstly as the first target audio according to the playing sequence.
5. The method of claim 2, further comprising:
if the emotion information indicates that the user does not like the first target audio, stopping playing the first target audio;
and determining, by random selection, a second target audio from the audios other than the first target audio among the plurality of target audios, and playing the second target audio.
6. The method of claim 2, wherein if the electronic device further pre-stores audio preference information of the user, the method further comprises:
if the emotion information indicates that the user does not like the first target audio, stopping playing the first target audio;
determining the playing sequence of the target audios according to the audio preference information of the user;
and according to the playing sequence, determining the target audio played first among the audios other than the first target audio in the plurality of target audios as a second target audio, and playing the second target audio.
7. The method of claim 6, further comprising:
establishing an incidence relation between the target video and the second target audio;
and synchronously playing the second target audio associated with the target video when the user requests to play the target video again.
8. An apparatus for video processing, applied to an electronic device, comprising:
a pre-storing module configured to pre-store a video library and an audio library, wherein the video library comprises a plurality of videos, each video is configured with first tag information representing the video type, the audio library comprises a plurality of audios, each audio is configured with second tag information representing the audio type, and the electronic device also stores the association relationship between the first tag information and the second tag information;
the determining module is configured to receive a video playing request which is sent by a user and carries the tag information of the target video, and determine the tag information of the target audio which is associated with the tag information of the target video according to the association relation between the first tag information and the second tag information;
the matching module is configured to match target audio corresponding to the tag information of the target audio in the audio library;
and the playing module is configured to play the target video acquired from the video library on the electronic device and synchronously play the target audio.
9. An electronic device comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method for video processing according to any of claims 1 to 7 when executing the program instructions.
10. An electronic device, characterized in that it comprises an apparatus for video processing according to claim 8.
CN202110122900.3A 2021-01-29 2021-01-29 Method and device for video processing and electronic equipment Pending CN112995705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110122900.3A CN112995705A (en) 2021-01-29 2021-01-29 Method and device for video processing and electronic equipment


Publications (1)

Publication Number Publication Date
CN112995705A true CN112995705A (en) 2021-06-18

Family

ID=76346659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110122900.3A Pending CN112995705A (en) 2021-01-29 2021-01-29 Method and device for video processing and electronic equipment

Country Status (1)

Country Link
CN (1) CN112995705A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390342A (en) * 2021-12-10 2022-04-22 阿里巴巴(中国)有限公司 Video dubbing method, device, equipment and medium
CN114390342B (en) * 2021-12-10 2023-08-29 阿里巴巴(中国)有限公司 Video music distribution method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination