WO2016161899A1 - Multimedia information processing method, device and computer storage medium - Google Patents

Multimedia information processing method, device and computer storage medium

Info

Publication number
WO2016161899A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
multimedia information
frame image
key frame
determining
Prior art date
Application number
PCT/CN2016/077243
Other languages
English (en)
French (fr)
Inventor
周健
罗少华
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2016161899A1 publication Critical patent/WO2016161899A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343 - Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234309 - Reformatting by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2668 - Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 - Monomedia components thereof
    • H04N 21/812 - Monomedia components thereof involving advertisement data

Definitions

  • The present invention relates to information processing technology, and in particular to a multimedia information processing method, device, and computer storage medium.
  • Here, the first multimedia information is, for example, a movie or TV drama video, and the second multimedia information is, for example, an advertisement.
  • Embodiments of the present invention provide a multimedia information processing method, device, and computer storage medium that can automatically insert second multimedia information into first multimedia information, that is, automatically insert an advertisement.
  • the embodiment of the invention provides a multimedia information processing method, and the method includes:
  • the embodiment of the present invention further provides a device, where the device includes: a first acquiring unit, a first analyzing unit, a second acquiring unit, a second analyzing unit, a matching unit, and a determining unit;
  • the first acquiring unit is configured to acquire the first multimedia information
  • the first analyzing unit is configured to analyze the first multimedia information acquired by the first acquiring unit, obtain a key frame image of the first multimedia information, and identify the key frame image to obtain a first parameter of the key frame image; the first parameter characterizes a display attribute parameter of the key frame image;
  • the second obtaining unit is configured to acquire second multimedia information
  • the second analyzing unit is configured to analyze the second multimedia information acquired by the second acquiring unit to obtain a first frame image of the second multimedia information, and to identify the first frame image to obtain a second parameter of the first frame image; the second parameter characterizes a display attribute parameter of the first frame image;
  • the matching unit is configured to determine whether the first parameter obtained by the first analyzing unit and the second parameter obtained by the second analyzing unit satisfy a predetermined condition;
  • the determining unit is configured to, when the matching unit determines that the first parameter and the second parameter satisfy the predetermined condition, determine a location of the key frame image in the first multimedia information and determine an insertion point of the second multimedia information according to the location, so as to insert the second multimedia information at the insertion point.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are configured to execute the multimedia information processing method according to the embodiment of the invention.
  • the embodiment of the present invention provides a multimedia information processing method, a device, and a computer storage medium.
  • First multimedia information (that is, a video) is acquired and analyzed to obtain a key frame image of the first multimedia information; the key frame image is identified to obtain a first parameter of the key frame image, the first parameter characterizing a display attribute parameter of the key frame image. Second multimedia information (that is, creative material) is acquired and analyzed to obtain a first frame image of the second multimedia information; the first frame image is identified to obtain a second parameter of the first frame image, the second parameter characterizing a display attribute parameter of the first frame image. It is then determined whether the first parameter and the second parameter satisfy a predetermined condition; when they do, the location of the key frame image in the first multimedia information is determined, and an insertion point of the second multimedia information is determined according to the location, so that the second multimedia information is inserted at the insertion point.
  • In this way, by identifying the key frame image of the first multimedia information and the first frame image of the second multimedia information, the position of a key frame image whose first parameter matches the second parameter of the first frame image is used as the insertion point of the second multimedia information, thereby realizing automatic, controlled insertion of the second multimedia information and greatly reducing manpower cost.
  • FIG. 1 is a schematic flowchart of a multimedia information processing method according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of an application scenario of a multimedia information processing method according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a multimedia information processing method according to Embodiment 2 of the present invention.
  • FIG. 4 is a schematic structural diagram of a device according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a hardware structure of a device according to an embodiment of the present invention.
  • the embodiment of the invention provides a multimedia information processing method, and the multimedia information processing method is applied to a device.
  • FIG. 1 is a schematic flowchart of a multimedia information processing method according to Embodiment 1 of the present invention; as shown in FIG. 1, the multimedia information processing method includes:
  • Step 101 Acquire and analyze the first multimedia information, and obtain a key frame image of the first multimedia information.
  • The multimedia information processing method in the embodiment of the present invention is applied to a device, where the device may be a server or a server cluster to which a multimedia platform belongs, the multimedia platform being, for example, Tencent Video.
  • Acquiring the first multimedia information, analyzing the first multimedia information, and obtaining the key frame image of the first multimedia information includes: the device acquires the first multimedia information and analyzes the first multimedia information to obtain a key frame image of the first multimedia information.
  • the first multimedia information specifically refers to a video
  • the video acquired by the device is usually video data compressed by a specific compression method.
  • the compressed video data is usually IPB frame data.
  • the IPB frame data includes an I frame, a P frame, and a B frame.
  • The I frame is a key frame: the picture in an I frame is completely retained, that is, only the data of the current frame is needed for decoding.
  • The P frame is a forward predictive coded frame, representing the difference between the current frame and the previous key frame (or previous frame); the image data of the previous key frame (or previous frame) is required during decoding to generate the current frame image, that is, a P frame does not carry complete image data.
  • The B frame is a bidirectional predictive coded frame, representing the difference between the current frame and both the previous and subsequent frames; decoding requires the image data of the previous frame and the subsequent frame to generate the current frame image.
  • The specific compression mode may be any compression mode (that is, compression standard) whose compressed output contains key frames (also referred to as reference frames), such as the Moving Picture Experts Group (MPEG) compression standards.
  • Analyzing the first multimedia information to obtain a key frame image of the first multimedia information is: analyzing the first multimedia information to obtain the I frame images in the first multimedia information.
  • That is, the first multimedia information characterized by IPB frame data is analyzed; because the I frame picture in the IPB frame data is completely retained while the P frames and B frames are not complete images, the I frame pictures may be extracted from the IPB frame data, and at least one I frame picture is extracted.
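  • As an illustration of this key frame extraction, the following Python sketch (assuming the ffprobe command-line tool from FFmpeg is installed; it is not part of the patent) lists each frame's picture type and keeps the I frames:

```python
import subprocess

def key_frame_indices(video_path):
    """Return the indices of I frames (key frames) in display order.

    A minimal sketch assuming the ffprobe CLI is available: it prints the
    picture type of every frame of the first video stream and keeps 'I'.
    """
    cmd = [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_frames", "-show_entries", "frame=pict_type",
        "-of", "csv=p=0", video_path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    types = [line.strip() for line in out.splitlines() if line.strip()]
    return [i for i, t in enumerate(types) if t == "I"]
```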
  • Step 102 Identify the key frame image to obtain a first parameter of the key frame image; the first parameter represents a display attribute parameter of the key frame image.
  • the first parameter may characterize a color parameter of the key frame image; the color parameter may be characterized by a red, green, and blue (RGB) color system.
  • The device identifies the RGB value of each pixel in the key frame image, determines the color system of each pixel in an RGB color system table based on its RGB value, and counts the number of pixels in the key frame image belonging to the same color system; the color system with the largest number of pixels (or a color system whose pixel count is greater than a first preset threshold) is used as the color parameter of the key frame image.
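  • A minimal Python sketch of this color-parameter statistic follows; it substitutes a uniform quantisation of RGB values for the RGB color system table, which the text does not specify, so the binning is an assumption:

```python
import numpy as np
from PIL import Image

def dominant_color_bin(image_path, bins_per_channel=8):
    """Return a representative RGB value for the most common coarse colour bin,
    plus its pixel count. The uniform binning stands in for the RGB colour
    system table mentioned in the text (an assumption of this sketch)."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"))
    step = 256 // bins_per_channel
    flat = (rgb // step).reshape(-1, 3)           # coarse colour index per pixel
    bins, counts = np.unique(flat, axis=0, return_counts=True)
    top = counts.argmax()                         # bin containing the most pixels
    centre = tuple(int(c) * step + step // 2 for c in bins[top])
    return centre, int(counts[top])
```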
  • the first parameter may also characterize a grayscale parameter of the keyframe image, the grayscale parameter being characterization by a grayscale value.
  • the device processes the key frame image in a preset processing manner, and converts the key frame image into a grayscale image; wherein the preset processing mode may be an image binarization processing mode.
  • The device identifies the gray value of each pixel in the grayscale image (the processed key frame image) and counts the number of pixels at the same gray level (where pixels at the same gray level may be pixels having the same gray value, or pixels falling within the same gray value range); the gray level with the largest number of pixels (or a gray level whose pixel count is greater than a second preset threshold) is used as the grayscale parameter of the key frame image.
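  • A corresponding sketch for the grayscale parameter, with an illustrative gray-range width (the text only speaks of "the same gray value or the same gray value range"):

```python
import numpy as np
from PIL import Image

def dominant_gray_level(image_path, bin_width=16):
    """Return the centre of the most populated gray-value range and its pixel
    count; bin_width is an illustrative choice, not a value from the patent."""
    gray = np.asarray(Image.open(image_path).convert("L"))   # grayscale conversion
    hist, edges = np.histogram(gray, bins=range(0, 257, bin_width))
    top = hist.argmax()                                      # most populated range
    return int((edges[top] + edges[top + 1]) // 2), int(hist[top])
```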
  • Step 103 Acquire and analyze the second multimedia information to obtain a first frame image of the second multimedia information.
  • the second multimedia information specifically refers to creative material data
  • the creative material data may be in a video, a graphic interchange format (GIF, Graphics Interchange Format) or a picture format.
  • The creative material data is analyzed to obtain its first frame image: when the creative material data is video data, the first frame image of the video data is obtained; when the creative material data is GIF data, the first picture of the GIF is obtained; when the creative material data is a picture, the picture itself is obtained directly.
  • Step 104 Identify the first frame image to obtain a second parameter of the first frame image; the second parameter represents a display attribute parameter of the first frame image.
  • the second parameter characterizes a color parameter of the first frame image; the color parameter may be characterized by a red, green, and blue (RGB) color system.
  • The device identifies the RGB value of each pixel in the first frame image, determines the color system of each pixel in the RGB color system table based on its RGB value, and counts the number of pixels in the first frame image belonging to the same color system; the color system with the largest number of pixels (or a color system whose pixel count is greater than the first preset threshold) is used as the color parameter of the first frame image.
  • The second parameter may also characterize a grayscale parameter of the first frame image, the grayscale parameter being characterized by a gray value.
  • the device processes the first frame image according to a preset processing manner, and converts the first frame image into a grayscale image; wherein the preset processing mode may be an image binarization processing mode.
  • The device identifies the gray value of each pixel in the grayscale image (the processed first frame image) and counts the number of pixels at the same gray level (where pixels at the same gray level may be pixels having the same gray value, or pixels falling within the same gray value range); the gray level with the largest number of pixels (or a gray level whose pixel count is greater than the second preset threshold) is used as the grayscale parameter of the first frame image.
  • The first parameter obtained in step 102 and the second parameter obtained in this step are characterized by parameters of the same attribute: when the first parameter is characterized by the color parameter of the key frame image, the second parameter is likewise characterized by the color parameter of the first frame image; correspondingly, when the first parameter is characterized by the grayscale parameter of the key frame image, the second parameter is likewise characterized by the grayscale parameter of the first frame image.
  • Step 105 Determine whether the first parameter and the second parameter satisfy a predetermined condition; when the first parameter and the second parameter satisfy the predetermined condition, determine the location of the key frame image in the first multimedia information, and determine an insertion point of the second multimedia information according to the location, so as to insert the second multimedia information at the insertion point.
  • the determining whether the first parameter and the second parameter satisfy a predetermined condition comprises: determining whether a color parameter or a gray parameter of the key frame image and the first frame image match; when the key When the frame image and the color parameter or the grayscale parameter of the first frame image match, it is determined that the first parameter and the second parameter satisfy a predetermined condition.
  • the first parameter of the key frame image and the second parameter of the first frame image are characterized by parameters of the same attribute.
  • When the first parameter and the second parameter are both characterized by a color parameter, the color parameter may be a color system. Determining whether the color parameters of the key frame image and the first frame image match is then: determining whether the color system of the key frame image and the color system of the first frame image are identical, and when they are identical, determining that the color parameters of the key frame image and the first frame image match; or determining whether the distance between the color system of the key frame image and the color system of the first frame image in a preset color system table is less than a second threshold, and when that distance is less than the second threshold, determining that the color parameters of the key frame image and the first frame image match (for example, the second threshold is 2, the color system number of the key frame image is 21 with corresponding RGB values 240, 156, 66, and the color system number of the first frame image is 22 with corresponding RGB values 245, 177, 109).
  • When the first parameter and the second parameter are both characterized by a gray parameter, the gray parameter may be a gray value. Determining whether the grayscale parameters of the key frame image and the first frame image match is then: determining whether the gray value of the key frame image and the gray value of the first frame image are identical, and when they are identical, determining that the grayscale parameters of the key frame image and the first frame image match; or determining whether the difference between the gray value of the key frame image and the gray value of the first frame image is less than a third threshold, and when it is less than the third threshold, determining that the grayscale parameters of the key frame image and the first frame image match (for example, the third threshold is 20, the gray value of the key frame image is 150, and the gray value of the first frame image is 165).
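  • The predetermined condition can be summarised as a small predicate; the threshold defaults below are the ones used in the examples above:

```python
def parameters_match(first_param, second_param, kind="gray",
                     color_threshold=2, gray_threshold=20):
    """Predetermined condition: the key frame image's parameter and the first
    frame image's parameter are identical, or differ by less than a threshold
    (2 for colour-system numbers, 20 for gray values, as in the examples)."""
    threshold = color_threshold if kind == "color" else gray_threshold
    return first_param == second_param or abs(first_param - second_param) < threshold
```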
  • The first multimedia information may include N key frame images, N being a positive integer. When the first parameters of M key frame images (M being a positive integer less than N) match the second parameter, insertion points of the second multimedia information are selected, according to a preset rule, at the locations of P of those key frame images, where P is a positive integer less than M.
  • The method further includes: acquiring a fifth parameter associated with the first multimedia information, where the fifth parameter includes at least one of the following parameters: the number of second multimedia information items allowed to be inserted, the time interval between second multimedia information items allowed to be inserted, and the time range of the first multimedia information in which insertion of second multimedia information is not allowed;
  • correspondingly, determining the insertion point of the second multimedia information according to the location includes: determining the insertion point of the second multimedia information in the first multimedia information according to the location of the key frame image in the first multimedia information and the fifth parameter.
  • Specifically, based on at least one of the number of second multimedia information items allowed to be inserted, the time interval between second multimedia information items allowed to be inserted, and the time range in which insertion is not allowed, the device determines the number of items allowed to be inserted and/or the time range of the first multimedia information in which insertion is allowed, and selects the insertion points of the second multimedia information at the locations of P key frame images; here, P is not greater than the number of items allowed to be inserted, and the locations of the P key frame images fall within the time range of the first multimedia information in which insertion of second multimedia information is allowed.
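  • A greedy sketch of applying the fifth parameter when choosing insertion points; the numeric defaults are illustrative, not values taken from the patent:

```python
def choose_insertion_points(matched_times, duration, max_inserts=3,
                            min_interval=300.0, head_exclude=60.0, tail_exclude=120.0):
    """Pick at most max_inserts insertion points from the timestamps (seconds)
    of key frames whose first parameter matched, honouring a minimum spacing
    and excluded head/tail ranges. All numeric defaults are illustrative."""
    points = []
    for t in sorted(matched_times):
        if t < head_exclude or t > duration - tail_exclude:
            continue                              # inside a range where insertion is not allowed
        if points and t - points[-1] < min_interval:
            continue                              # too close to the previous insertion point
        points.append(t)
        if len(points) == max_inserts:            # number of items allowed to be inserted
            break
    return points
```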
  • When acquiring each piece of second multimedia information (that is, creative material data), the device identifies the first frame image of each piece of second multimedia information and obtains the second parameter of that first frame image; the second parameter may be characterized by a color parameter (such as a color system) or a gray parameter (such as a gray value), and the second parameters of the pieces of second multimedia information are stored in a database grouped by the same color parameter category or the same grayscale parameter category.
  • After acquiring the first multimedia information, the device identifies a key frame image of the first multimedia information (that is, an I frame image), obtains the first parameter of the key frame image, selects the category set that matches the first parameter, and selects from that set a piece of second multimedia information to be inserted at the location of the key frame image in the first multimedia information (where the location of the key frame image in the first multimedia information may be determined based on the fifth parameter acquired in advance).
  • In this way, by identifying the key frame image of the first multimedia information and the first frame image of the second multimedia information, the location of a key frame image whose first parameter matches the second parameter of the first frame image is used as the insertion point of the second multimedia information. On the one hand, this realizes automatic, controlled insertion of the second multimedia information and greatly reduces manpower cost; on the other hand, the first parameter of the key frame image matching the second parameter of the first frame image means that the key frame image and the first frame image are the same or similar in color or gray level, so that when the second multimedia information is inserted at the location of the key frame image, the second multimedia information viewed by the user blends well with the first multimedia information in display effect, the user does not find the inserted second multimedia information abrupt while viewing the first multimedia information, and the user's visual experience is greatly improved.
  • The multimedia information processing method in the embodiment of the present invention is applicable to the following scenario: user 1 uploads first multimedia information (that is, a video) through a first terminal 11 and sets a third parameter during uploading (the third parameter includes at least: the advertisement types the first multimedia information allows to be inserted and/or the channel type in which the first multimedia information is located); the server 3 acquires the first multimedia information and the third parameter, and stores the third parameter as a tag information parameter associated with the first multimedia information.
  • User 2 uploads the second multimedia information (that is, the creative material) through a second terminal 21 and sets a fourth parameter during uploading (the fourth parameter includes at least: the channel types on which the second multimedia information is allowed to be served and/or the type of the second multimedia information); the server 3 acquires the second multimedia information and the fourth parameter, and stores the fourth parameter as a tag information parameter associated with the second multimedia information.
  • The server 3 may then determine an insertion point in the first multimedia information according to the multimedia information processing method described in Embodiment 1, and insert the second multimedia information into the first multimedia information at that insertion point.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are configured to execute the multimedia information processing method according to the embodiment of the invention.
  • an embodiment of the present invention further provides a multimedia information processing method.
  • FIG. 3 is a schematic flowchart of a multimedia information processing method according to Embodiment 2 of the present invention; as shown in FIG. 3, the multimedia information processing method includes:
  • Step 301 Acquire first multimedia information and a third parameter; the third parameter represents a tag information parameter associated with the first multimedia information.
  • The multimedia information processing method in the embodiment of the present invention is applied to a device, where the device may be a server or a server cluster to which a multimedia platform belongs, the multimedia platform being, for example, Tencent Video.
  • Acquiring the first multimedia information, analyzing the first multimedia information, and obtaining the key frame image of the first multimedia information includes: the device acquires the first multimedia information and analyzes the first multimedia information to obtain a key frame image of the first multimedia information.
  • the first multimedia information specifically refers to a video
  • the video acquired by the device is usually video data compressed by a specific compression method.
  • User 1 uploads the first multimedia information to the device through the first terminal and generally selects a channel type based on the type of the first multimedia information; for example, when the first multimedia information is a movie, the movie channel is usually selected for uploading, and when the first multimedia information is a television drama, the television drama channel is usually selected. The channel type in which the first multimedia information is located in this embodiment is the channel type, associated with the type of the first multimedia information, that the user selected and the device acquired. And/or, when user 1 uploads the first multimedia information through the first terminal, the advertisement types allowed to be inserted generally need to be set, such as personal care products, medicine and health care products, food, and so on.
  • The advertisement types allowed to be inserted may be preset as n types for user 1 to select from, where n is a positive integer; when user 1 selects the advertisement types allowed to be inserted, the device obtains them according to user 1's operation on the first terminal.
  • Step 302 Acquire second multimedia information and a fourth parameter; the fourth parameter represents a tag information parameter associated with the second multimedia information.
  • the second multimedia information specifically refers to creative material data
  • the creative material data may be in a video, GIF or image format.
  • the user 2 uploads the second multimedia information to the device through the second terminal, that is, uploads the creative material to the device.
  • During uploading, the tag information parameter associated with the second multimedia information needs to be set, where the tag information parameter includes the channel types on which the second multimedia information is allowed to be served and/or the type of the second multimedia information; the channel types on which the second multimedia information is allowed to be served are determined by the channels included in the multimedia platform carried by the device.
  • The type of the second multimedia information may be preset as n types for user 2 to select from, such as personal care products, medicine and health care products, food, and so on.
  • After user 2 selects the channel types on which the second multimedia information is allowed to be served and/or the type of the second multimedia information, the device obtains them according to user 2's operation on the second terminal.
  • Step 303 Determine whether the third parameter of the first multimedia information and the fourth parameter of the second multimedia information match.
  • Step 304 After determining that the third parameter of the first multimedia information and the fourth parameter of the second multimedia information match, acquire the fifth parameter associated with the first multimedia information.
  • The fifth parameter includes at least one of the following parameters: the number of second multimedia information items allowed to be inserted, the time interval between second multimedia information items allowed to be inserted, and the time range of the first multimedia information in which insertion of second multimedia information is not allowed.
  • The scenario to which step 303 and step 304 apply generally involves a large amount of first multimedia information and a large amount of second multimedia information, that is, massive videos and massive creative material. The device matches the third parameter of the first multimedia information (that is, the advertisement types the first multimedia information allows to be inserted and/or the channel type in which the first multimedia information is located) against the fourth parameter of the second multimedia information (that is, the channel types on which the second multimedia information is allowed to be served and/or the type of the second multimedia information), and selects the second multimedia information matching the third parameter of the first multimedia information as the second multimedia information to be inserted; this filters the massive creative material and enables targeted delivery of advertisements.
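  • A sketch of this tag matching; the dictionary keys are illustrative placeholders for the third and fourth parameters, not names from the patent:

```python
def tags_match(third_param, fourth_param):
    """Step 303 sketch: the video's tag (its channel, the ad types it allows)
    is matched against the creative's tag (channels it may be served on, its
    own type). Field names are illustrative assumptions."""
    channel_ok = third_param["channel"] in fourth_param["allowed_channels"]
    type_ok = fourth_param["creative_type"] in third_param["allowed_ad_types"]
    return channel_ok and type_ok
```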
  • In this embodiment, the fifth parameter associated with the first multimedia information includes at least one of the following parameters: the number of second multimedia information items allowed to be inserted, the time interval between second multimedia information items allowed to be inserted, and the time range of the first multimedia information in which insertion of second multimedia information is not allowed.
  • The device may determine, according to the duration of the first multimedia information, the number of second multimedia information items allowed to be inserted into the first multimedia information; this number is an upper limit, and in practical applications the number of inserted second multimedia information items is not higher than that upper limit.
  • the time interval between the second multimedia information that is allowed to be inserted is the time interval between the insertion points of the two second multimedia information.
  • The time range of the first multimedia information in which insertion of second multimedia information is not allowed correspondingly determines the time range in which insertion is allowed. For example, when an advertisement is inserted near the head of the video, the user usually feels annoyed; when an advertisement is inserted in the time range near the end of the video, the user who browses to the end usually switches to another video and generally does not see the inserted advertisement. Therefore, a time range of the first multimedia information in which insertion of second multimedia information is not allowed (or, equivalently, a time range in which insertion is allowed) may be set.
  • Step 305 Analyze the first multimedia information to obtain N key frame images of the first multimedia information; N is a positive integer.
  • the first multimedia information is video data compressed by a specific compression method.
  • the compressed video data is usually IPB frame data.
  • the IPB frame data includes an I frame, a P frame, and a B frame.
  • The I frame is a key frame: the picture in an I frame is completely retained, that is, only the data of the current frame is needed for decoding.
  • The P frame is a forward predictive coded frame, representing the difference between the current frame and the previous key frame (or previous frame); the image data of the previous key frame (or previous frame) is required during decoding to generate the current frame image, that is, a P frame does not carry complete image data.
  • The B frame is a bidirectional predictive coded frame, representing the difference between the current frame and both the previous and subsequent frames; decoding requires the image data of the previous frame and the subsequent frame to generate the current frame image.
  • The specific compression mode may be any compression mode (that is, compression standard) whose compressed output contains key frames (also referred to as reference frames), such as an MPEG compression standard.
  • Analyzing the first multimedia information to obtain the key frame images of the first multimedia information is: analyzing the first multimedia information to obtain the I frame images in the first multimedia information.
  • That is, the first multimedia information characterized by IPB frame data is analyzed; because the I frame picture in the IPB frame data is completely retained while the P frames and B frames are not complete images, the I frame pictures may be extracted from the IPB frame data, and N I frame pictures are extracted.
  • Step 306 Identify the N key frame images to obtain a first parameter of the N key frame images; the first parameter represents a display attribute parameter of the key frame image.
  • the first parameter may characterize a color parameter of the key frame image; the color parameter may be characterized by a red, green, and blue (RGB) color system.
  • The device identifies the RGB value of each pixel in the key frame image, determines the color system of each pixel in the RGB color system table based on its RGB value, and counts the number of pixels in the key frame image belonging to the same color system; the color system with the largest number of pixels (or a color system whose pixel count is greater than the first preset threshold) is used as the color parameter of the key frame image.
  • the first parameter may also characterize a grayscale parameter of the keyframe image, the grayscale parameter being characterization by a grayscale value.
  • the device processes the key frame image in a preset processing manner, and converts the key frame image into a grayscale image; wherein the preset processing mode may be an image binarization processing mode.
  • The device identifies the gray value of each pixel in the grayscale image (the processed key frame image) and counts the number of pixels at the same gray level (where pixels at the same gray level may be pixels having the same gray value, or pixels falling within the same gray value range); the gray level with the largest number of pixels (or a gray level whose pixel count is greater than the second preset threshold) is used as the grayscale parameter of the key frame image.
  • Step 307 Analyze the second multimedia information to obtain a first frame image of the second multimedia information.
  • The device analyzes the second multimedia information (that is, the creative material data) to obtain the first frame image in the second multimedia information; specifically, when the second multimedia information is video data, the first frame image of the video data is obtained; when the second multimedia information is GIF data, the first picture of the GIF is obtained; and when the second multimedia information is a picture, the picture itself is obtained directly.
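  • A sketch of obtaining the first frame image for each creative format, assuming OpenCV and Pillow are available (these libraries and the extension check are an implementation choice, not part of the patent):

```python
import cv2
from PIL import Image

def first_frame(creative_path):
    """Step 307 sketch: obtain the 'first frame image' of the creative material,
    whether it is a video, a GIF, or a still picture."""
    if creative_path.lower().endswith((".mp4", ".avi", ".mov")):
        cap = cv2.VideoCapture(creative_path)
        ok, frame = cap.read()                    # first decoded frame of the video
        cap.release()
        if not ok:
            raise ValueError("could not decode the first frame")
        return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    img = Image.open(creative_path)
    img.seek(0)                                   # for a GIF this is its first picture;
    return img.convert("RGB")                     # for a still picture it is the picture itself
```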
  • Step 308 Identify the first frame image to obtain a second parameter of the first frame image; the second parameter represents a display attribute parameter of the first frame image.
  • the second parameter characterizes a color parameter of the first frame image; the color parameter may be characterized by a red, green, and blue (RGB) color system.
  • The device identifies the RGB value of each pixel in the first frame image, determines the color system of each pixel in the RGB color system table based on its RGB value, and counts the number of pixels in the first frame image belonging to the same color system; the color system with the largest number of pixels (or a color system whose pixel count is greater than the first preset threshold) is used as the color parameter of the first frame image.
  • The second parameter may also characterize a grayscale parameter of the first frame image, the grayscale parameter being characterized by a gray value.
  • the device processes the first frame image according to a preset processing manner, and converts the first frame image into a grayscale image; wherein the preset processing mode may be an image binarization processing mode.
  • The device identifies the gray value of each pixel in the grayscale image (the processed first frame image) and counts the number of pixels at the same gray level (where pixels at the same gray level may be pixels having the same gray value, or pixels falling within the same gray value range); the gray level with the largest number of pixels (or a gray level whose pixel count is greater than the second preset threshold) is used as the grayscale parameter of the first frame image.
  • The first parameter obtained in step 306 and the second parameter obtained in this step are characterized by parameters of the same attribute: when the first parameter is characterized by the color parameter of the key frame image, the second parameter is likewise characterized by the color parameter of the first frame image; correspondingly, when the first parameter is characterized by the grayscale parameter of the key frame image, the second parameter is likewise characterized by the grayscale parameter of the first frame image.
  • Step 309 Determine whether the first parameter and the second parameter satisfy a predetermined condition; when the first parameters of M key frame images among the N key frame images and the second parameter satisfy the predetermined condition, determine the locations of the M key frame images in the first multimedia information, and determine P insertion points of the second multimedia information according to the locations of the M key frame images in the first multimedia information and the fifth parameter, so as to insert the second multimedia information at the P insertion points; M and P are both positive integers, and P is less than or equal to M, which is less than or equal to N.
  • the determining whether the first parameter and the second parameter satisfy a predetermined condition comprises: determining whether a color parameter or a gray parameter of the key frame image and the first frame image match; when the key When the frame image and the color parameter or the grayscale parameter of the first frame image match, it is determined that the first parameter and the second parameter satisfy a predetermined condition.
  • the first parameter of the key frame image and the second parameter of the first frame image are characterized by parameters of the same attribute.
  • When the first parameter and the second parameter are both characterized by a color parameter, the color parameter may be a color system. Determining whether the color parameters of the key frame image and the first frame image match is then: determining whether the color system of the key frame image and the color system of the first frame image are identical, and when they are identical, determining that the color parameters of the key frame image and the first frame image match; or determining whether the distance between the color system of the key frame image and the color system of the first frame image in the preset color system table is less than the second threshold, and when that distance is less than the second threshold, determining that the color parameters of the key frame image and the first frame image match (for example, the second threshold is 2, the color system number of the key frame image is 21 with corresponding RGB values 240, 156, 66, and the color system number of the first frame image is 22 with corresponding RGB values 245, 177, 109).
  • When the first parameter and the second parameter are both characterized by a gray parameter, the gray parameter may be a gray value. Determining whether the grayscale parameters of the key frame image and the first frame image match is then: determining whether the gray value of the key frame image and the gray value of the first frame image are identical, and when they are identical, determining that the grayscale parameters of the key frame image and the first frame image match; or determining whether the difference between the gray value of the key frame image and the gray value of the first frame image is less than the third threshold, and when it is less than the third threshold, determining that the grayscale parameters of the key frame image and the first frame image match (for example, the third threshold is 20, the gray value of the key frame image is 150, and the gray value of the first frame image is 165).
  • The first multimedia information may include N key frame images, N being a positive integer. When the first parameters of M key frame images (M being a positive integer less than N) match the second parameter, insertion points of the second multimedia information are selected, according to a preset rule, at the locations of P of those key frame images, where P is a positive integer less than M.
  • In this way, by identifying the key frame image of the first multimedia information and the first frame image of the second multimedia information, the location of a key frame image whose first parameter matches the second parameter of the first frame image is used as the insertion point of the second multimedia information. On the one hand, this realizes automatic, controlled insertion of the second multimedia information; on the other hand, the first parameter of the key frame image matching the second parameter of the first frame image means that the key frame image and the first frame image are the same or similar in color or gray level, so that when the second multimedia information is inserted at the location of the key frame image, the second multimedia information viewed by the user blends well with the first multimedia information in display effect, the user does not find the inserted second multimedia information abrupt while viewing the first multimedia information, and the user's visual experience is greatly improved.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are configured to execute the multimedia information processing method according to the embodiment of the invention.
  • An embodiment of the present invention further provides an apparatus.
  • FIG. 4 is a schematic structural diagram of a device according to an embodiment of the present invention; as shown in FIG. 4, the device includes: a first acquiring unit 41, a first analyzing unit 42, a second acquiring unit 43, a second analyzing unit 44, a matching unit 45, and a determining unit 46, wherein:
  • the first acquiring unit 41 is configured to acquire first multimedia information.
  • the first analyzing unit 42 is configured to analyze the first multimedia information acquired by the first acquiring unit 41, obtain a key frame image of the first multimedia information, and identify the key frame image. Obtaining a first parameter of the key frame image; the first parameter characterizing a display attribute parameter of the key frame image;
  • the second obtaining unit 43 is configured to acquire second multimedia information.
  • the second analyzing unit 44 is configured to analyze the second multimedia information acquired by the second acquiring unit 43 to obtain a first frame image of the second multimedia information, and to identify the first frame image to obtain a second parameter of the first frame image; the second parameter characterizes a display attribute parameter of the first frame image;
  • the matching unit 45 is configured to determine whether the first parameter obtained by the first analyzing unit 42 and the second parameter obtained by the second analyzing unit 44 satisfy a predetermined condition
  • the determining unit 46 is configured to determine a location of the key frame image in the first multimedia information when the matching unit 45 determines that the first parameter and the second parameter satisfy a predetermined condition, Determining an insertion point of the second multimedia information according to the location to insert the second multimedia information at the insertion point.
  • the first multimedia information is specifically a video
  • the video acquired by the first acquiring unit 41 is usually video data compressed by using a specific compression method.
  • the compressed video data is usually IPB frame data.
  • the IPB frame data includes an I frame, a P frame, and a B frame.
  • The I frame is a key frame: the picture in an I frame is completely retained, that is, only the data of the current frame is needed for decoding.
  • The P frame is a forward predictive coded frame, representing the difference between the current frame and the previous key frame (or previous frame); the image data of the previous key frame (or previous frame) is required during decoding to generate the current frame image, that is, a P frame does not carry complete image data.
  • The B frame is a bidirectional predictive coded frame, representing the difference between the current frame and both the previous and subsequent frames; decoding requires the image data of the previous frame and the subsequent frame to generate the current frame image.
  • the specific compression mode may be any compression mode (ie, compression standard) including a key frame (also referred to as a reference frame) after compression, such as an MPEG compression standard.
  • the first analyzing unit 42 analyzes the first multimedia information to obtain an I frame image in the first multimedia information.
  • the second multimedia information specifically refers to creative material data
  • the creative material data may be in a video, GIF or image format.
  • The creative material data is analyzed to obtain its first frame image: when the creative material data is video data, the first frame image of the video data is obtained; when the creative material data is GIF data, the first picture of the GIF is obtained; when the creative material data is a picture, the picture itself is obtained directly.
  • The first parameter may characterize a color parameter of the key frame image, and the second parameter may characterize a color parameter of the first frame image; the color parameter may be characterized by the red, green, and blue (RGB) color system.
  • The first analyzing unit 42 identifies the RGB value of each pixel in the key frame image, determines the color system of each pixel in the RGB color system table based on its RGB value, and counts the number of pixels in the key frame image belonging to the same color system; the color system with the largest number of pixels (or a color system whose pixel count is greater than the first preset threshold) is used as the color parameter of the key frame image.
  • the manner in which the second analysis unit 44 obtains the color parameter of the first frame image is similar to the above, and details are not described herein again.
  • the first parameter may also characterize a grayscale parameter of the key frame image
  • the second parameter may characterize a grayscale parameter of the first frame image
  • the grayscale parameter may be characterized by a grayscale value.
  • the device processes the key frame image in a preset processing manner, and converts the key frame image into a grayscale image; wherein the preset processing mode may be an image binarization processing mode.
  • The first analyzing unit 42 identifies the gray value of each pixel in the grayscale image (the processed key frame image) and counts the number of pixels at the same gray level (where pixels at the same gray level may be pixels having the same gray value, or pixels falling within the same gray value range); the gray level with the largest number of pixels (or a gray level whose pixel count is greater than the second preset threshold) is used as the grayscale parameter of the key frame image.
  • the manner in which the second analyzing unit 44 obtains the grayscale parameter of the first frame image is similar to the foregoing, and details are not described herein again.
  • the first parameter represents a color parameter or a gray parameter of the key frame image
  • the second parameter represents a color parameter or a gray parameter of the first frame image
  • The matching unit 45 is configured to determine whether the color parameters or grayscale parameters of the key frame image and the first frame image match; when the color parameters or grayscale parameters of the key frame image and the first frame image match, it is determined that the first parameter and the second parameter satisfy the predetermined condition.
  • When the first parameter and the second parameter are both characterized by a gray parameter, the gray parameter may be a gray value, and the matching unit 45 determines whether the grayscale parameters of the key frame image and the first frame image match by: determining whether the gray value of the key frame image and the gray value of the first frame image are identical, and when they are identical, determining that the grayscale parameters of the key frame image and the first frame image match; or determining whether the difference between the gray value of the key frame image and the gray value of the first frame image is less than the third threshold, and when it is less than the third threshold (for example, the third threshold is 20, the gray value of the key frame image is 150, and the gray value of the first frame image is 165), determining that the grayscale parameters of the key frame image and the first frame image match.
  • When the first parameter and the second parameter are both characterized by a color parameter, the matching unit 45 determines whether the color parameters of the key frame image and the first frame image match in the manner described above.
  • The first multimedia information may include N key frame images, N being a positive integer. When the first parameters of M key frame images (M being a positive integer less than N) match the second parameter, insertion points of the second multimedia information are selected, according to a preset rule, at the locations of P of those key frame images, where P is a positive integer less than M.
  • the first acquiring unit 41 is further configured to acquire a third parameter; the third parameter represents a tag information parameter associated with the first multimedia information;
  • the second acquiring unit 43 is further configured to acquire a fourth parameter; the fourth parameter represents a tag information parameter associated with the second multimedia information;
  • the matching unit 45 is further configured to determine, before determining whether the first parameter and the second parameter satisfy the predetermined condition, whether the third parameter of the first multimedia information and the fourth parameter of the second multimedia information match; and, after determining that the third parameter of the first multimedia information and the fourth parameter of the second multimedia information match, to further determine whether the first parameter and the second parameter satisfy the predetermined condition.
  • The third parameter represents a tag information parameter associated with the first multimedia information; this tag information parameter includes at least the advertisement types that the first multimedia information allows to be inserted and/or the channel type in which the first multimedia information is located.
  • The matching unit 45 matches the third parameter of the first multimedia information (i.e., the advertisement types that the first multimedia information allows to be inserted and/or the channel type in which it is located) against the fourth parameter of the second multimedia information (i.e., the channel types in which the second multimedia information may be served and/or the type of the second multimedia information). Matching the two tag parameters filters a large pool of advertising materials and enables targeted delivery of advertisements. (A sketch of this prefilter appears after this list.)
  • The first acquiring unit 41 is further configured to acquire a fifth parameter associated with the first multimedia information, the fifth parameter including at least one of the following: the number of pieces of second multimedia information allowed to be inserted, the time interval between the pieces of second multimedia information allowed to be inserted, and the time range of the first multimedia information in which insertion of second multimedia information is not allowed.
  • The determining unit 46 is configured to determine the insertion point of the second multimedia information in the first multimedia information according to the position of the key frame image in the first multimedia information and the fifth parameter acquired by the first acquiring unit 41.
  • Specifically, based on at least one of the foregoing fifth-parameter items, the determining unit 46 determines the number of pieces of second multimedia information allowed to be inserted and/or the time range of the first multimedia information in which second multimedia information may be inserted, and then selects the insertion points of the second multimedia information at the positions of P key frame images, where P is not greater than the number of pieces of second multimedia information allowed to be inserted and the positions of the P key frame images fall within the time range of the first multimedia information in which insertion is allowed. (A constrained-selection sketch appears after this list.)
  • The functions of the processing units in the device of the embodiments of the present invention can be understood with reference to the foregoing description of the multimedia information processing method; each processing unit may be implemented by an analog circuit that realizes the functions described in the embodiments, or by software that performs those functions running on an intelligent terminal.
  • In practical applications, the first analyzing unit 42, the second analyzing unit 44, the matching unit 45 and the determining unit 46 in the device may be implemented by a central processing unit (CPU), a digital signal processor (DSP) or a field-programmable gate array (FPGA) in the device; the first acquiring unit 41 and the second acquiring unit 43 may be implemented by a receiver or a receiving antenna in the device.
  • FIG. 5 is a schematic diagram of the hardware structure of a device according to an embodiment of the present invention; an example of the device as a hardware entity is shown in FIG. 5. The terminal includes a processor 61, a storage medium 62 and at least one external communication interface 63, all of which are connected by a bus 64.
  • The integrated modules described in the embodiments of the present invention may also be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • Moreover, the present application may take the form of a computer program product embodied on one or more computer-usable storage media containing computer-usable program code, the storage media including but not limited to a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), magnetic disk storage, CD-ROM, optical storage and the like.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • In summary, by identifying the key frame images of the first multimedia information and the first frame image of the second multimedia information, and by taking the positions of the key frame images whose first parameter matches the second parameter of the first frame image as the insertion points of the second multimedia information, the embodiments on the one hand realize automatically controlled insertion of the second multimedia information, which greatly reduces labor cost.
  • On the other hand, a match between the first parameter of a key frame image and the second parameter of the first frame image means that the key frame image and the first frame image have the same or similar color or gray level, so that when the second multimedia information is inserted at the position of that key frame image, the second multimedia information seen by the user blends into the first multimedia information in display effect and the suddenly inserted second multimedia information does not feel abrupt, which greatly improves the user's visual experience. (The pipeline sketch at the end of this list ties these steps together.)
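The bullets above describe how a frame's display attribute parameter is derived: map each pixel to a color family (or gray level/range) and keep the most populous one. Below is a minimal Python sketch of that idea; the color-family table, the nearest-representative lookup, the bin width and the function names are illustrative assumptions, not values or APIs defined by the patent.

```python
# Sketch only: the colour-family table, bin width and function names below are
# illustrative assumptions, not values or APIs defined by the patent.
from collections import Counter

import numpy as np
from PIL import Image

# Hypothetical RGB colour-family table: (family index, representative RGB).
COLOR_FAMILIES = [
    (0, (0, 0, 0)), (1, (255, 255, 255)), (2, (255, 0, 0)), (3, (0, 255, 0)),
    (4, (0, 0, 255)), (5, (255, 255, 0)), (6, (0, 255, 255)), (7, (255, 0, 255)),
]


def dominant_color_family(img: Image.Image, step: int = 4) -> int:
    """Return the colour family owning the most pixels (the frame's colour parameter).

    Every `step`-th pixel is mapped to its nearest family representative; the
    subsampling only keeps the sketch cheap on full-resolution frames.
    """
    rgb = np.asarray(img.convert("RGB"), dtype=np.int32)[::step, ::step].reshape(-1, 3)
    reps = np.array([rep for _, rep in COLOR_FAMILIES], dtype=np.int32)
    nearest = ((rgb[:, None, :] - reps[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    family_row, _ = Counter(nearest.tolist()).most_common(1)[0]
    return COLOR_FAMILIES[family_row][0]


def dominant_gray_level(img: Image.Image, bin_width: int = 16) -> int:
    """Return the centre of the most populated grey-value range (the grey parameter)."""
    gray = np.asarray(img.convert("L"))
    hist, edges = np.histogram(gray, bins=range(0, 257, bin_width))
    busiest = int(hist.argmax())
    return int((edges[busiest] + edges[busiest + 1]) // 2)
```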
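Next, a sketch of the check the matching unit 45 performs on the two parameters. The thresholds mirror the numeric examples in the text (a color-family distance below 2, a gray-value difference below 20) and should be read as tunable parameters rather than normative values.

```python
# Sketch only: thresholds follow the numeric examples in the text and are tunable.

def color_params_match(family_a: int, family_b: int, family_threshold: int = 2) -> bool:
    """Colour parameters match if the families are identical or their indices in
    the preset colour-family table are closer than the second threshold."""
    return family_a == family_b or abs(family_a - family_b) < family_threshold


def gray_params_match(gray_a: int, gray_b: int, gray_threshold: int = 20) -> bool:
    """Grey parameters match if the grey values are identical or differ by less
    than the third threshold (e.g. 150 vs. 165 with a threshold of 20)."""
    return gray_a == gray_b or abs(gray_a - gray_b) < gray_threshold
```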
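The tag prefilter (third parameter vs. fourth parameter) runs before any image analysis so that only ads of an allowed type, targeted at the right channel, are considered. The dataclass and field names below are assumptions used for illustration; the patent only requires that the two tag information parameters match.

```python
# Sketch only: names are illustrative; the patent only requires the two tag
# parameters to match before the image-based matching is attempted.
from dataclasses import dataclass
from typing import List, Set


@dataclass
class VideoTags:                  # third parameter
    allowed_ad_types: Set[str]    # advertisement types the video allows to be inserted
    channel_type: str             # channel type the video is published in


@dataclass
class AdTags:                     # fourth parameter
    allowed_channels: Set[str]    # channel types the ad may be served in
    ad_type: str                  # type of the ad material


def tags_match(video: VideoTags, ad: AdTags) -> bool:
    """The ad passes only if its type is allowed by the video and the video's
    channel is one the ad may be served in."""
    return ad.ad_type in video.allowed_ad_types and video.channel_type in ad.allowed_channels


def prefilter_ads(video: VideoTags, ads: List[AdTags]) -> List[AdTags]:
    """Filter a large pool of ad materials down to the candidates worth analysing."""
    return [ad for ad in ads if tags_match(video, ad)]
```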
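A sketch of how the determining unit 46 might turn the matching key-frame positions into actual insertion points while honoring the fifth parameter (maximum number of insertions, minimum spacing, forbidden time ranges). The greedy walk below is one reasonable reading of the "preset rule"; the patent does not prescribe a specific selection strategy, and every name here is illustrative.

```python
# Sketch only: one possible "preset rule"; the patent leaves the rule open.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class InsertionPolicy:                            # fifth parameter
    max_insertions: int                           # number of ads allowed to be inserted
    min_gap_seconds: float                        # required spacing between insertions
    forbidden_ranges: List[Tuple[float, float]]   # e.g. opening and closing credits


def select_insertion_points(matching_positions: List[float],
                            policy: InsertionPolicy) -> List[float]:
    """Pick at most P insertion points from the M matching key-frame timestamps,
    skipping forbidden ranges and positions too close to the previous pick."""
    def allowed(t: float) -> bool:
        return not any(start <= t <= end for start, end in policy.forbidden_ranges)

    chosen: List[float] = []
    for t in sorted(matching_positions):
        if len(chosen) >= policy.max_insertions:
            break
        if not allowed(t):
            continue
        if chosen and t - chosen[-1] < policy.min_gap_seconds:
            continue
        chosen.append(t)
    return chosen
```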
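Finally, a sketch that wires the helpers from the sketches above into the overall flow the description outlines: tag prefilter, per-key-frame parameter matching, then constrained selection of insertion points. The `key_frames` argument is assumed to be a list of (timestamp_in_seconds, PIL.Image) pairs produced by whatever key-frame (I-frame) extractor the platform uses; it is not an API defined by the patent.

```python
# Sketch only: reuses the helpers defined in the previous sketches.

def plan_ad_insertions(key_frames, ad_first_frame, video_tags, ad_tags, policy):
    # 1. Targeted delivery: discard the ad immediately if the tag parameters
    #    (third and fourth parameters) do not match.
    if not tags_match(video_tags, ad_tags):
        return []

    # 2. Compute the ad's second parameter once (grey-parameter variant shown).
    ad_gray = dominant_gray_level(ad_first_frame)

    # 3. Keep the positions of key frames whose first parameter matches it.
    matching_positions = [
        t for t, frame in key_frames
        if gray_params_match(dominant_gray_level(frame), ad_gray)
    ]

    # 4. Apply the fifth-parameter constraints to choose the final insertion points.
    return select_insertion_points(matching_positions, policy)
```

Swapping `dominant_gray_level` / `gray_params_match` for `dominant_color_family` / `color_params_match` gives the color-parameter variant described above.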

Abstract

Embodiments of the present invention disclose a multimedia information processing method, a device and a computer storage medium. The method includes: acquiring and analyzing first multimedia information to obtain a key frame image of the first multimedia information; identifying the key frame image to obtain a first parameter of the key frame image; acquiring and analyzing second multimedia information to obtain a first frame image of the second multimedia information; identifying the first frame image to obtain a second parameter of the first frame image; and determining whether the first parameter and the second parameter satisfy a predetermined condition, and when they do, determining the position of the key frame image in the first multimedia information and determining, according to that position, an insertion point of the second multimedia information so as to insert the second multimedia information at the insertion point.

Description

一种多媒体信息处理方法、设备及计算机存储介质 技术领域
本发明设计信息处理技术,具体设计一种多媒体信息处理方法、设备及计算机存储介质。
背景技术
本申请发明人在实现本申请实施例技术方案的过程中,至少发现相关技术中存在如下技术问题:
以现有的多媒体信息播放场景为例,在播放第一多媒体信息时,通常会插入至少一段第二多媒体信息。其中,第一多媒体信息,比如电影或电视剧视频是用户想要观看的内容;而第二多媒体信息,比如广告通常都是设备推送的内容,并不是用户想要观看的内容。
现有技术存在的问题是:第一多媒体信息中插入第二多媒体信息的插入点完全由人工进行选择,这样会大大增加人力成本。然而,对于上述问题,相关技术中并未存在有效的解决方案。
发明内容
本发明实施例期望提供一种多媒体信息处理方法、设备及计算机存储介质,能够实现在第一多媒体信息中自动插入第二多媒体信息,即能够实现广告的自动插入。
为达到上述目的,本发明实施例的技术方案是这样实现的:
本发明实施例提供了一种多媒体信息处理方法,所述方法包括:
获取并分析第一多媒体信息,获得所述第一多媒体信息的关键帧图像;
识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参 数表征所述关键帧图像的显示属性参数;
获取并分析第二多媒体信息,获得所述第二多媒体信息的第一帧图像;
识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数;
判断所述第一参数和所述第二参数是否满足预定条件,当所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。
本发明实施例还提供了一种设备,所述设备包括:第一获取单元、第一分析单元、第二获取单元、第二分析单元、匹配单元和确定单元;其中,
所述第一获取单元,配置为获取第一多媒体信息;
所述第一分析单元,配置为分析所述第一获取单元获取的所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像;识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数;
所述第二获取单元,配置为获取第二多媒体信息;
所述第二分析单元,配置为分析所述第二获取单元获取的所述第二多媒体信息,获得所述第二多媒体信息的第一帧图像;识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数;
所述匹配单元,配置为判断所述第一分析单元获得的所述第一参数和所述第二分析单元获得的所述第二参数是否满足预定条件;
所述确定单元,配置为当所述匹配单元确定所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插 入所述第二多媒体信息。
本发明实施例还提供了一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令配置为执行本发明实施例所述的多媒体信息处理方法。
本发明实施例提供了一种多媒体信息处理方法、设备及计算机存储介质,通过获取并分析第一多媒体信息(所述第一多媒体信息即视频),获得所述第一多媒体信息的关键帧图像;识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数;获取并分析第二多媒体信息(所述第二多媒体信息即广告素材),获得所述第二多媒体信息的第一帧图像;识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数;判断所述第一参数和所述第二参数是否满足预定条件,当所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。如此,采用本发明实施例的技术方案,通过对第一多媒体信息的关键帧图像的识别,以及对第二多媒体信息的第一帧图像的识别,将所述关键帧图像的第一参数与所述第一帧图像的第二参数匹配的关键帧图像所在位置作为所述第二多媒体信息的插入点,实现了第二多媒体信息的自动控制插入,大大减少了人力成本。
附图说明
图1为本发明实施例一的多媒体信息处理方法的流程示意图;
图2为本发明实施例的多媒体信息处理方法的应用场景示意图;
图3为本发明实施例二的多媒体信息处理方法的流程示意图;
图4为本发明实施例的设备的组成结构示意图;
图5为本发明实施例的设备的硬件结构组成示意图。
具体实施方式
下面结合附图及具体实施例对本发明作进一步详细的说明。
实施例一
本发明实施例提供了一种多媒体信息处理方法,所述多媒体信息处理方法应用于设备中。图1为本发明实施例一的多媒体信息处理方法的流程示意图;如图1所示,所述多媒体信息处理方法包括:
步骤101:获取并分析第一多媒体信息,获得所述第一多媒体信息的关键帧图像。
本发明实施例所述的多媒体信息处理方法应用于设备中,所述设备具体可以是多媒体平台所属服务器或服务器集群,所述多媒体平台如腾讯视频等等。则本步骤中,所述获取第一多媒体信息,分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像,包括:设备获取第一多媒体信息,分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像。
这里,所述第一多媒体信息具体指视频,且所述设备获取到的视频通常为采用特定压缩方式压缩后的视频数据。本实施例中,所述压缩后的视频数据通常为IPB帧数据。具体的,IPB帧数据包括I帧、P帧和B帧。其中,I帧为关键帧,可以理解为处于I帧的画面被完整保留,即解码时只需要本帧数据便可完成解码。P帧为向前预测编码帧,表示当前帧与上一个关键帧(或上一帧)的区别,解码时需要上一个关键帧(或上一帧)的图像数据才能生成本帧图像,即P帧没有完整图像数据。B帧为双向预测编码帧,可以理解为B帧是当前帧与前一帧和后一帧的区别,解码是需要前一帧和后一帧的图像数据才能生成本帧图像。则本实施例中,所述特定压缩方式可以是压缩后包含有关键帧(也可以理解为参考帧)的任一压缩方式(即压缩标准),如运动图像专家组(MPEG,Moving Picture Experts Group)压缩标准等等。
基于此,本步骤中,所述分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像,为:分析所述第一多媒体信息,获得所述第一多媒体信息中的I帧图像。具体的,分析所述第一多媒体信息,即分析基于IPB帧数据表征的第一多媒体信息,由于所述IPB帧数据中的I帧画面被完整保留,而P帧和B帧没有完整图像,则可从所述IPB帧数据中抽取I帧画面,且抽取到的I帧画面为至少一帧。
步骤102:识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数。
这里,所述第一参数可以表征所述关键帧图像的色彩参数;所述色彩参数可通过红绿蓝(RGB)色系表征。具体的,所述设备识别所述关键帧图像中每个像素点的RGB值,基于每个像素点的RGB值确定相应像素点在RGB色系表中的色系,统计所述关键帧图像中处于同一色系的像素点的数量,将处于同一色系的数量最多的像素点对应的色系(或者数量大于第一预设阈值的像素点对应的色系)作为所述关键帧图像的色彩参数。
所述第一参数还可以表征所述关键帧图像的灰度参数,所述灰度参数可通过灰度值表征。具体的,所述设备按预设处理方式处理所述关键帧图像,将所述关键帧图像转换为灰度图像;其中,所述预设处理方式可以为图像二值化处理方式。所述设备识别所述灰度图像(处理后的关键帧图像)中每个像素点的灰度值,统计处于同一灰度的像素点的数量(其中,所述处于同一灰度的像素点可以是处于同一灰度值的像素点,也可以是处于同一灰度值范围的像素点),将处于同一灰度的数量最多的像素点(或者数量大于第二预设阈值的像素点对应的灰度)对应的灰度作为所述关键帧图像的灰度参数。
步骤103:获取并分析第二多媒体信息,获得所述第二多媒体信息的第一帧图像。
这里,所述第二多媒体信息具体指广告素材数据,所述广告素材数据可采用视频、图像互换格式(GIF,Graphics Interchange Format)或图片格式。则所述设备获取到广告素材数据后,分析所述广告素材数据,获得所述广告素材数据中的第一帧图像,具体的,当所述广告素材数据为视频数据时,获得所述视频数据的第一帧图像;当所述广告素材数据为GIF数据时,获得所述GIF的第一张图片;当所述广告素材数据为一张图片时,则直接获得所述图片。
步骤104:识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数。
这里,所述第二参数表征所述第一帧图像的色彩参数;所述色彩参数可通过红绿蓝(RGB)色系表征。具体的,所述设备识别所述第一帧图像中每个像素点的RGB值,基于每个像素点的RGB值确定相应像素点在RGB色系表中的色系,统计所述第一帧图像中处于同一色系的像素点的数量,将处于同一色系的数量最多的像素点对应的色系(或者数量大于第一预设阈值的像素点对应的色系)作为所述第一帧图像的色彩参数。
所述第一参数还可以表征所述第一帧图像的灰度参数,所述灰度参数可通过灰度值表征。具体的,所述设备按预设处理方式处理所述第一帧图像,将所述第一帧图像转换为灰度图像;其中,所述预设处理方式可以为图像二值化处理方式。所述设备识别所述灰度图像(处理后的第一帧图像)中每个像素点的灰度值,统计处于同一灰度的像素点的数量(其中,所述处于同一灰度的像素点可以是处于同一灰度值的像素点,也可以是处于同一灰度值范围的像素点),将处于同一灰度的数量最多的像素点(或者数量大于第二预设阈值的像素点对应的灰度)对应的灰度作为所述第一帧图像的灰度参数。
本实施例中,步骤102中获得的第一参数和本步骤中获得的第二参数 采用同属性的参数表征,如所述第一参数通过所述关键帧图像的色彩参数表征时,所述第二参数同样采用所述第一帧图像的色彩参数表征;相应的,当所述第一参数通过所述关键帧图像的灰度参数表征时,所述第二参数同样采用所述第一帧图像的灰度参数表征。
步骤105:判断所述第一参数和所述第二参数是否满足预定条件,当所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。
这里,所述判断所述第一参数和所述第二参数是否满足预定条件,包括:判断所述关键帧图像和所述第一帧图像的色彩参数或灰度参数是否匹配;当所述关键帧图像和所述第一帧图像的色彩参数或灰度参数匹配时,确定所述第一参数和所述第二参数满足预定条件。
具体的,所述关键帧图像的第一参数和所述第一帧图像的第二参数采用同属性的参数表征。作为一种实施方式,当所述第一参数和所述第二参数均通过色彩参数表征时,所述色彩参数可以为色系,则判断所述关键帧图像和所述第一帧图像的色彩参数是否匹配,包括:判断所述关键帧图像的色系和所述第一帧图像的色系是否匹配一致,当所述关键帧图像的色系和所述第一帧图像的色系匹配一致时,确定所述关键帧图像和所述第一帧图像的色彩参数匹配;或者,判断所述关键帧图像的色系与所述第一帧图像的色系在预设色系表中的距离是否小于第二阈值,当所述关键帧图像的色系与所述第一帧图像的色系在预设色系表中的距离小于第二阈值时(如所述第二阈值为2,所述关键帧图像的色系序号为21,对应像素点的RGB值为:240、156、66;所述第一关键帧图像的色系序号为22,对应像素点的RGB值为:245、177、109),确定所述关键帧图像和所述第一帧图像的色彩参数匹配。
作为另一种实施方式,当所述第一参数和所述第二参数均通过灰度参数表征时,所述灰度参数可以为灰度值,则判断所述关键帧图像和所述第一帧图像的灰度参数是否匹配,包括:判断所述关键帧图像的色系和所述第一帧图像的灰度值是否匹配一致,当所述关键帧图像的灰度值和所述第一帧图像的灰度值匹配一致时,确定所述关键帧图像和所述第一帧图像的灰度参数匹配;或者,判断所述关键帧图像的灰度值与所述第一帧图像的灰度值是否小于第三阈值,当所述关键帧图像的灰度值与所述第一帧图像的灰度值小于第三阈值时(如所述第三阈值为20,所述关键帧图像的灰度值为150,所述第一关键帧图像的灰度值为165),确定所述关键帧图像和所述第一帧图像的灰度参数匹配。
其中,当所述关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,可以理解为所述关键帧图像和所述第一帧图像的色系或灰度相同或相近,则确定所述关键帧图像所在位置为所述第二多媒体信息的插入点。在实际应用中,所述第一多媒体信息中的关键帧图像可以包含N个,N为正整数;则确定其中的M(M为正整数且M小于N)个关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,依据预设规则选择P个关键帧图像所在位置所述第二多媒体信息的插入点;其中,P为正整数且P小于M。
作为一种实施方式,所述获取第一多媒体信息后,所述方法还包括:获取所述第一多媒体信息相关联的第五参数,所述第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围;
相应的,所述依据所述位置确定所述第二多媒体信息的插入点,包括:依据所述关键帧图像在所述第一多媒体信息中的位置,以及所述第五参数确定所述第二多媒体信息在所述第一多媒体信息中的插入点。
具体的,确定M(M为正整数且M小于N)个关键帧图像的第一参数和所述第一帧图像的第二参数匹配后,依据上述第五参数中的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围,确定允许插入的第二多媒体信息的数量和/或所述第一多媒体信息中允许插入的第二多媒体信息的时间范围,基于所述允许插入的第二多媒体信息的数量和/或所述第一多媒体信息中允许插入的第二多媒体信息的时间范围选择P个关键帧图像所在位置所述第二多媒体信息的插入点;其中,P值不大于所述允许插入的第二多媒体信息的数量,且所述P个关键帧图像所在位置在所述第一多媒体信息中允许插入的第二多媒体信息的时间范围中。
作为一种具体实施方式,所述设备在获取每个第二多媒体信息(即广告素材数据)时,识别每个第二多媒体信息的第一图像,并获得每一个第二多媒体信息的第一帧图像的第二参数,所述第二参数可通过色彩参数(如色系)或灰度参数(灰度值)表征,将每个第二多媒体信息的第二参数按相同色彩参数类别或相同灰度类别参数存储在数据库中。当获取到第一多媒体信息后,识别所述第一多媒体信息的关键帧图像(即I帧图像),并获得所述关键帧图像的第一参数,在所述数据库中选择与所述第一参数相匹配的类别集合,在所述类别集合中选择第二多媒体信息插入在所述关键帧图像在所述第一多媒体信息中的位置(其中,所述关键帧图像在所述第一多媒体信息中的位置可基于预先获取的第五参数确定)。
采用本发明实施例的技术方案,通过对第一多媒体信息的关键帧图像的识别,以及对第二多媒体信息的第一帧图像的识别,将所述关键帧图像的第一参数与所述第一帧图像的第二参数匹配的关键帧图像所在位置作为所述第二多媒体信息的插入点,一方面实现了第二多媒体信息的自动控制插入,大大减少了人力成本;另一方面,所述关键帧图像的第一参数与所 述第一帧图像的第二参数匹配意味着所述关键帧图像与所述第一帧图像的色彩或灰度相同或近似,使得当在所述关键帧图像所在位置插入所述第二多媒体信息时,用户所观看到的所述第二多媒体信息能够与所述第一多媒体信息在显示效果上达到完美融合,使用户在观看第一多媒体信息时对突然插入的第二多媒体信息不会感到突兀,大大提升了用户的视觉体验。
图2为本发明实施例的多媒体信息处理方法的应用场景示意图;如图2所示,本发明实施例所述的多媒体信息处理方法可应用于以下场景:用户1通过第一终端11上传第一多媒体信息(所述第一多媒体信息即视频),并在上传过程中设定第三参数(所述第三参数至少包括:所述第一多媒体信息允许插入的广告类型和/或所述第一多媒体信息所在的频道类型);服务器3获取到所述第一多媒体信息,并获取所述第三参数,将所述第三参数作为所述第一多媒体信息相关联的标签信息参数存储。用户2通过第二终端21上传第二多媒体信息(所述第二多媒体信息即广告素材),并在上传过程中设定第四参数(所述第四参数至少包括:所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型);所述服务器3获取到所述第二多媒体信息,并获取所述第四参数,将所述第四参数作为所述第二多媒体信息相关联的标签信息参数存储。所述服务器3可基于实施例一中所述的多媒体信息处理方法确定所述第一多媒体信息中的插入点,所述插入点为所述第二多媒体信息插入所述第一多媒体信息中的插入点。
本发明实施例还提供了一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令配置为执行本发明实施例所述的多媒体信息处理方法。
实施例二
基于图2所示的应用场景,本发明实施例还提供了一种多媒体信息处理方法。图3为本发明实施例二的多媒体信息处理方法的流程示意图;如 图3所示,所述多媒体信息处理方法包括:
步骤301:获取第一多媒体信息和第三参数;所述第三参数表征所述第一多媒体信息相关联的标签信息参数。
本发明实施例所述的多媒体信息处理方法应用于设备中,所述设备具体可以是多媒体平台所属服务器或服务器集群,所述多媒体平台如腾讯视频等等。则本步骤中,所述获取第一多媒体信息,分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像,包括:设备获取第一多媒体信息,分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像。
这里,所述第一多媒体信息具体指视频,且所述设备获取到的视频通常为采用特定压缩方式压缩后的视频数据。所述第三参数表征所述第一多媒体信息相关联的标签信息参数,所述标签信息参数至少包括:所述第一多媒体信息允许插入的广告类型和/或所述第一多媒体信息所在的频道类型。具体的,基于如2所示的应用场景,用户1通过第一终端上传第一多媒体信息至所述设备,用户1通常基于所述第一多媒体信息的类型选择频道类型,例如,当所述第一多媒体信息为电影时,通常选择电影频道进行上传;当所述第一多媒体信息为电视剧时,通常选择电视就频道进行上传;则本实施例的所述第一多媒体信息所在的频道类型为获取到的用户设定的与所述第一多媒体信息的类型相关联的频道类型。和/或,在用户1通过第一终端上传所述第一多媒体信息时,通常还需要设定允许插入的广告类型,所述广告类型如生活洗护类、药品保健品类、食品类等等;所述允许插入的广告类型可预先设定n个类型以供用户1选择;其中,n为正整数。则当所述用户1选定所述允许插入的广告类型后,所述设备依据用户1在所述第一终端上的操作,获得所述允许插入的广告类型。
步骤302:获取第二多媒体信息和第四参数;所述第四参数表征所述第二多媒体信息相关联的标签信息参数。
这里,所述第二多媒体信息具体指广告素材数据,所述广告素材数据可采用视频、GIF或图片格式。具体的,基于如图2所示的应用场景,用户2通过第二终端上传第二多媒体信息至所述设备,即上传广告素材至所述设备。在用户2通过第二终端上传所述第二多媒体信息时,还需要设定所述第二多媒体信息相关联的标签信息参数,所述标签信息参数包括所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型。其中,所述第二多媒体信息允许投放的频道类型例如电影频道、电视剧频道、综艺频道等等,以所述设备所承载的多媒体平台中包括的频道数量为准。所述第二多媒体信息的类型可预先设定n个类型以供用户2选择,如生活洗护类、药品保健品类、食品类等等。则当用户2选定所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型后,所述设备依据用户2在所述第二终端上的操作,获得所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型。
步骤303:判断所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数是否匹配一致。
步骤304:确定所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数匹配一致后,获取所述第一多媒体信息相关联的第五参数,所述第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围。
步骤303和步骤304应用的场景通常包括海量的第一多媒体信息和海量的第二多媒体信息,即海量的视频和海量的广告素材。这里,基于所述第一多媒体信息的第三参数(即所述第一多媒体信息允许插入的广告类型和/或所述第一多媒体信息所在的频道类型)和所述第二多媒体信息的第四参数(即所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信 息的类型),匹配所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数,即所述设备选择与所述第一多媒体信息的第三参数匹配一致的第二多媒体信息作为待插入的第二多媒体信息,以对海量的广告素材进行筛选,能够实现对广告的定向投放。
这里,所述第一多媒体信息相关联的第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围。具体的,所述设备可基于所述第一多媒体信息的时长确定所述第一多媒体信息中允许插入的第二多媒体信息的数量,所述允许插入的第二多媒体信息的数量为最高上限,在实际应用中,所插入的第二多媒体信息的数量不高于所述最高上限。所述允许插入的第二多媒体信息之间的时间间隔即两个第二多媒体信息插入点之间的时间间隔。所述第一多媒体信息中不允许插入第二多媒体信息的时间范围也即所述第一多媒体信息中允许插入第二多媒体信息的时间范围,例如当视频的片头所在的时间范围插入广告时,用户通常感到反感;而在视频的片尾所在的时间范围插入广告时,用户浏览到视频的片尾通常会切换至另一视频,通常也看不到插入的广告;因此,本实施例中可设置所述第一多媒体信息中不允许插入第二多媒体信息的时间范围(或所述第一多媒体信息中允许插入第二多媒体信息的时间范围)。
步骤305:分析所述第一多媒体信息,获得所述第一多媒体信息的N个关键帧图像;N为正整数。
这里,所述第一多媒体信息为采用特定压缩方式压缩后的视频数据。本实施例中,所述压缩后的视频数据通常为IPB帧数据。具体的,IPB帧数据包括I帧、P帧和B帧。其中,I帧为关键帧,可以理解为处于I帧的画面被完整保留,即解码时只需要本帧数据便可完成解码。P帧为向前预测编 码帧,表示当前帧与上一个关键帧(或上一帧)的区别,解码时需要上一个关键帧(或上一帧)的图像数据才能生成本帧图像,即P帧没有完整图像数据。B帧为双向预测编码帧,可以理解为B帧是当前帧与前一帧和后一帧的区别,解码是需要前一帧和后一帧的图像数据才能生成本帧图像。则本实施例中,所述特定压缩方式可以是压缩后包含有关键帧(也可以理解为参考帧)的任一压缩方式(即压缩标准),如MPEG压缩标准等等。
基于此,本步骤中,所述分析所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像,为:分析所述第一多媒体信息,获得所述第一多媒体信息中的I帧图像。具体的,分析所述第一多媒体信息,即分析基于IPB帧数据表征的第一多媒体信息,由于所述IPB帧数据中的I帧画面被完整保留,而P帧和B帧没有完整图像,则可从所述IPB帧数据中抽取I帧画面,且抽取到的I帧画面为N个。
步骤306:识别所述N个关键帧图像,获得所述N个关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数。
这里,所述第一参数可以表征所述关键帧图像的色彩参数;所述色彩参数可通过红绿蓝(RGB)色系表征。具体的,所述设备识别所述关键帧图像中每个像素点的RGB值,基于每个像素点的RGB值确定相应像素点在RGB色系表中的色系,统计所述关键帧图像中处于同一色系的像素点的数量,将处于同一色系的数量最多的像素点对应的色系(或者数量大于第一预设阈值的像素点对应的色系)作为所述关键帧图像的色彩参数。
所述第一参数还可以表征所述关键帧图像的灰度参数,所述灰度参数可通过灰度值表征。具体的,所述设备按预设处理方式处理所述关键帧图像,将所述关键帧图像转换为灰度图像;其中,所述预设处理方式可以为图像二值化处理方式。所述设备识别所述灰度图像(处理后的关键帧图像)中每个像素点的灰度值,统计处于同一灰度的像素点的数量(其中,所述 处于同一灰度的像素点可以是处于同一灰度值的像素点,也可以是处于同一灰度值范围的像素点),将处于同一灰度的数量最多的像素点(或者数量大于第二预设阈值的像素点对应的灰度)对应的灰度作为所述关键帧图像的灰度参数。
步骤307:分析所述第二多媒体信息,获得所述第二多媒体信息的第一帧图像。
这里,所述设备分析所述第二多媒体信息(即广告素材数据),获得所述第二多媒体信息中的第一帧图像,具体的,当所述第二多媒体信息为视频数据时,获得所述视频数据的第一帧图像;当所述第二多媒体信息为GIF数据时,获得所述GIF的第一张图片;当所述第二多媒体信息为一张图片时,则直接获得所述图片。
步骤308:识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数。
这里,所述第二参数表征所述第一帧图像的色彩参数;所述色彩参数可通过红绿蓝(RGB)色系表征。具体的,所述设备识别所述第一帧图像中每个像素点的RGB值,基于每个像素点的RGB值确定相应像素点在RGB色系表中的色系,统计所述第一帧图像中处于同一色系的像素点的数量,将处于同一色系的数量最多的像素点对应的色系(或者数量大于第一预设阈值的像素点对应的色系)作为所述第一帧图像的色彩参数。
所述第一参数还可以表征所述第一帧图像的灰度参数,所述灰度参数可通过灰度值表征。具体的,所述设备按预设处理方式处理所述第一帧图像,将所述第一帧图像转换为灰度图像;其中,所述预设处理方式可以为图像二值化处理方式。所述设备识别所述灰度图像(处理后的第一帧图像)中每个像素点的灰度值,统计处于同一灰度的像素点的数量(其中,所述处于同一灰度的像素点可以是处于同一灰度值的像素点,也可以是处于同 一灰度值范围的像素点),将处于同一灰度的数量最多的像素点(或者数量大于第二预设阈值的像素点对应的灰度)对应的灰度作为所述第一帧图像的灰度参数。
本实施例中,步骤306中获得的第一参数和本步骤中获得的第二参数采用同属性的参数表征,如所述第一参数通过所述关键帧图像的色彩参数表征时,所述第二参数同样采用所述第一帧图像的色彩参数表征;相应的,当所述第一参数通过所述关键帧图像的灰度参数表征时,所述第二参数同样采用所述第一帧图像的灰度参数表征。
步骤309:判断所述第一参数和所述第二参数是否满足预定条件,当所述N个关键帧图像中M个关键帧图像的第一参数和所述第二参数满足预定条件时,确定所述M个关键帧图像在所述第一多媒体信息中的位置,依据所述M个关键帧图像在所述第一多媒体信息中的位置以及所述第五参数确定所述第二多媒体信息的P个插入点,以在所述P个插入点插入所述第二多媒体信息;其中,M和P均为正整数,且P小于等于M小于等于N。
这里,所述判断所述第一参数和所述第二参数是否满足预定条件,包括:判断所述关键帧图像和所述第一帧图像的色彩参数或灰度参数是否匹配;当所述关键帧图像和所述第一帧图像的色彩参数或灰度参数匹配时,确定所述第一参数和所述第二参数满足预定条件。
具体的,所述关键帧图像的第一参数和所述第一帧图像的第二参数采用同属性的参数表征。作为一种实施方式,当所述第一参数和所述第二参数均通过色彩参数表征时,所述色彩参数可以为色系,则判断所述关键帧图像和所述第一帧图像的色彩参数是否匹配,包括:判断所述关键帧图像的色系和所述第一帧图像的色系是否匹配一致,当所述关键帧图像的色系和所述第一帧图像的色系匹配一致时,确定所述关键帧图像和所述第一帧图像的色彩参数匹配;或者,判断所述关键帧图像的色系与所述第一帧图 像的色系在预设色系表中的距离是否小于第二阈值,当所述关键帧图像的色系与所述第一帧图像的色系在预设色系表中的距离小于第二阈值时(如所述第二阈值为2,所述关键帧图像的色系序号为21,对应像素点的RGB值为:240、156、66;所述第一关键帧图像的色系序号为22,对应像素点的RGB值为:245、177、109),确定所述关键帧图像和所述第一帧图像的色彩参数匹配。
作为另一种实施方式,当所述第一参数和所述第二参数均通过灰度参数表征时,所述灰度参数可以为灰度值,则判断所述关键帧图像和所述第一帧图像的灰度参数是否匹配,包括:判断所述关键帧图像的色系和所述第一帧图像的灰度值是否匹配一致,当所述关键帧图像的灰度值和所述第一帧图像的灰度值匹配一致时,确定所述关键帧图像和所述第一帧图像的灰度参数匹配;或者,判断所述关键帧图像的灰度值与所述第一帧图像的灰度值是否小于第三阈值,当所述关键帧图像的灰度值与所述第一帧图像的灰度值小于第三阈值时(如所述第三阈值为20,所述关键帧图像的灰度值为150,所述第一关键帧图像的灰度值为165),确定所述关键帧图像和所述第一帧图像的灰度参数匹配。
其中,当所述关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,可以理解为所述关键帧图像和所述第一帧图像的色系或灰度相同或相近,则确定所述关键帧图像所在位置为所述第二多媒体信息的插入点。在实际应用中,所述第一多媒体信息中的关键帧图像可以包含N个,N为正整数;则确定其中的M(M为正整数且M小于N)个关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,依据预设规则选择P个关键帧图像所在位置所述第二多媒体信息的插入点;其中,P为正整数且P小于M。
采用本发明实施例的技术方案,通过对第一多媒体信息的关键帧图像的识别,以及对第二多媒体信息的第一帧图像的识别,将所述关键帧图像 的第一参数与所述第一帧图像的第二参数匹配的关键帧图像所在位置作为所述第二多媒体信息的插入点,一方面实现了第二多媒体信息的自动控制插入,大大减少了人力成本;另一方面,所述关键帧图像的第一参数与所述第一帧图像的第二参数匹配意味着所述关键帧图像与所述第一帧图像的色彩或灰度相同或近似,使得当在所述关键帧图像所在位置插入所述第二多媒体信息时,用户所观看到的所述第二多媒体信息能够与所述第一多媒体信息在显示效果上达到完美融合,使用户在观看第一多媒体信息时对突然插入的第二多媒体信息不会感到突兀,大大提升了用户的视觉体验。
本发明实施例还提供了一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令配置为执行本发明实施例所述的多媒体信息处理方法。
实施例三
本发明实施例还提供了一种设备。图4为本发明实施例的设备的组成结构示意图;如图4所示,所述设备包括:第一获取单元41、第一分析单元42、第二获取单元43、第二分析单元44、匹配单元45和确定单元46;其中,
所述第一获取单元41,配置为获取第一多媒体信息;
所述第一分析单元42,配置为分析所述第一获取单元41获取的所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像;识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数;
所述第二获取单元43,配置为获取第二多媒体信息;
所述第二分析单元44,配置为分析所述第二获取单元43获取的所述第二多媒体信息,获得所述第二多媒体信息的第一帧图像;识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图 像的显示属性参数;
所述匹配单元45,配置为判断所述第一分析单元42获得的所述第一参数和所述第二分析单元44获得的所述第二参数是否满足预定条件;
所述确定单元46,配置为当所述匹配单元45确定所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。
本实施例中,所述第一多媒体信息具体指视频,且所述第一获取单元41获取到的视频通常为采用特定压缩方式压缩后的视频数据。本实施例中,所述压缩后的视频数据通常为IPB帧数据。具体的,IPB帧数据包括I帧、P帧和B帧。其中,I帧为关键帧,可以理解为处于I帧的画面被完整保留,即解码时只需要本帧数据便可完成解码。P帧为向前预测编码帧,表示当前帧与上一个关键帧(或上一帧)的区别,解码时需要上一个关键帧(或上一帧)的图像数据才能生成本帧图像,即P帧没有完整图像数据。B帧为双向预测编码帧,可以理解为B帧是当前帧与前一帧和后一帧的区别,解码是需要前一帧和后一帧的图像数据才能生成本帧图像。则本实施例中,所述特定压缩方式可以是压缩后包含有关键帧(也可以理解为参考帧)的任一压缩方式(即压缩标准),如MPEG压缩标准等等。基于此,所述第一分析单元42分析所述第一多媒体信息,获得所述第一多媒体信息中的I帧图像。
本实施例中,所述第二多媒体信息具体指广告素材数据,所述广告素材数据可采用视频、GIF或图片格式。则所述第二获取单元43获取到广告素材数据后,分析所述广告素材数据,获得所述广告素材数据中的第一帧图像,具体的,当所述广告素材数据为视频数据时,获得所述视频数据的第一帧图像;当所述广告素材数据为GIF数据时,获得所述GIF的第一张 图片;当所述广告素材数据为一张图片时,则直接获得所述图片。
本实施例中,所述第一参数可以表征所述关键帧图像的色彩参数;所述第二参数可以表征所述第一帧图像的色彩参数;所述色彩参数可通过红绿蓝(RGB)色系表征。具体的,所述第一分析单元42识别所述关键帧图像中每个像素点的RGB值,基于每个像素点的RGB值确定相应像素点在RGB色系表中的色系,统计所述关键帧图像中处于同一色系的像素点的数量,将处于同一色系的数量最多的像素点对应的色系(或者数量大于第一预设阈值的像素点对应的色系)作为所述关键帧图像的色彩参数。所述第二分析单元44获得所述第一帧图像的色彩参数的方式与上述类似,这里不再赘述。
所述第一参数还可以表征所述关键帧图像的灰度参数,所述第二参数可以表征所述第一帧图像的灰度参数;所述灰度参数可通过灰度值表征。具体的,所述设备按预设处理方式处理所述关键帧图像,将所述关键帧图像转换为灰度图像;其中,所述预设处理方式可以为图像二值化处理方式。所述第一分析单元42识别所述灰度图像(处理后的关键帧图像)中每个像素点的灰度值,统计处于同一灰度的像素点的数量(其中,所述处于同一灰度的像素点可以是处于同一灰度值的像素点,也可以是处于同一灰度值范围的像素点),将处于同一灰度的数量最多的像素点(或者数量大于第二预设阈值的像素点对应的灰度)对应的灰度作为所述关键帧图像的灰度参数。所述第二分析单元44获得所述第一帧图像的灰度参数的方式与上述类似,这里不再赘述。
具体的,所述第一参数表征所述关键帧图像的色彩参数或灰度参数;所述第二参数表征所述第一帧图像的色彩参数或灰度参数;
相应的,所述匹配单元45,配置为判断所述关键帧图像和所述第一帧图像的色彩参数或灰度参数是否匹配;当所述关键帧图像和所述第一帧图 像的色彩参数或灰度参数匹配时,确定所述第一参数和所述第二参数满足预定条件。
具体的,当所述第一参数和所述第二参数均通过灰度参数表征时,所述灰度参数可以为灰度值,则所述匹配单元45判断所述关键帧图像和所述第一帧图像的灰度参数是否匹配,包括:判断所述关键帧图像的色系和所述第一帧图像的灰度值是否匹配一致,当所述关键帧图像的灰度值和所述第一帧图像的灰度值匹配一致时,确定所述关键帧图像和所述第一帧图像的灰度参数匹配;或者,判断所述关键帧图像的灰度值与所述第一帧图像的灰度值是否小于第三阈值,当所述关键帧图像的灰度值与所述第一帧图像的灰度值小于第三阈值时(如所述第三阈值为20,所述关键帧图像的灰度值为150,所述第一关键帧图像的灰度值为165),确定所述关键帧图像和所述第一帧图像的灰度参数匹配。当所述第一参数和所述第二参数均通过色彩参数表征时所述匹配单元的匹配方式与上述类似,这里不再赘述。
当所述关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,可以理解为所述关键帧图像和所述第一帧图像的色系或灰度相同或相近,则确定所述关键帧图像所在位置为所述第二多媒体信息的插入点。在实际应用中,所述第一多媒体信息中的关键帧图像可以包含N个,N为正整数;则确定其中的M(M为正整数且M小于N)个关键帧图像的第一参数和所述第一帧图像的第二参数匹配时,依据预设规则选择P个关键帧图像所在位置所述第二多媒体信息的插入点;其中,P为正整数且P小于M。
作为另一种实施方式,所述第一获取单元41,还配置为获取第三参数;所述第三参数表征所述第一多媒体信息相关联的标签信息参数;
所述第二获取单元43,还配置为获取第四参数;所述第四参数表征所述第二多媒体信息相关联的标签信息参数;
相应的,所述匹配单元45,还配置为判断所述第一参数和所述第二参 数是否满足预定条件之前,判断所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数是否匹配一致;确定所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数匹配一致后,进一步判断所述第一参数和所述第二参数是否满足预定条件。
这里,所述第三参数表征所述第一多媒体信息相关联的标签信息参数,所述标签信息参数至少包括:所述第一多媒体信息允许插入的广告类型和/或所述第一多媒体信息所在的频道类型。所述第四参数表征所述第二多媒体信息相关联的标签信息参数,所述标签信息参数包括所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型。当所述设备中存储有海量的第一多媒体信息和海量的第二多媒体信息时,即海量的视频和海量的广告素材。所述匹配单元45基于所述第一多媒体信息的第三参数(即所述第一多媒体信息允许插入的广告类型和/或所述第一多媒体信息所在的频道类型)和所述第二多媒体信息的第四参数(即所述第二多媒体信息允许投放的频道类型和/或所述第二多媒体信息的类型),匹配所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数,即所述设备选择与所述第一多媒体信息的第三参数匹配一致的第二多媒体信息作为待插入的第二多媒体信息,以对海量的广告素材进行筛选,能够实现对广告的定向投放。
作为又一种实施方式,所述第一获取单元41,还配置为获取所述第一多媒体信息相关联的第五参数,所述第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围;
相应的,所述确定单元46,配置为依据所述关键帧图像在所述第一多媒体信息中的位置,以及所述第一获取单元41获取的所述第五参数确定所述第二多媒体信息在所述第一多媒体信息中的插入点。
具体的,所述匹配单元45确定M(M为正整数且M小于N)个关键帧图像的第一参数和所述第一帧图像的第二参数匹配后,所述确定单元46依据上述第五参数中的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围,确定允许插入的第二多媒体信息的数量和/或所述第一多媒体信息中允许插入的第二多媒体信息的时间范围,基于所述允许插入的第二多媒体信息的数量和/或所述第一多媒体信息中允许插入的第二多媒体信息的时间范围选择P个关键帧图像所在位置所述第二多媒体信息的插入点;其中,P值不大于所述允许插入的第二多媒体信息的数量,且所述P个关键帧图像所在位置在所述第一多媒体信息中允许插入的第二多媒体信息的时间范围中。
本领域技术人员应当理解,本发明实施例的设备中各处理单元的功能,可参照前述多媒体信息处理方法的相关描述而理解,本发明实施例的设备中各处理单元,可通过实现本发明实施例所述的功能的模拟电路而实现,也可以通过执行本发明实施例所述的功能的软件在智能终端上的运行而实现。
在本发明实施例中,所述设备中的第一分析单元42、第二分析单元44、匹配单元45和确定单元46,在实际应用中均可由所述设备中的中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)或可编程门阵列(FPGA,Field-Programmable Gate Array)实现;所述设备中的第一获取单元41和第二获取单元43,在实际应用中可由所述设备中的接收机或接收天线实现。
图5为本发明实施例的设备的硬件结构组成示意图,所述设备作为硬件实体的一个示例如图5所示。所述终端包括处理器61、存储介质62以及至少一个外部通信接口63;所述处理器61、存储介质62和至少一个外部 通信接口63均通过总线64连接。
这里需要指出的是:以上涉及设备的描述,与上述方法描述是类似的,同方法的有益效果描述,不做赘述。对于本发明终端实施例中未披露的技术细节,请参照本发明方法实施例的描述。
本发明实施例所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质上实施的计算机程序产品的形式,所述存储介质包括但不限于U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘存储器、CD-ROM、光学存储器等。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。
工业实用性
采用本发明实施例,通过对第一多媒体信息的关键帧图像的识别,以及对第二多媒体信息的第一帧图像的识别,将所述关键帧图像的第一参数与所述第一帧图像的第二参数匹配的关键帧图像所在位置作为所述第二多媒体信息的插入点,一方面实现了第二多媒体信息的自动控制插入,大大减少了人力成本;另一方面,所述关键帧图像的第一参数与所述第一帧图像的第二参数匹配意味着所述关键帧图像与所述第一帧图像的色彩或灰度相同或近似,使得当在所述关键帧图像所在位置插入所述第二多媒体信息时,用户所观看到的所述第二多媒体信息能够与所述第一多媒体信息在显示效果上达到完美融合,使用户在观看第一多媒体信息时对突然插入的第二多媒体信息不会感到突兀,大大提升了用户的视觉体验。

Claims (11)

  1. 一种多媒体信息处理方法,所述方法包括:
    获取并分析第一多媒体信息,获得所述第一多媒体信息的关键帧图像;
    识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数;
    获取并分析第二多媒体信息,获得所述第二多媒体信息的第一帧图像;
    识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数;
    判断所述第一参数和所述第二参数是否满足预定条件,当所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。
  2. 根据权利要求1所述的方法,其中,所述获取第一多媒体信息时,所述方法还包括:获取第三参数;所述第三参数表征所述第一多媒体信息相关联的标签信息参数;
    相应的,所述获取第二多媒体信息时,所述方法还包括:获取第四参数;所述第四参数表征所述第二多媒体信息相关联的标签信息参数。
  3. 根据权利要求2所述的方法,其中,所述判断所述第一参数和所述第二参数是否满足预定条件之前,所述方法还包括:判断所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数是否匹配一致;确定所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数匹配一致后,进一步判断所述第一参数和所述第二参数是否满足预定条件。
  4. 根据权利要求1所述的方法,其中,所述获取第一多媒体信息后,所述方法还包括:获取所述第一多媒体信息相关联的第五参数,所述第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许 插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围;
    相应的,所述依据所述位置确定所述第二多媒体信息的插入点,包括:依据所述关键帧图像在所述第一多媒体信息中的位置,以及所述第五参数确定所述第二多媒体信息在所述第一多媒体信息中的插入点。
  5. 根据权利要求1所述的方法,其中,所述第一参数表征所述关键帧图像的色彩参数或灰度参数;所述第二参数表征所述第一帧图像的色彩参数或灰度参数;
    相应的,所述判断所述第一参数和所述第二参数是否满足预定条件,包括:判断所述关键帧图像和所述第一帧图像的色彩参数或灰度参数是否匹配;当所述关键帧图像和所述第一帧图像的色彩参数或灰度参数匹配时,确定所述第一参数和所述第二参数满足预定条件。
  6. 一种设备,所述设备包括:第一获取单元、第一分析单元、第二获取单元、第二分析单元、匹配单元和确定单元;其中,
    所述第一获取单元,配置为获取第一多媒体信息;
    所述第一分析单元,配置为分析所述第一获取单元获取的所述第一多媒体信息,获得所述第一多媒体信息的关键帧图像;识别所述关键帧图像,获得所述关键帧图像的第一参数;所述第一参数表征所述关键帧图像的显示属性参数;
    所述第二获取单元,配置为获取第二多媒体信息;
    所述第二分析单元,配置为分析所述第二获取单元获取的所述第二多媒体信息,获得所述第二多媒体信息的第一帧图像;识别所述第一帧图像,获得所述第一帧图像的第二参数;所述第二参数表征所述第一帧图像的显示属性参数;
    所述匹配单元,配置为判断所述第一分析单元获得的所述第一参数和 所述第二分析单元获得的所述第二参数是否满足预定条件;
    所述确定单元,配置为当所述匹配单元确定所述第一参数和所述第二参数满足预定条件时,确定所述关键帧图像在所述第一多媒体信息中的位置,依据所述位置确定所述第二多媒体信息的插入点,以在所述插入点插入所述第二多媒体信息。
  7. 根据权利要求6所述的设备,其中,所述第一获取单元,还配置为获取第三参数;所述第三参数表征所述第一多媒体信息相关联的标签信息参数;
    所述第二获取单元,还配置为获取第四参数;所述第四参数表征所述第二多媒体信息相关联的标签信息参数。
  8. 根据权利要求7所述的设备,其中,所述匹配单元,还配置为判断所述第一参数和所述第二参数是否满足预定条件之前,判断所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数是否匹配一致;确定所述第一多媒体信息的第三参数和所述第二多媒体信息的第四参数匹配一致后,进一步判断所述第一参数和所述第二参数是否满足预定条件。
  9. 根据权利要求6所述的设备,其中,所述第一获取单元,还配置为获取所述第一多媒体信息相关联的第五参数,所述第五参数包括以下参数的至少之一:允许插入的第二多媒体信息的数量、允许插入的第二多媒体信息之间的时间间隔、所述第一多媒体信息中不允许插入第二多媒体信息的时间范围;
    所述确定单元,配置为依据所述关键帧图像在所述第一多媒体信息中的位置,以及所述第一获取单元获取的所述第五参数确定所述第二多媒体信息在所述第一多媒体信息中的插入点。
  10. 根据权利要求6所述的设备,其中,所述第一参数表征所述关键帧图像的色彩参数或灰度参数;所述第二参数表征所述第一帧图像的色彩 参数或灰度参数;
    相应的,所述匹配单元,配置为判断所述关键帧图像和所述第一帧图像的色彩参数或灰度参数是否匹配;当所述关键帧图像和所述第一帧图像的色彩参数或灰度参数匹配时,确定所述第一参数和所述第二参数满足预定条件。
  11. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,所述计算机可执行指令配置为执行权利要求1至5任一项所述的多媒体信息处理方法。
PCT/CN2016/077243 2015-04-07 2016-03-24 一种多媒体信息处理方法、设备及计算机存储介质 WO2016161899A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510162015.2A CN104754367A (zh) 2015-04-07 2015-04-07 一种多媒体信息处理方法及设备
CN201510162015.2 2015-04-07

Publications (1)

Publication Number Publication Date
WO2016161899A1 true WO2016161899A1 (zh) 2016-10-13

Family

ID=53593373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/077243 WO2016161899A1 (zh) 2015-04-07 2016-03-24 一种多媒体信息处理方法、设备及计算机存储介质

Country Status (2)

Country Link
CN (1) CN104754367A (zh)
WO (1) WO2016161899A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106470358B (zh) * 2015-08-21 2020-09-25 深圳市天易联科技有限公司 智能电视机的显存图像识别方法及装置
CN106604128B (zh) * 2016-12-30 2019-10-25 中广热点云科技有限公司 一种在网络视频中插入预促销项目的方法和系统
CN110769291B (zh) * 2019-11-18 2022-08-30 上海极链网络科技有限公司 一种视频处理方法、装置、电子设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8861898B2 (en) * 2007-03-16 2014-10-14 Sony Corporation Content image search
CN102547462B (zh) * 2010-12-28 2016-08-17 联想(北京)有限公司 信息推送系统
CN103218734B (zh) * 2013-04-01 2016-09-14 天脉聚源(北京)传媒科技有限公司 一种广告信息的推送方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179739A (zh) * 2007-01-11 2008-05-14 腾讯科技(深圳)有限公司 一种插入广告的方法及装置
CN102884538A (zh) * 2010-04-26 2013-01-16 微软公司 通过内容检测、搜索和信息聚集来丰富在线视频
CN104219559A (zh) * 2013-05-31 2014-12-17 奥多比公司 在视频内容中投放不明显叠加
CN103888785A (zh) * 2014-03-10 2014-06-25 百度在线网络技术(北京)有限公司 信息的提供方法和装置

Also Published As

Publication number Publication date
CN104754367A (zh) 2015-07-01

Similar Documents

Publication Publication Date Title
US10368123B2 (en) Information pushing method, terminal and server
KR102263898B1 (ko) 동적 비디오 오버레이
CN110933490B (zh) 一种画质和音质的自动调整方法、智能电视机及存储介质
CN106658200B (zh) 直播视频分享和获取的方法、装置及其终端设备
CN110121098B (zh) 视频播放方法、装置、存储介质和电子装置
CN110300316B (zh) 视频中植入推送信息的方法、装置、电子设备及存储介质
US11184646B2 (en) 360-degree panoramic video playing method, apparatus, and system
CN109120949B (zh) 视频集合的视频消息推送方法、装置、设备及存储介质
CN105469381B (zh) 一种信息处理方法及终端
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
CN105718861A (zh) 一种识别视频流数据类别的方法及装置
WO2016161899A1 (zh) 一种多媒体信息处理方法、设备及计算机存储介质
CN112448962B (zh) 视频抗锯齿显示方法、装置、计算机设备及可读存储介质
CN115396705B (zh) 投屏操作验证方法、平台及系统
CN112788329A (zh) 视频静帧检测方法、装置、电视及存储介质
US10270872B2 (en) Information pushing method and system
CN117014649A (zh) 视频处理方法、装置及电子设备
CN110619362B (zh) 一种基于感知与像差的视频内容比对方法及装置
CN111343475B (zh) 数据处理方法和装置、直播服务器及存储介质
CN112860941A (zh) 一种封面推荐方法、装置、设备及介质
WO2018107601A1 (zh) 动态演示使用说明的方法、装置及系统
CN110140357A (zh) 用于播放代用广告的电子装置及其控制方法
CN114007133B (zh) 基于视频播放的视频起播封面自动生成方法及装置
CN110784716B (zh) 媒体数据处理方法、装置及介质
CN117640967A (zh) 图像显示方法、图像处理方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16776074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16776074

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 12/04/2018)
