WO2020259676A1 - Information implantation method, apparatus, device and computer storage medium - Google Patents

Information implantation method, apparatus, device and computer storage medium

Info

Publication number
WO2020259676A1
WO2020259676A1 · PCT/CN2020/098462
Authority
WO
WIPO (PCT)
Prior art keywords
information
implanted
preset
video
slope
Application number
PCT/CN2020/098462
Other languages
English (en)
French (fr)
Inventor
生辉
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to EP20831160.5A (EP3993433A4)
Publication of WO2020259676A1
Priority to US17/384,516 (US11854238B2)

Classifications

    • G06T7/13 Edge detection
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06T7/215 Motion-based segmentation
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • H04N21/812 Monomedia components thereof involving advertisement data
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G06T2207/10016 Video; Image sequence

Definitions

  • This application relates to information processing technology in the computer field, and in particular to an information implantation method, device, equipment and computer storage medium.
  • Implanting multimedia information refers to embedding multimedia information, such as three-dimensional models or images of physical objects, onto preset implanted entities, such as desktops and bars, that appear in video information.
  • In the related art, to improve the implantation effect, staff typically implant multiple pieces of multimedia information into the preset implanted entity in a video frame and then, by manual judgment and screening, determine the target multimedia information with the highest degree of fit to the preset implanted entity in the video frame. The intelligence of this manual process is low.
  • In view of this, the embodiments of the present application provide an information implantation method, apparatus, device, and computer storage medium.
  • The information implantation method provided by an embodiment of the application includes:
  • acquiring background tilt information of a preset implanted entity in a preset video frame to be implanted, where the preset video frame to be implanted is the smallest unit used to implant multimedia information in preset video information, and the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted;
  • acquiring at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information, where each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information;
  • acquiring the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information;
  • determining, according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition from the at least one piece of preset multimedia information; and
  • implanting the target multimedia information into the preset implanted entity of the preset video frame to be implanted to obtain a target video frame.
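Read together, the claimed steps form a simple screening loop over the candidate multimedia information. The following Python sketch is only an illustration under our own assumptions: each tilt is summarized by a single slope value, and the "preset tilt difference condition" is taken to be the minimum absolute slope difference; all names are hypothetical and do not appear in the patent.

```python
def select_target_multimedia(background_slope, candidate_slopes):
    """Pick the candidate whose to-be-contacted surface slope best
    matches the bearing-surface slope of the implanted entity.

    background_slope: slope of the bearing surface in the video frame.
    candidate_slopes: dict mapping candidate id -> foreground slope.
    Returns (best candidate id, its tilt difference).
    """
    # Tilt difference between the background tilt and each foreground tilt.
    differences = {cid: abs(slope - background_slope)
                   for cid, slope in candidate_slopes.items()}
    # Preset tilt difference condition, assumed here to be
    # "smallest absolute difference wins".
    best = min(differences, key=differences.get)
    return best, differences[best]

# Example: table edge slope 0.12; three candidate pictures of the
# promoted entity captured from different angles (hypothetical values).
target, diff = select_target_multimedia(
    0.12, {"angle_a": 0.45, "angle_b": 0.10, "angle_c": -0.30})
# target == "angle_b": the picture whose orientation best fits the table.
```

The selected candidate is then the one implanted into the video frame to produce the target video frame.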
  • An embodiment of the application provides an information implantation apparatus, including:
  • a background tilt acquisition part, configured to acquire the background tilt information of a preset implanted entity in a preset video frame to be implanted, the preset video frame to be implanted being the smallest unit used to implant multimedia information in preset video information;
  • a foreground tilt acquisition part, configured to acquire at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information, each piece of foreground tilt information being the tilt information of the to-be-contacted surface of the corresponding preset multimedia information;
  • a tilt difference acquisition part, configured to acquire the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information;
  • a target determination part, configured to determine, according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition from the at least one piece of preset multimedia information; and
  • an implantation part, configured to implant the target multimedia information into the preset implanted entity of the preset video frame to be implanted to obtain a target video frame.
  • An embodiment of the application provides an information implantation device, including:
  • a memory configured to store executable instructions; and
  • a processor configured to execute the executable instructions stored in the memory to implement the information implantation method provided in the embodiments of the present application.
  • An embodiment of the present application provides a computer storage medium storing executable instructions, which are used to cause a processor to execute the information implantation method provided in the embodiments of the present application.
  • The beneficial effects of the embodiments of the present application include at least the following: because the target multimedia information implanted in the video frame is obtained by comparing tilt information, it has a high degree of fit with the preset video frame to be implanted; a process of automatically screening well-fitting target multimedia information is thus realized, so that the multimedia information can be intelligently embedded in the video frame, improving the intelligence of multimedia information implantation.
  • FIGS. 1a-1d are schematic diagrams of exemplary advertisement placement.
  • FIG. 2 is a schematic diagram of an optional architecture of the information implantation system provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of the composition structure of an information implantation server provided by an embodiment of the application.
  • FIG. 4 is an optional flowchart of the information implantation method provided by the embodiment of the application.
  • FIG. 5 is a schematic flowchart of an exemplary information implantation method provided by an embodiment of the application.
  • FIG. 6 is an exemplary schematic diagram of obtaining background tilt information according to an embodiment of the application.
  • FIG. 7 is an exemplary schematic diagram of obtaining foreground tilt information according to an embodiment of the application.
  • FIG. 8 is an exemplary schematic diagram of determining minimum tilt difference information provided by an embodiment of this application.
  • FIG. 9 is an exemplary schematic diagram of determining a target video frame provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of an exemplary beneficial effect analysis provided by an embodiment of this application.
  • FIG. 11 is an exemplary information implantation system architecture provided by an embodiment of this application.
  • Video information: when continuous image changes exceed a predetermined number of frames per second, the human eye, by the principle of persistence of vision, cannot distinguish a single static image and instead perceives a smooth, continuous visual effect; such continuous images constitute video information, for example a single video file or a video clip.
  • Video library: a database for storing video information.
  • Shot: a piece of video continuously captured by a camera in one take; a shot is composed of several video frames and is also called video shot information in the embodiments of the present application.
  • Video frame: the smallest unit of a video, i.e., a static image; for example, when playing video information, the image frozen at any moment is a video frame.
  • Multimedia information: information combining computing and video technology; in the embodiments of this application, multimedia information refers to information to be embedded in video frames, for example advertisement pictures.
  • Implanted entity: a real-world physical object presented in a video frame that is used to carry implanted multimedia information, such as a table or a bar in the video frame.
  • Foreground tilt: corresponding to the implanted multimedia information, the visual tilt information presented by the multimedia information in the corresponding image.
  • Background tilt: corresponding to the implanted entity, the visual tilt information presented by the implanted entity in the corresponding video frame.
  • According to display form, advertisements can be divided into pop-up advertisements (Video-Out) and embedded advertisements (Video-In).
  • A window advertisement is a kind of scene pop-up advertisement: based on content such as cars, faces, targets, and scenes in the video information, a pop-up advertisement related to the video content is displayed.
  • An implanted advertisement is a form of soft advertisement: a flat or physical advertisement is placed on a desktop, wall, photo frame, bar, or billboard within the video frame.
  • Figures 1a-1d are schematic diagrams of exemplary advertisement placement. Figure 1a depicts a scenario in which a milk carton is implanted on a desktop: the left image is the video frame a1 before the milk carton is implanted, and the right image is the video frame a1' after the milk carton is implanted. As shown in the left image of Figure 1a, a table a1-1 stands in video frame a1, with a cup a1-2 and a plate a1-3 placed on it; as shown in the right image, in video frame a1' a milk carton is placed on the table a1-1 in addition to the cup a1-2 and the plate a1-3; here, the milk carton is the implanted advertisement.
  • Figure 1b depicts a scenario in which a three-dimensional model carrying a poster is implanted on the desktop: the left image is the video frame a1 before the model is implanted, and the right image is the video frame a1'' after the model is implanted. The left image of Figure 1b is the same as the left image of Figure 1a; the right image shows that, in video frame a1'', in addition to the cup a1-2 and the plate a1-3, a three-dimensional model a1-5 carrying a poster is placed on the table a1-1; here, the three-dimensional model a1-5 carrying the poster is the advertisement.
  • Figure 1c depicts a scenario in which a poster is implanted in a photo frame: the upper image is the video frame c1 without the poster, and the lower image is the video frame c1' with the poster implanted. As shown in the upper image of Figure 1c, a chandelier c1-1 and photo frames c1-2 and c1-3 are placed in video frame c1; as shown in the lower image, in video frame c1' the photo frame c1-3 also displays a poster c1-4; here, the poster c1-4 is the advertisement.
  • Figure 1d depicts a scenario in which a poster is implanted in a display screen: the upper image is the video frame d1 without the poster, and the lower image is the video frame d1' with the poster implanted. As shown in the upper image of Figure 1d, a display screen d1-2 is placed on the table d1-1 in video frame d1; as shown in the lower image, in video frame d1' the display screen d1-2 also displays a poster d1-3; here, the poster d1-3 is the embedded advertisement.
  • In the related art, a physical advertisement (referring to a promoted entity, such as milk, a car, or a beverage) corresponds to multiple physical pictures taken from different angles. Choosing, from these pictures, the one whose orientation is most similar to that of the implanted entity in the video frame and implanting it as the foreground advertisement picture is currently done manually by experienced designers. For example, an advertiser uploads 30 physical pictures corresponding to one physical advertisement; a designer implants these 30 pictures onto the desktop in the video frame and then judges and screens them manually, and the whole screening process takes 30 minutes. In this way, when multimedia information is implanted into the implanted entity in a video frame, the time cost is high, and the efficiency, degree of automation, and intelligence are low.
  • In view of this, the embodiments of the present application provide an information implantation method, apparatus, device, and computer storage medium. When multimedia information is implanted into an implanted entity in a video frame, the time cost can be reduced, and the implantation efficiency, automation, and intelligence can be improved.
  • The information implantation device provided by the embodiments of the present application can be implemented as various types of user terminals, such as smart phones, tablets, and laptops, or as a server. In the following, an exemplary application in which the information implantation device is implemented as a server will be explained.
  • FIG. 2 is a schematic diagram of an optional architecture of the information implantation system provided by an embodiment of the application. As shown in Figure 2, to support an information implantation application, in the information implantation system 100 the information implantation server 500 connects to the multimedia server 300 and the video server 200 through a network 400; the network 400 may be a wide area network, a local area network, or a combination of the two.
  • The information implantation system 100 also includes a terminal 501 and a database 502, a terminal 201 and a database 202, and a terminal 301 and a database 302. The information implantation server 500 is connected to the terminal 501 and the database 502, the video server 200 is connected to the terminal 201 and the database 202, and the multimedia server 300 is connected to the terminal 301 and the database 302; each of these connections may likewise be over a wide area network, a local area network, or a combination of the two.
  • The terminal 201 is configured to store video information in the database 202 through the video server 200 when a video uploading object (user) uploads the video information.
  • The database 202 is configured to store the video information uploaded through the terminal 201 and the video server 200.
  • The video server 200 is configured to store the video information uploaded by the terminal 201 in the database 202, obtain preset video information from the database 202, and send it to the information implantation server 500 via the network 400.
  • The terminal 301 is configured to allow a multimedia information delivery object (such as an advertiser) to deliver multimedia information corresponding to a promotion entity (such as an advertisement object or a multimedia object), and to store at least one piece of preset multimedia information corresponding to the promotion entity in the database 302 through the multimedia server 300.
  • The database 302 is configured to store the at least one piece of preset multimedia information uploaded through the terminal 301 and the multimedia server 300.
  • The multimedia server 300 is configured to store the at least one piece of preset multimedia information delivered by the terminal 301 in the database 302, obtain the at least one piece of preset multimedia information from the database 302, and send it to the information implantation server 500 via the network 400.
  • The terminal 501 is configured to receive a user's touch operation, generate an information implantation request, and send the information implantation request to the information implantation server 500; and to receive the target video information sent by the information implantation server 500 and play the target video information on the graphical interface.
  • The database 502 is configured to store the target video information processed by the information implantation server 500.
  • The information implantation server 500 is configured to receive the information implantation request sent by the terminal 501 and, in response, obtain preset video information from the database 202 through the video server 200 and obtain at least one piece of preset multimedia information from the database 302 of the multimedia server 300.
  • The server 500 then acquires the background tilt information of the preset implanted entity in the preset video frame to be implanted, where the preset video frame to be implanted is the smallest unit used to implant multimedia information in the preset video information and the background tilt information is the tilt information of the bearing surface of the preset implanted entity; acquires at least one piece of foreground tilt information corresponding to the at least one piece of preset multimedia information, each piece being the tilt information of the to-be-contacted surface of the corresponding preset multimedia information; acquires the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information; determines, according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition from the at least one piece of preset multimedia information; and implants the target multimedia information into the preset implanted entity of the preset video frame to be implanted to obtain the target video frame, thereby obtaining the target video information.
  • FIG. 3 is a schematic diagram of the composition structure of an information implantation server provided by an embodiment of the application.
  • The information implantation server 500 shown in FIG. 3 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530.
  • the various components in the server 500 are coupled together through the bus system 540.
  • The bus system 540 is used to implement connection and communication between these components. In addition to the data bus, the bus system 540 also includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 540 in FIG. 3.
  • The processor 510 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • the user interface 530 includes one or more output devices 531 that enable the presentation of media content, including one or more speakers and/or one or more visual display screens.
  • the user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touch screen display, a camera, and other input buttons and controls.
  • the memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory.
  • the non-volatile memory may be a read only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory).
  • the memory 550 described in the embodiment of the present application is intended to include any suitable type of memory.
  • the memory 550 optionally includes one or more storage devices that are physically remote from the processor 510.
  • the memory 550 can store data to support various operations. Examples of these data include programs, modules, and data structures, or a subset or superset thereof, as illustrated below.
  • the operating system 551 includes system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • The network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
  • The display module 553 is used to enable the presentation of information via one or more output devices 531 (for example, a display screen or speakers) associated with the user interface 530 (for example, a user interface for operating peripheral devices and displaying content and information);
  • the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
  • The information implantation device provided in the embodiments of the present application can be implemented in software.
  • FIG. 3 shows the information implantation device 555 stored in the memory 550, which can be software in the form of programs and plug-ins, including the following software parts: a background tilt acquisition part 5551, a foreground tilt acquisition part 5552, a tilt difference acquisition part 5553, a target determination part 5554, an implantation part 5555, a video frame determination part 5556, a video fusion part 5557, and a video playback part 5558, where the background tilt acquisition part 5551 includes a recognition part 5551-1, an edge acquisition part 5551-2, a contour point screening part 5551-3, a straight line fitting part 5551-4, and a slope acquisition part 5551-5. The functions of each part are explained below.
  • In other embodiments, the information implantation device provided in the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to implement the information implantation method provided in the embodiments of the present application; for example, such a processor may adopt one or more Application Specific Integrated Circuits (ASIC), DSPs, Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), Field-Programmable Gate Arrays (FPGA), or other electronic components.
  • FIG. 4 is an optional flowchart of the information implantation method provided by an embodiment of the application, and will be described in conjunction with the steps shown in FIG. 4.
  • the execution subject in the embodiment of the present application is the information implantation server, which is hereinafter referred to as the implantation device for short.
  • In the implantation device, the delivered video information is the preset video information; from the preset video information, a video frame that is to be implanted with multimedia information and that contains a preset implanted entity is determined, and the preset video frame to be implanted is thereby obtained; the preset implanted entity refers to an entity, obtained in advance by the implantation device, that is used to carry multimedia information.
  • When the implantation device acquires the visual tilt information of the preset implanted entity in the preset video frame to be implanted, it thereby obtains the background tilt information of the preset implanted entity in the preset video frame to be implanted.
  • The preset video frame to be implanted is the smallest unit used to implant multimedia information in the preset video information, that is, one video frame in the preset video information, such as the first video frame or the third video frame; moreover, the preset video frame to be implanted contains image information corresponding to the preset implanted entity, where the preset implanted entity is a physical object with a bearing surface in the preset video frame to be implanted, such as a table or a bar.
  • The background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted; for example, at least one slope of the lower edge of a table, or at least one slope of the lower edge of a bar.
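The parts enumerated for background tilt acquisition (edge acquisition, contour point screening, straight line fitting, slope acquisition) suggest that such a slope can be obtained by fitting a straight line to contour points of the bearing surface's lower edge. A minimal least-squares sketch in plain Python follows; the function name and the sample points are our own illustrative assumptions, not the patent's.

```python
def fit_edge_slope(points):
    """Least-squares fit of y = k*x + b to edge contour points; returns slope k."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Slope of the best-fit line through the contour points.
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

# Contour points sampled along the lower edge of a table in the frame
# (hypothetical pixel coordinates; a real pipeline would obtain them
# from edge detection on the video frame).
lower_edge = [(10, 200), (60, 205), (110, 210), (160, 215), (210, 220)]
slope = fit_edge_slope(lower_edge)  # 0.1: the edge rises 1 px per 10 px
```

In practice the contour points would first be screened (e.g., to keep only the lower edge), matching the contour point screening part described above.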
  • the entity promoted by the multimedia delivery object is a physical object (for example, a milk carton or a three-dimensional model displaying a poster), for a promoted entity, there are corresponding image information of different angles, and the image information of different angles is at least A preset multimedia message.
  • When the implantation device obtains the tilt information of the to-be-carried surface of each piece of preset multimedia information in the at least one piece of preset multimedia information, it obtains at least one piece of foreground tilt information corresponding to the at least one piece of preset multimedia information; that is, each piece of foreground tilt information is the tilt information of the to-be-carried surface of the corresponding piece of preset multimedia information, for example, at least one slope of the lower edge of a milk carton, or at least one slope of the lower edge of a three-dimensional model displaying a poster.
  • At least one piece of preset multimedia information is a picture obtained through picture processing such as matting or masking, which includes only the picture corresponding to the promoted entity.
  • S101 and S102 are executed in no particular order; that is to say, S101 can be executed first and then S102, or S102 can be executed first and then S101, or S101 and S102 can be executed simultaneously, etc.
  • the embodiments of this application do not specifically limit this.
  • After acquiring the background tilt information and the at least one piece of foreground tilt information, the implantation device compares the background tilt information with each piece of foreground tilt information in the at least one piece of foreground tilt information, obtaining the tilt difference between the background tilt information and each piece of foreground tilt information, and thereby obtains the tilt differences between the background tilt information and the at least one piece of foreground tilt information, that is, at least one piece of tilt difference information. In other words, the at least one piece of tilt difference information is the collection of the tilt differences between the background tilt information and each piece of foreground tilt information in the at least one piece of foreground tilt information.
  • S104 Determine, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition.
  • The preset tilt difference condition is preset in the implantation device and is used to determine the multimedia information to be implanted into the preset video frame to be implanted; the at least one piece of tilt difference information corresponds one-to-one with the at least one piece of preset multimedia information. Therefore, after the implantation device obtains the at least one piece of tilt difference information, it judges each piece of tilt difference information against the preset tilt difference condition, and the preset multimedia information corresponding to the tilt difference information that satisfies the preset tilt difference condition is the target multimedia information.
  • Here, the preset tilt difference condition may be that the tilt difference is the smallest.
  • That is, the implantation device determines the target multimedia information that satisfies the preset tilt difference condition from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information as follows: according to the preset tilt difference condition, select the minimum tilt difference information from the at least one piece of tilt difference information; then determine, from the at least one piece of preset multimedia information, the preset multimedia information corresponding to the minimum tilt difference information, obtaining the initial target multimedia information.
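  • The selection step described above amounts to an argmin over the candidate tilt differences. A minimal sketch in Python (the function name and candidate labels are illustrative, not from the patent):

```python
def select_target(multimedia_candidates, tilt_differences):
    """Pick the candidate whose tilt difference is smallest.

    multimedia_candidates: one entry per piece of preset multimedia information
    tilt_differences: one tilt-difference value per candidate (same order)
    """
    # Index of the minimum tilt difference (the preset tilt difference condition)
    best = min(range(len(tilt_differences)), key=tilt_differences.__getitem__)
    return multimedia_candidates[best]

# Example: three candidate cutouts with tilt differences 0.42, 0.07, 0.19
target = select_target(["front", "left", "right"], [0.42, 0.07, 0.19])
# target == "left"
```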
  • The initial target multimedia information can be directly used as the target multimedia information for information implantation processing; to improve the implantation effect of the multimedia information, the initial target multimedia information can also be rendered according to the preset video frame to be implanted to obtain the target multimedia information.
  • That is, the implantation device performs rendering processing on the initial target multimedia information according to the preset video frame to be implanted, so that the difference between the picture display attributes of the rendered preset multimedia information (i.e., the target multimedia information) and the picture display attributes of the preset video frame to be implanted is minimized; this achieves harmony between foreground and background, optimizes the implantation of the target multimedia information, and improves the implantation effect of the target multimedia information.
  • Once the target multimedia information is determined, the object to be implanted is also clarified, so that the target multimedia information can be implanted into the preset video frame to be implanted, completing the information implantation of that video frame. Here, the implantation device implants the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted, and the video frame into which the target multimedia information has been implanted is the target video frame.
  • The implantation position of the target multimedia information on the bearing surface is a preset implantation position. The preset implantation position may be a position on the desktop near the placed objects, the position on the desktop farthest from the key information, the position on the desktop closest to the key information, or any position on the desktop, and so on.
  • In this way, the foreground tilt information with the smallest difference from the background tilt information is filtered out of the at least one piece of foreground tilt information, and the preset multimedia information corresponding to that foreground tilt information is used as the target multimedia information, so that the target multimedia information is implanted into the preset video frame to be implanted; this realizes an information implantation scheme that automatically selects the orientation of the physical object (multimedia object), improving the implantation effect.
  • Further, the process in which the implantation device in S101 obtains the background tilt information of the preset implanted entity in the preset video frame to be implanted includes S101a-S101e:
  • S101a In the preset video frame to be implanted, identify the area where the preset implanted entity is located, and obtain the initial implantation location area.
  • the implantation device recognizes the area of the preset implanted entity in the preset to be implanted video frame.
  • the identified area is the initial implant location area.
  • Here, a preset instance segmentation algorithm, for example Mask R-CNN (Mask Region-based Convolutional Neural Network), can be used to obtain the initial implantation location area: the preset video frame to be implanted is input, and target detection and instance segmentation are performed on it based on the entity feature information of the preset implanted entity.
  • S101b Perform edge processing on the initial implantation location area to obtain the implantation position edge information; the implantation position edge information refers to the edges corresponding to the preset implanted entity in the preset video frame to be implanted.
  • In some embodiments of the present application, the implantation device obtains the implantation position edge information of the initial implantation location area as follows. First, the implantation device filters the implantation location area from the initial implantation location area according to the preset area feature; that is to say, because only the effective location area in the initial implantation location area can be used to carry the preset multimedia information, the implantation device needs to further filter the initial implantation location area, i.e., filter out the effective location area from it according to the preset area feature, thereby obtaining the implantation location area.
  • The preset area feature is the feature of the bearing surface used to carry the preset multimedia information, and the implantation location area is the area information corresponding to the bearing surface of the preset implanted entity.
  • Then, the implantation device selects the implantation location feature area from the implantation location area according to a preset flatness condition; that is, after obtaining the implantation location area, the implantation device removes the flat area from the implantation location area according to the preset flatness condition to obtain the area related to the inclination, thereby filtering out the implantation location feature area. The preset flatness condition means that the flatness of the flat area is greater than the flatness of the area related to the inclination.
  • Here, color patch clustering can be performed on the implantation location area through a preset color patch clustering algorithm; after the color patch clustering, the corresponding flat area and the area related to the inclination are obtained.
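  • The patent does not name a specific clustering algorithm; as one hedged sketch, a two-cluster k-means over pixel colors (with deterministic farthest-point seeding, an assumption for reproducibility) can separate two color patches:

```python
import numpy as np

def cluster_color_patches(pixels, iters=10):
    """Two-cluster k-means over pixel colors; returns one cluster label per pixel."""
    pixels = np.asarray(pixels, dtype=float)
    c0 = pixels[0]
    # Second center: the pixel farthest from the first (deterministic seeding)
    c1 = pixels[np.linalg.norm(pixels - c0, axis=1).argmax()]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old center if a cluster is empty
        for c in range(2):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two well-separated color patches: dark pixels and light pixels
pixels = np.array([[10, 10, 10], [12, 9, 11], [240, 240, 240], [250, 245, 248]])
labels = cluster_color_patches(pixels)
# The two dark pixels share one label, the two light pixels the other
```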
  • Finally, the implantation device performs edge detection on the implantation location feature area to obtain the implantation position edge information; that is, after obtaining the implantation location feature area, the implantation device uses edge detection processing to obtain the edge information of the bearing surface of the preset implanted entity in the preset video frame to be implanted, i.e., the implantation position edge information.
  • Here, a preset edge detection algorithm can be used to perform edge detection on the implantation location feature area; the preset edge detection algorithm is an algorithm used for edge detection, such as the Laplace edge detection algorithm, the Sobel edge detection algorithm, or the Canny (multi-level) edge detection algorithm.
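  • As a sketch of the Sobel variant mentioned above (pure numpy for self-containment; in practice OpenCV's `cv2.Sobel` or `cv2.Canny` would typically be used):

```python
import numpy as np

def sobel_edges(gray, thresh=1.0):
    """Return a boolean edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Valid-region 3x3 correlation (border pixels stay zero)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

# A vertical step edge between a dark left half and a bright right half
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
# Edge pixels appear along columns 2 and 3; flat regions stay False
```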
  • S101c According to a preset edge point threshold, filter the characteristic contour points of each edge in the edge information of the implanted position to obtain at least one characteristic contour point combination.
  • That is, the implantation device screens the edge points in the implantation position edge information according to a preset edge point threshold; since the implantation position edge information corresponds to at least one edge, for each edge of the implantation position edge information the implantation device takes the points whose characteristic value is greater than the preset edge point threshold as characteristic contour points, obtaining the characteristic contour point combination corresponding to each edge, and thereby obtaining the at least one characteristic contour point combination corresponding to the implantation position edge information.
  • The preset edge point threshold is a reference threshold, obtained through adaptive threshold learning, used to determine characteristic contour points; it can be a gray value or another feature value, which is not specifically limited in the embodiment of this application.
  • S101d Perform straight line fitting on at least one combination of characteristic contour points respectively to obtain at least one background fitting straight line information.
  • That is, using a preset straight line fitting algorithm, straight line fitting is performed with each characteristic contour point combination as a unit, and one piece of background fitting straight line information is obtained for each characteristic contour point combination, thereby obtaining the at least one piece of background fitting straight line information corresponding to the at least one characteristic contour point combination.
  • Here, the preset straight line fitting algorithm is an algorithm used for straight line fitting, for example, the RANSAC (Random Sample Consensus) algorithm or the least squares method.
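  • For one contour-point combination, least-squares fitting yields the slope directly; a minimal sketch using `numpy.polyfit` (RANSAC would add outlier rejection on top of this):

```python
import numpy as np

def fit_edge_slope(points):
    """Fit y = m*x + b to contour points by least squares and return the slope m."""
    xs, ys = np.asarray(points, dtype=float).T
    m, _b = np.polyfit(xs, ys, deg=1)
    return m

# Contour points lying on the lower edge of a table, along y = 0.5*x + 2
pts = [(0, 2.0), (2, 3.0), (4, 4.0), (6, 5.0)]
slope = fit_edge_slope(pts)
# slope ≈ 0.5
```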
  • S101e After the implantation device obtains the at least one piece of background fitting straight line information, it obtains the slope corresponding to each piece of background fitting straight line information, thereby obtaining at least one piece of slope information corresponding to the at least one piece of background fitting straight line information; the at least one piece of slope information is the background tilt information.
  • It should be noted that S101a-S101e describe the process of obtaining the background tilt information; the process of obtaining the foreground tilt information corresponding to each piece of preset multimedia information is consistent with it. Moreover, each slope in the background tilt information corresponds to a slope in each piece of foreground tilt information, and the number of slopes in the background tilt information is equal to the number of slopes in each piece of foreground tilt information.
  • When the at least one piece of preset multimedia information consists of pictures containing only the multimedia object, the information in each picture is already the initial implantation location area, and the step corresponding to S101a no longer needs to be executed; otherwise, the step corresponding to S101a needs to be performed to determine the initial implantation location area in the preset multimedia information.
  • Finally, the at least one piece of slope information of the at least one piece of edge information corresponding to the to-be-carried surface of the preset multimedia information constitutes the foreground tilt information; here, the preset multimedia information is picture information of the multimedia object.
  • In some embodiments of the present application, the process in which the implantation device in S103 obtains the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain the at least one piece of tilt difference information includes S103a-S103b:
  • S103a Obtain the difference between each slope information in the background slope information and the corresponding slope information in the current foreground slope information to obtain at least one slope difference information corresponding to the current foreground slope information and the background slope information.
  • That is, the implantation device regards each piece of foreground tilt information in the at least one piece of foreground tilt information as the current foreground tilt information, and subtracts the corresponding piece of slope information in the current foreground tilt information from each piece of slope information in the background tilt information; the resulting difference is the slope difference information corresponding to those two pieces of slope information, and at least one piece of slope difference information corresponding to the current foreground tilt information and the background tilt information is thereby obtained.
  • Here, the current foreground tilt information is any piece of the at least one piece of foreground tilt information, and each piece of slope information in the current foreground tilt information corresponds to one piece of slope information in the background tilt information.
  • S103b After obtaining the at least one piece of slope difference information, the implantation device multiplies the pieces of slope difference information together one by one, and the obtained result is the tilt difference information corresponding to the current foreground tilt information and the background tilt information. Since the current foreground tilt information is any piece of the at least one piece of foreground tilt information, after each piece of foreground tilt information has been used as the current foreground tilt information to obtain its corresponding tilt difference information, the at least one piece of tilt difference information corresponding to the background tilt information and the at least one piece of foreground tilt information is obtained.
  • The tilt difference information between the current foreground tilt information and the background tilt information can be obtained by formula (1), the product of the slope differences over all matched edges:

  Δi = ∏ j=1..Ni |θfij − θbij|,  i = 1, 2, …, M   (1)

  • where M is the number of pieces of preset multimedia information in the at least one piece of preset multimedia information; Ni is the number of pieces of slope information in the foreground tilt information corresponding to the i-th piece of preset multimedia information; j is the index of the slope information; θfij is the j-th piece of slope information of the i-th piece of preset multimedia information; and θbij is the piece of slope information of the preset implanted entity corresponding to the j-th piece of slope information of the i-th piece of preset multimedia information.
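  • The product-of-differences computation for formula (1) can be sketched as follows (taking absolute values of the per-edge differences is an assumption; this excerpt only states that the slope differences are multiplied one by one):

```python
def tilt_difference_product(fg_slopes, bg_slopes):
    """Formula (1) sketch: product of absolute slope differences for one candidate."""
    diff = 1.0
    for f, b in zip(fg_slopes, bg_slopes):
        # Each foreground slope is matched with its corresponding background slope
        diff *= abs(f - b)
    return diff

# Candidate foreground slopes vs. the matching background slopes
d = tilt_difference_product([0.5, -1.0], [0.4, -1.2])
# d = |0.5 - 0.4| * |-1.0 - (-1.2)| ≈ 0.02
```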
  • In some embodiments of the present application, the implantation device acquires the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain the at least one piece of tilt difference information through S103c-S103d:
  • S103c Obtain the ratio of each slope information in the background slope information to the corresponding slope information in the current foreground slope information, and obtain at least one slope ratio information corresponding to the current foreground slope information and the background slope information.
  • That is, the implantation device uses each piece of foreground tilt information in the at least one piece of foreground tilt information as the current foreground tilt information, and divides each piece of slope information in the background tilt information by the corresponding piece of slope information in the current foreground tilt information; the obtained ratio is the slope ratio information corresponding to those two pieces of slope information, thereby obtaining the at least one piece of slope ratio information corresponding to the current foreground tilt information and the background tilt information.
  • Here, the current foreground tilt information is any piece of the at least one piece of foreground tilt information, and each piece of slope information in the current foreground tilt information corresponds to one piece of slope information in the background tilt information.
  • S103d After obtaining the at least one piece of slope ratio information, the implantation device takes the sum of the at least one piece of slope ratio information as the numerator and the number of pieces of slope ratio information as the denominator, and calculates the quotient; the obtained result is the tilt difference information corresponding to the current foreground tilt information and the background tilt information. Since the current foreground tilt information is any piece of the at least one piece of foreground tilt information, after each piece of foreground tilt information has been used as the current foreground tilt information to obtain its corresponding tilt difference information, the at least one piece of tilt difference information corresponding to the background tilt information and the at least one piece of foreground tilt information is obtained.
  • The tilt difference information between the current foreground tilt information and the background tilt information can be obtained by formula (2), the mean of the background-to-foreground slope ratios:

  Δi = (1 / Ni) · Σ j=1..Ni (θbij / θfij),  i = 1, 2, …, M   (2)

  • where the symbols have the same meanings as in formula (1).
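  • The ratio variant of the tilt difference, the mean of background-to-foreground slope ratios described above, can be sketched as follows (how the preset condition compares these values, e.g. closeness to 1, is not spelled out in this excerpt):

```python
def tilt_difference_ratio(fg_slopes, bg_slopes):
    """Formula (2) sketch: mean of background/foreground slope ratios for one candidate."""
    ratios = [b / f for f, b in zip(fg_slopes, bg_slopes)]
    return sum(ratios) / len(ratios)

# Two matched edges: one foreground slope is twice the background's, one matches it
d = tilt_difference_ratio([0.5, 2.0], [0.25, 2.0])
# d = (0.25/0.5 + 2.0/2.0) / 2 = 0.75
```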
  • S103a-S103b and S103c-S103d respectively describe two different implementation manners for obtaining at least one slope difference information.
  • In some embodiments of the present application, before the implantation device acquires the background tilt information of the preset implanted entity in the preset video frame to be implanted, the information implantation method further includes S106-S109:
  • S106 When receiving the information implantation request, obtain preset video information from the preset video library according to the information implantation request.
  • That is, when the implantation device receives the information implantation request requesting that the multimedia object be implanted into the preset video information, it responds to the information implantation request, obtains the requested video name from the information implantation request, obtains the requested video identifier according to the requested video name, and then obtains the corresponding preset video information from the preset video library according to the requested video identifier.
  • the preset video library refers to the image data 502 in FIG. 1 and stores video information.
  • It should be noted that the implantation device can also obtain the preset video information directly from the information implantation request; that is, the preset video information can be uploaded on the terminal side, and the terminal can generate the information implantation request including the preset video information, so the implantation device can obtain the preset video information directly from the information implantation request sent by the terminal.
  • S107 According to a preset lens segmentation algorithm, split the preset video information into video segments by lens; each video segment is one piece of lens information, and the video lens information corresponding to the preset video information is thereby obtained. Here, the preset lens segmentation algorithm is an algorithm used for lens (shot) segmentation.
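  • The patent does not specify the lens segmentation algorithm; a common family of such algorithms thresholds frame-to-frame histogram differences. A minimal sketch under that assumption:

```python
import numpy as np

def split_shots(frames, bins=8, thresh=0.5):
    """Split a frame sequence into lens segments at large histogram jumps.

    Returns a list of (start, end) frame-index pairs, end exclusive.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())  # normalized gray-level histogram
    cuts = [0]
    for i in range(1, len(frames)):
        # L1 distance between consecutive normalized histograms
        if np.abs(hists[i] - hists[i - 1]).sum() > thresh:
            cuts.append(i)
    cuts.append(len(frames))
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

# Three dark frames followed by two bright frames -> two lens segments
dark = np.full((4, 4), 10)
bright = np.full((4, 4), 240)
shots = split_shots([dark, dark, dark, bright, bright])
# shots == [(0, 3), (3, 5)]
```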
  • S108 According to a preset implanted entity detection algorithm, detect the implanted entity in each video frame of each piece of lens information of the video lens information, to obtain the preset implanted entity and the target video lens combination where the preset implanted entity is located.
  • That is, for each video frame in each piece of lens information of the video lens information, the implanted entity is detected according to the preset implanted entity detection algorithm; in this way, the preset implanted entity and the target video lens combination where the preset implanted entity is located can be determined.
  • Here, the implanted entity is a preset designated entity, such as a table; the preset implanted entity belongs to the implanted entities; and the target video lens combination is composed of at least one piece of lens information that includes images corresponding to the preset implanted entity, so the number of pieces of lens information included in the target video lens combination is at least one.
  • In some embodiments of the present application, detecting the implanted entity in each video frame of each piece of lens information of the video lens information to obtain the preset implanted entity and the target video lens combination where the preset implanted entity is located includes S108a-S108c:
  • S108a According to a preset implanted entity detection algorithm, perform implanted entity detection on each video frame in each piece of lens information of the video lens information, to obtain at least one implanted entity and at least one to-be-implanted video lens combination where the at least one implanted entity is located.
  • That is, when the implantation device performs implanted entity detection on the video lens information, it can obtain, from the video lens information, at least one implanted entity and the at least one to-be-implanted video lens combination where the at least one implanted entity is located; for example, when the implanted entity is a table, at least one table and the to-be-implanted video lens combination where each table is located can be detected.
  • each of the video lens combinations to be implanted includes at least one lens information.
  • S108b After the implantation device obtains the at least one implanted entity and the at least one to-be-implanted video lens combination where it is located, since each piece of lens information corresponds to playback time information, the implantation device integrates (for example, by summing) the playback time information corresponding to each of the at least one to-be-implanted video lens combinations to obtain the corresponding time information, thereby obtaining at least one piece of time information corresponding to the at least one to-be-implanted video lens combination.
  • S108c Determine the preset implanted entity from at least one implanted entity according to the at least one time information and the preset implanted time information, and determine the target video where the preset implanted entity is located from the combination of at least one video lens to be implanted Lens combination.
  • For example, each piece of time information in the at least one piece of time information can be compared with the preset implantation time information, and any implanted entity whose time information is greater than the preset implantation time information is used as the preset implanted entity; at the same time, the target video lens combination where the preset implanted entity is located is determined from the at least one to-be-implanted video lens combination.
  • Alternatively, the implanted entity corresponding to the time information in the at least one piece of time information that is closest to the preset implantation time information may be used as the preset implanted entity, and the target video lens combination where the preset implanted entity is located determined from the at least one to-be-implanted video lens combination; and so on — the embodiment of the present application does not specifically limit this.
  • For example, the preset implantation time information is 10 seconds, and the at least one piece of time information is 1 second, 5 seconds, 11 seconds, and 25 seconds, respectively. Since the time information of 11 seconds is closest to the preset implantation time information of 10 seconds, the implanted entity corresponding to the time information of 11 seconds is the preset implanted entity, and the to-be-implanted video lens combination where that implanted entity is located is used as the target video lens combination where the preset implanted entity is located.
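  • The closest-time selection in the example above can be reproduced with a small helper (names are illustrative):

```python
def pick_entity_by_time(entities, times, preset_time):
    """Pick the implanted entity whose time information is closest to the preset time."""
    best = min(range(len(times)), key=lambda i: abs(times[i] - preset_time))
    return entities[best], times[best]

# Candidate entities with on-screen times 1 s, 5 s, 11 s, 25 s; preset time 10 s
entity, t = pick_entity_by_time(["table A", "table B", "table C", "table D"],
                                [1, 5, 11, 25], preset_time=10)
# entity == "table C" (its 11 s is closest to 10 s)
```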
  • S109 Select a video frame from the current video lens information to be implanted, and obtain a preset video frame to be implanted.
  • That is, the implantation device uses each piece of lens information in the target video lens combination as the current to-be-implanted video lens information, and uses any video frame in the current to-be-implanted video lens information as the preset video frame to be implanted.
  • the current video shot information to be implanted is any piece of shot information in the target video shot combination.
  • In some embodiments of the present application, after the implantation device in S105 implants the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain the target video frame, the information implantation method further includes S110 and S112:
  • The implanted video lens information is the current to-be-implanted video lens information after the target multimedia information has been implanted.
  • That is, after the implantation device completes the implantation of the target multimedia information for each piece of lens information in the target video lens combination, it obtains the implanted video lens combination with the target multimedia information implanted.
  • Here, the implanted video lens combination is the combination of lens information in which the target multimedia information has been implanted. The implantation device takes, from the video lens information, the remaining lens information other than the lens information corresponding to the target video lens combination, thereby obtaining the unimplanted video lens combination; that is, the unimplanted video lens combination is the remaining lens information in the video lens information other than the target video lens combination.
  • After obtaining the implanted video lens combination and the unimplanted video lens combination, video fusion is performed on the implanted video lens combination and the unimplanted video lens combination based on the connection relationship between the pieces of lens information in the video lens information, obtaining the target video information.
  • the target video information can also be obtained by replacing the corresponding lens information in the video lens information with at least one implanted video lens combination.
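  • The replacement variant mentioned above is a straightforward splice of the implanted lens information back into the original sequence (the shot labels and indices here are illustrative):

```python
def splice_shots(all_shots, implanted_shots, target_indices):
    """Replace the lens information at target_indices with its implanted version."""
    result = list(all_shots)  # copy so the original sequence is untouched
    for idx, shot in zip(target_indices, implanted_shots):
        result[idx] = shot
    return result

# Shots 1 and 3 carried the implanted table and were re-rendered
video = splice_shots(["s0", "s1", "s2", "s3"], ["s1*", "s3*"], [1, 3])
# video == ["s0", "s1*", "s2", "s3*"]
```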
  • In some embodiments of the present application, the implantation device completing the implantation of the target multimedia information in the current to-be-implanted video lens information according to the target video frame to obtain the implanted video lens information includes S110a-S110d:
  • S110a Determine a motion reference object from the preset video frame to be implanted; the motion reference object is an object on the bearing surface of the preset implanted entity.
  • That is, the bearing surface of the preset implanted entity in the preset video frame to be implanted carries at least one object, such as a cup on the desktop; the implantation device therefore selects one object from the at least one object as the reference object for the position offset between frames in the current to-be-implanted video lens information, or between each frame and a reference video frame, that is, as the motion reference object. Here, the reference video frame may be the preset video frame to be implanted, and the motion reference object is an object on the bearing surface of the preset implanted entity.
  • S110b Obtain the position offset of the motion reference object between frames in the current to-be-implanted video lens information, or between each frame of the current to-be-implanted video lens information and the reference video frame, thereby obtaining the motion track information.
  • S110c Determine, according to the motion track information, at least one target bearing position of the target multimedia information in at least one unembedded video frame.
  • That is, the motion track information records the position offset of the motion reference object between frames in the current to-be-implanted video lens information, or between each frame and the reference video frame; from it, the position offset of the target multimedia information between those frames can be obtained, and based on that position offset, the at least one target bearing position of the target multimedia information corresponding to the bearing surface of the preset implanted entity in at least one unimplanted video frame is determined. Here, the at least one unimplanted video frame consists of the video frames in the current to-be-implanted video lens information other than the preset video frame to be implanted.
  • S110d Based on the at least one target bearing position, implant the target multimedia information onto the bearing surface of the preset implanted entity in the at least one unimplanted video frame to obtain the implanted video lens information.
  • That is, based on the at least one target bearing position, the target multimedia information is implanted onto the bearing surface of the preset implanted entity in the at least one unimplanted video frame, thereby obtaining the implanted video lens information.
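  • Propagating the implantation position by the motion reference object's per-frame offsets can be sketched as follows (the names and the simple translational model are assumptions; the patent only describes position offsets relative to the reference frame):

```python
def propagate_positions(base_pos, reference_offsets):
    """Shift the implantation position by the reference object's per-frame offset.

    base_pos: (x, y) position in the reference (preset to-be-implanted) frame
    reference_offsets: per-frame (dx, dy) offsets of the motion reference object
    relative to the reference frame.
    Returns one target bearing position per unimplanted frame.
    """
    bx, by = base_pos
    return [(bx + dx, by + dy) for dx, dy in reference_offsets]

# The cup on the desktop drifts right by 2 px per frame, then dips down 1 px
positions = propagate_positions((100, 50), [(2, 0), (4, 0), (6, 1)])
# positions == [(102, 50), (104, 50), (106, 51)]
```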
  • In some embodiments of the present application, after the implantation device in S112 performs video fusion on the at least one implanted video lens combination and the at least one unimplanted video lens combination to obtain the target video information, the information implantation method further includes S113:
  • the corresponding video loading request can be received, and in response to the video loading request, the target video information is played through the playback device.
  • the preset implanted entity is a designated table, and the at least one piece of preset multimedia information consists of two beverage-box cutouts of the beverage box at different angles; as shown in FIG. 5, the steps are as follows:
  • beverage box implantation request is an information implantation request
  • video element stream source file is preset video information
  • S202 Perform single-shot video segmentation on the video element stream source file according to the shot segmentation algorithm to obtain video shot information.
  • S203 Use a multi-modal video implantation advertising space detection algorithm to obtain a video lens combination where the designated table is located.
  • the multi-modal video implanted advertising space detection algorithm is a preset implanted entity detection algorithm
  • the video lens combination is the target video lens combination.
  • if it is determined to implant the information on a wall, the obtained video lens combination is the one where the wall is located; if it is determined to implant the information on a photo frame, the obtained video lens combination is the one where the photo frame is located.
  • the area where the table is designated is the initial implantation location area.
  • S205 According to the size of the desktop area of the designated table, mask the area where the designated table is located to obtain an effective desktop area.
  • the size of the desktop area of the designated table is the preset area feature
  • the effective desktop area is the implantation location area.
  • Box a in FIG. 6 shows a preset video frame to be implanted, and the preset video frame to be implanted contains a preset implanted entity: a designated table.
  • the box b in Figure 6 shows the effective desktop area.
  • the flat upper desktop is excluded from the calculation, and only the lower desktop area needs to be considered.
  • here, the condition that the flatness of the upper desktop is greater than the flatness of the table side is the preset flatness condition,
  • and the lower desktop area is the implantation position feature area. Box c in FIG. 6 shows the lower desktop area.
  • S207 Use the Laplace edge detection algorithm to perform edge detection on the lower desktop area to obtain the edge of the lower desktop.
  • the Laplacian edge detection algorithm is a preset edge detection algorithm
  • the edge of the lower desktop is the edge information of the implanted position.
  • Box d in Fig. 6 shows the edge of the lower table top.
  • S208 Determine a preset edge point threshold through adaptive threshold learning, and for each edge in the lower desktop edge information, keep the edge points whose values are greater than the preset edge point threshold, obtaining two feature contour point combinations.
  • here, the two feature contour point combinations are the edge points contained in area 1 and the edge points contained in area 2, and the two feature contour point combinations correspond to the at least one feature contour point combination.
  • here, the two pieces of background fitting line information can be modeled as formula (3): y 1 = β 1 x 1 + ε 1 and y 2 = β 2 x 2 + ε 2 (3);
  • where y 1 and y 2 are dependent variables, x 1 and x 2 are independent variables,
  • β 1 and β 2 are the two slopes corresponding to the two background fitting line information,
  • and ε 1 and ε 2 are constants;
  • formula (3) is also called the modeled representation of the background tilt information.
  • the box f in Fig. 6 shows the background fitting straight line information corresponding to the two fitted edges 61 and 62.
  • the process of obtaining the two foreground inclination information corresponding to the two beverage-box cutouts is similar to the aforementioned steps for obtaining the background inclination information; for each beverage-box cutout there are two corresponding slopes, and the two slopes of each beverage-box cutout correspond one-to-one to the two slopes of the designated table.
  • Figure 7 shows two foreground tilt information obtained through edge extraction and edge fitting.
  • the foreground tilt information corresponding to each beverage-box cutout includes two slopes: the slope corresponding to side 71 and the slope corresponding to side 72 for one cutout, and the slope corresponding to side 73 and the slope corresponding to side 74 for the other.
  • the side 71 and the side 73 both correspond to the edge 61 in FIG. 6, and the side 72 and the side 74 both correspond to the edge 62 in FIG. 6.
  • the minimum slope difference information is then selected from the two slope difference information.
  • 8-1 in FIG. 8 shows the scene of obtaining the slope difference information between one of the two beverage-box cutouts and the designated table,
  • and 8-2 shows the scene of obtaining the slope difference information between the other beverage-box cutout and the designated table;
  • the selection result 8-3 is then obtained; it is easy to see that 8-3 corresponds to the posture with the minimum slope difference information.
  • the video lens that has been implanted in the beverage box is the target video lens
  • the video lens combination that has been implanted in the beverage box is the implanted video lens combination
  • and the video lens combination without the beverage box is the unimplanted video lens combination.
  • the lower edge line fitting is first performed on the designated table, as shown by 9-1 in FIG. 9; then the bottom edge fitting 9-2 is performed on the real objects (the two beverage-box cutouts);
  • the edge line slope differences are then calculated by combining the bottom edge fitting results of 9-1 and 9-2, and finally the orientation selection result 9-3 is obtained from the calculation; here, 9-3 refers to the target video frame.
  • using the information implantation method provided in the embodiments of this application can completely replace the designer's manual determination of target multimedia information, saving labor cost; at the same time, compared with manual determination by a designer, the duration can be reduced from 30 minutes to 1 minute, saving time cost.
  • the information implantation method provided by the embodiments of this application can be applied to the advertisement implantation scenario, and the corresponding beneficial effects of advertisement implantation are shown in FIG. 10: on the one hand, the advertisement cannot be skipped and is visible to members,
  • so the reach rate is high (10-11); on the other hand, advertisers do not need to bet on which drama will succeed, so the risk of advertising investment is small (10-12); furthermore, implanted advertisements are divided into groups, so the budget cost is low (10-13); finally, for video providers, the value of the information is higher (10-14).
  • as shown in FIG. 11, the information implantation is realized by integrating the video platform 11-1 and the advertising system 11-2, and the resulting implanted advertisement represents the trend of advertising development; here, the video
  • platform 11-1 corresponds to the example system composed of the terminal 201, the database 202, and the video server 200 in FIG. 2,
  • and the advertising system 11-2 corresponds to the example system composed of the terminal 301, the database 302, and the multimedia server 300 in FIG. 2.
  • the software module in the information implantation device 555 stored in the memory 550 can include:
  • the background tilt acquisition part 5551 is configured to acquire the background tilt information of the preset implanted entity in the preset video frame to be implanted; the preset video frame to be implanted is the smallest unit for implanting
  • multimedia information in the preset video information, and the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted;
  • the foreground tilt acquisition part 5552 is configured to acquire at least one foreground tilt information corresponding to at least one preset multimedia information; each foreground tilt information in the at least one foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information;
  • An inclination difference acquisition part 5553 configured to acquire the inclination difference between the background inclination information and the at least one foreground inclination information to obtain at least one inclination difference information;
  • the target determination part 5554 is configured to determine, from the at least one preset multimedia information, target multimedia information that meets a preset tilt difference condition according to the at least one tilt difference information;
  • the implantation part 5555 is configured to implant the target multimedia information into the preset implantation entity of the preset video frame to be implanted to obtain a target video frame.
  • the background tilt acquisition part 5551 includes an identification part 5551-1, an edge acquisition part 5551-2, a contour point screening part 5551-3, a straight line fitting part 5551-4, and a slope acquisition part 5551-5;
  • the identification part 5551-1 is configured to identify the area where the preset implanted entity is located in the preset to-be-implanted video frame to obtain the corresponding initial implantation location area;
  • the edge obtaining part 5551-2 is configured to obtain the edge information of the implantation position of the initial implantation position area
  • the contour point screening part 5551-3 is configured to screen the characteristic contour points of each edge in the implant position edge information according to a preset edge point threshold to obtain at least one characteristic contour point combination;
  • the straight line fitting part 5551-4 is configured to respectively perform straight line fitting on the at least one combination of characteristic contour points to obtain at least one background fitting straight line information;
  • the slope acquisition part 5551-5 is configured to use at least one slope information corresponding to the at least one background fitting straight line information as the background tilt information.
  • the edge acquisition part 5551-2 is further configured to filter the implantation location area from the initial implantation location area according to the preset area characteristics; according to the preset flatness The condition is to filter the implantation location feature region from the implantation location region; perform edge detection on the implantation location feature region to obtain the implantation location edge information.
  • the slope difference acquisition part 5553 is further configured to acquire the difference between each slope information in the background slope information and the corresponding slope information in the current foreground slope information, obtaining at least one slope difference value corresponding to the current foreground slope information and the background slope information; the current foreground slope information is any one of the at least one foreground slope information, and each slope information in it corresponds to one slope information in the background slope information; the product of the at least one slope difference value is then obtained as the slope difference information corresponding to the current foreground slope information and the background slope information, thereby obtaining the at least one slope difference information corresponding to the background slope information and the at least one foreground slope information.
  • in some embodiments, the slope difference acquisition part 5553 is further configured to acquire the ratio of each slope information in the background slope information to the corresponding slope information in the current foreground slope information, obtaining at least one slope ratio corresponding to the current foreground slope information and the background slope information; the sum of the at least one slope ratio divided by the number of the at least one slope ratio is then obtained as the slope difference information corresponding to the current foreground slope information and the background slope information, thereby obtaining the at least one slope difference information corresponding to the background slope information and the at least one foreground slope information.
  • the information implantation device 555 further includes a video frame determination part 5556; the video frame determination part 5556 is configured to, when an information implantation request is received, obtain the preset video information from the preset video library according to the information implantation request; segment the preset video information by shots to obtain the video shot information; perform implanted entity detection on each video frame in each shot information of the video shot information according to the preset implanted entity detection algorithm to obtain the preset implanted entity and the target video lens combination where the preset implanted entity is located; and select a video frame from the current video lens information to be implanted to obtain the preset video frame to be implanted, where the current video lens information to be implanted is any lens information in the target video lens combination.
  • the video frame determination part 5556 is further configured to perform implanted entity detection on each video frame in each shot information of the video shot information according to the preset implanted entity detection algorithm to obtain at least one implanted entity and at least one to-be-implanted video lens combination where the at least one implanted entity is located; obtain at least one piece of time information corresponding to the at least one to-be-implanted video lens combination; and determine, according to the at least one piece of time information and the preset implantation time information, the preset implanted entity from the at least one implanted entity and, from the at least one to-be-implanted video lens combination, the target video lens combination where the preset implanted entity is located.
  • the information implantation device 555 further includes a video fusion part 5557; the video fusion part 5557 is configured to, according to the target video frame, complete the implantation of the target multimedia information in the current video lens information to be implanted to obtain the implanted video lens information, until the implantation of the target multimedia information in each lens information of the target video lens combination is completed, obtaining the implanted video lens combination;
  • obtain the unimplanted video lens combination from the video lens information, where the unimplanted video lens combination is the lens information in the video lens information other than the target video lens combination;
  • and perform video fusion on the implanted video lens combination and the unimplanted video lens combination to obtain the target video information.
  • the video fusion part 5557 is further configured to determine a motion reference object from the preset video frames to be implanted, and the motion reference object is the preset implantation Objects on the physical bearing surface; obtain the motion trajectory information of the motion reference object in the current video lens information to be implanted; according to the motion trajectory information, determine that the target multimedia information is in at least one unimplanted video frame The at least one target bearing position of the target; the at least one unimplanted video frame is the remaining video frame in the current to-be-implanted video lens information except for the preset to-be-implanted video frame; based on the at least one target The bearing position, the target multimedia information is implanted into the bearing surface of the at least one preset implanted entity without the implanted video frame, to obtain the implanted video lens information.
  • the target determination part 5554 is further configured to select minimum inclination difference information from the at least one inclination difference information according to the preset inclination difference condition; In the at least one preset multimedia information, the preset multimedia information corresponding to the minimum tilt difference information is determined to obtain the initial target multimedia information; according to the preset video frames to be implanted, the initial target multimedia information Perform rendering processing to obtain the target multimedia information.
  • the information implantation device 555 further includes a video playback part 5558, the video playback part 5558 is configured to receive a video loading request, according to the video loading request, through The playback device plays the target video information.
  • if the integrated part described in the embodiments of the present application is implemented in the form of a software function part and sold or used as an independent product, it may also be stored in a computer storage medium.
  • such a computer storage medium includes a USB (Universal Serial Bus) flash disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, a CD-ROM, an optical storage device, and the like.
  • an embodiment of the present application further provides a computer storage medium in which computer-executable instructions are stored, and the computer-executable instructions are executed by a processor to implement the information implantation method of the embodiments of the present application.
  • executable instructions may be in the form of programs, software, software modules, scripts or codes, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and their It can be deployed in any form, including being deployed as an independent program or deployed as a module, component, subroutine or other unit suitable for use in a computing environment.
  • executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data;
  • for example, they may be stored in one or more scripts in a HyperText Markup Language (HTML) document,
  • in a single file dedicated to the program in question, or in multiple coordinated files (for example, files storing one or more modules, subroutines, or code parts).
  • executable instructions can be deployed to be executed on one computing device, or on multiple computing devices located in one location, or on multiple computing devices that are distributed in multiple locations and interconnected by a communication network Executed on.
  • the embodiments of the present application obtain the corresponding tilt difference information by comparing the background tilt information of the preset implanted entity in the preset video frame to be implanted with the foreground tilt information of each preset multimedia information;
  • therefore, according to the tilt difference information, the target multimedia information with the highest degree of fit with the preset video frame to be implanted is determined from the at least one preset multimedia information, realizing a process of automatically screening high-fit multimedia information;
  • thus, when the implantation of the multimedia information is completed based on the target multimedia information, the multimedia information can be intelligently implanted into the video frame;
  • in this way, the intelligence of multimedia information implantation can be improved.
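The two tilt-difference measures described in the device embodiments above (a product of per-slope differences, and an average of per-slope ratios) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and the sample slope values are invented for the example, and each piece of tilt information is assumed to be a list of slopes paired index-by-index between background and foreground.

```python
# Hedged sketch of the two slope-difference measures; names and data are illustrative.

def slope_difference_by_product(bg_slopes, fg_slopes):
    """First variant: product of per-slope absolute differences."""
    diffs = [abs(b - f) for b, f in zip(bg_slopes, fg_slopes)]
    result = 1.0
    for d in diffs:
        result *= d
    return result

def slope_difference_by_ratio(bg_slopes, fg_slopes):
    """Second variant: sum of per-slope ratios divided by their count."""
    ratios = [b / f for b, f in zip(bg_slopes, fg_slopes)]
    return sum(ratios) / len(ratios)

bg = [0.50, -1.20]                           # slopes of the two fitted table edges
candidates = [[0.48, -1.10], [0.90, -0.40]]  # slopes of each beverage-box cutout
diffs = [slope_difference_by_product(bg, fg) for fg in candidates]
best = min(range(len(candidates)), key=lambda i: diffs[i])
print(best)  # index of the cutout with the smallest tilt difference: 0
```

The candidate with the minimum difference value then plays the role of the (initial) target multimedia information.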

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this application provide an information implantation method, apparatus, server, and computer storage medium. The method includes: acquiring background tilt information of a preset implanted entity in a preset video frame to be implanted; acquiring at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information, where each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information; acquiring the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information, and determining, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition; and implanting the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain a target video frame.

Description

Information implantation method, apparatus, device, and computer storage medium
Cross-reference to related applications
This application is filed on the basis of, and claims priority to, Chinese patent application No. 201910569777.2 filed on June 27, 2019, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to information processing technology in the computer field, and in particular to an information implantation method, apparatus, device, and computer storage medium.
背景技术
在互联网的视频信息播放场景中,除了播放自身的视频信息外,还有展示多媒体信息的需求;一般来说,该多媒体信息的展示形式主要包括植入多媒体信息和弹出多媒体信息两种形式;其中,植入多媒体信息是指在视频信息中的桌面和台面等预设植入实体上植入三维模型或者实物等多媒体信息的形式。
针对植入多媒体信息的实现,为了提高植入效果,通常由工作人员将多个多媒体信息分别植入到视频帧中的预设植入实体上,并通过人工判图筛选来确定与视频帧中预设植入实体切合度最高的目标多媒体信息。然而,上述多媒体信息植入的过程中,由于是通过人工方式实现的,智能性低。
Summary
The embodiments of this application provide an information implantation method, apparatus, device, and computer storage medium.
The technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides an information implantation method, including:
acquiring background tilt information of a preset implanted entity in a preset video frame to be implanted, where the preset video frame to be implanted is the smallest unit for implanting multimedia information in preset video information, and the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted;
acquiring at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information, where each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information;
acquiring the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information;
determining, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition; and
implanting the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain a target video frame.
An embodiment of this application provides an information implantation apparatus, including:
a background tilt acquisition part, configured to acquire background tilt information of a preset implanted entity in a preset video frame to be implanted, where the preset video frame to be implanted is the smallest unit for implanting multimedia information in preset video information, and the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted;
a foreground tilt acquisition part, configured to acquire at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information, where each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information;
a tilt difference acquisition part, configured to acquire the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information;
a target determination part, configured to determine, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition; and
an implantation part, configured to implant the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain a target video frame.
An embodiment of this application provides an information implantation device, including:
a memory, configured to store executable instructions; and
a processor, configured to implement the information implantation method provided in the embodiments of this application when executing the executable instructions stored in the memory.
An embodiment of this application provides a computer storage medium storing executable instructions which, when executed by a processor, implement the information implantation method provided in the embodiments of this application.
The beneficial effects of the embodiments of this application at least include: since the target multimedia information implanted into the video frame is obtained by comparing tilt information, the target multimedia information fits the preset video frame to be implanted well; therefore, a process of automatically screening target multimedia information with a high degree of fit is realized; thus, when the multimedia information implantation is completed based on the target multimedia information, the multimedia information can be intelligently implanted into the video frame; in this way, the intelligence of multimedia information implantation can be improved.
Brief description of the drawings
FIGS. 1a-1d are schematic diagrams of exemplary implanted advertisements;
FIG. 2 is a schematic diagram of an optional architecture of the information implantation system provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the composition structure of the information implantation server provided by an embodiment of this application;
FIG. 4 is an optional schematic flowchart of the information implantation method provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of an exemplary information implantation method provided by an embodiment of this application;
FIG. 6 is a schematic diagram of exemplarily acquiring background tilt information provided by an embodiment of this application;
FIG. 7 is a schematic diagram of exemplarily acquiring foreground tilt information provided by an embodiment of this application;
FIG. 8 is a schematic diagram of exemplarily determining minimum tilt difference information provided by an embodiment of this application;
FIG. 9 is a schematic diagram of exemplarily determining a target video frame provided by an embodiment of this application;
FIG. 10 is an exemplary schematic diagram of beneficial-effect analysis provided by an embodiment of this application;
FIG. 11 is an exemplary information implantation system architecture provided by an embodiment of this application.
Detailed description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present invention, and all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
In the following description, "some embodiments" describes a subset of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and they can be combined with each other without conflict.
Unless otherwise defined, all technical and scientific terms used in the embodiments of this application have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the embodiments of this application are only for the purpose of describing the embodiments of this application and are not intended to limit this application.
Before describing the embodiments of this application in detail, the nouns and terms involved in the embodiments of this application are explained; they apply to the following interpretations.
1) Video information: when continuous images change at more than a predetermined number of frames per second, according to the principle of persistence of vision, the human eye cannot distinguish a single static picture, and the images appear as a smooth, continuous visual effect; such continuous images constitute video information, for example, a single video file or a video clip.
2) Video library: a database used to store video information.
3) Shot: a segment of video continuously captured by a camera in one take; a shot consists of several video frames and is also called video lens information in the embodiments of this application.
4) Video frame: the smallest unit of a video, which is a static image; for example, when video information is played, the picture frozen at any moment is a video frame.
5) Multimedia information: the combination of computer and video technology; in the embodiments of this application, multimedia information refers to the information to be implanted into a video frame, for example, an advertisement picture.
6) Implanted entity: a real-world object presented in a video frame that is used to carry implanted multimedia information, for example, a table or a bar counter in a video frame.
7) Foreground tilt: corresponding to the implanted multimedia information, it refers to the visual tilt information that the multimedia information presents in the corresponding image.
8) Background tilt: corresponding to the implanted entity, it refers to the visual tilt information that the implanted entity presents in the corresponding video frame.
It should be noted that when the multimedia information displayed in video information is an advertisement, advertisements can be divided into pop-up advertisements (Video-Out) and implanted advertisements (Video-In) according to their display forms. A pop-up advertisement is a scene-based pop-up advertisement, which displays pop-up advertisements related to the video content based on content such as cars, faces, objects, and scenes in the video information; an implanted advertisement is a soft advertisement form, which implants flat or physical advertisements at positions such as desktops, walls, photo frames, bar counters, and billboards in video frames.
FIGS. 1a-1d are schematic diagrams of exemplary implanted advertisements. FIG. 1a shows a scene of implanting a milk carton on a desktop: the left figure shows the video frame a1 before the milk carton is implanted, and the right figure shows the video frame a1' with the milk carton implanted. As shown in the left figure of FIG. 1a, a table a1-1 is placed in the video frame a1, with a cup a1-2 and a plate a1-3 on the table a1-1; as shown in the right figure of FIG. 1a, on the table a1-1 in the video frame a1', besides the cup a1-2 and the plate a1-3, a milk carton a1-4 is also placed; here, the milk carton a1-4 is the implanted advertisement.
FIG. 1b shows a scene of implanting a three-dimensional model carrying a poster on a desktop: the left figure shows the video frame a1 before the model is implanted, and the right figure shows the video frame a1'' with the model implanted. The left figure of FIG. 1b is the same as the left figure of FIG. 1a; the right figure of FIG. 1b shows that on the table a1-1 in the video frame a1'', besides the cup a1-2 and the plate a1-3, a three-dimensional model a1-5 carrying a poster is also placed; here, the three-dimensional model a1-5 carrying a poster is the implanted advertisement.
FIG. 1c shows a scene of implanting a poster in a photo frame: the upper figure shows the video frame c1 without the poster, and the lower figure shows the video frame c1' with the poster implanted. As shown in the upper figure of FIG. 1c, a chandelier c1-1, a wall c1-2, and a photo frame c1-3 are present in the video frame c1; as shown in the lower figure of FIG. 1c, in the video frame c1', besides the chandelier c1-1, the wall c1-2, and the photo frame c1-3, a poster c1-4 is displayed in the photo frame c1-3; here, the poster c1-4 is the implanted advertisement.
FIG. 1d shows a scene of implanting a poster on a display screen: the upper figure shows the video frame d1 without the poster, and the lower figure shows the video frame d1' with the poster implanted. As shown in the upper figure of FIG. 1d, a display screen d1-2 is placed on the table d1-1 in the video frame d1; as shown in the lower figure of FIG. 1d, in the video frame d1', a poster d1-3 is displayed on the display screen d1-2 placed on the table d1-1; here, the poster d1-3 is the implanted advertisement.
For the above implanted advertisements, a physical advertisement (the promoted entity, for example, milk, a car, or a beverage) generally corresponds to multiple physical pictures taken at different angles; the process of selecting, from these pictures, the one whose orientation is close to that of the implanted entity in the video frame as the foreground picture for advertisement implantation is currently completed manually by experienced designers. For example, when implanting multimedia information such as a physical advertisement on a desktop, an advertiser uploads 30 physical pictures for one physical advertisement; the designer implants these 30 pictures onto the desktop in the video frame and then screens them by manual image inspection, and the whole process takes 30 minutes. Therefore, when multimedia information is implanted onto the implanted entity in a video frame in this way, the time cost is high and the efficiency, automation, and intelligence are low.
On this basis, the embodiments of this application provide an information implantation method, apparatus, device, and computer storage medium, which can reduce the time cost and improve the implantation efficiency, automation, and intelligence when implanting multimedia information onto an implanted entity in a video frame.
Exemplary applications of the information implantation device provided by the embodiments of this application are described below. The information implantation device provided by the embodiments of this application can be implemented as various types of user terminals such as smartphones, tablets, and notebook computers, or can be implemented as a server. The following describes an exemplary application in which the information implantation device is implemented as a server.
Referring to FIG. 2, FIG. 2 is a schematic diagram of an optional architecture of the information implantation system provided by an embodiment of this application. As shown in FIG. 2, to support an information implantation application, in the information implantation system 100, the information implantation server 500 is connected to the multimedia server 300 and the video server 200 through the network 400; the network 400 may be a wide area network or a local area network, or a combination of the two. In addition, the information implantation system 100 further includes a terminal 501, a database 502, a terminal 201, a database 202, a terminal 301, and a database 302; the information implantation server 500 is connected to the terminal 501 and the database 502 respectively, the video server 200 is connected to the terminal 201 and the database 202 respectively, and the multimedia server 300 is connected to the terminal 301 and the database 302 respectively; the networks corresponding to these connections may also be wide area networks or local area networks, or a combination of the two.
The terminal 201 is configured to, when a video uploading object (user) uploads video information, store the video information in the database 202 through the video server 200.
The database 202 is configured to store the video information uploaded through the terminal 201 and the video server 200.
The video server 200 is configured to store the video information uploaded by the terminal 201 in the database 202, obtain preset video information from the database 202, and send it to the information implantation server 500 through the network 400.
The terminal 301 is configured to, when a multimedia information delivery object (for example, an advertiser) delivers multimedia information corresponding to a promoted entity (for example, an advertised object or multimedia object), store at least one piece of preset multimedia information corresponding to the promoted entity in the database 302 through the multimedia server 300.
The database 302 is configured to store the at least one piece of preset multimedia information uploaded through the terminal 301 and the multimedia server 300.
The multimedia server 300 is configured to store the at least one piece of preset multimedia information delivered by the terminal 301 in the database 302, obtain the at least one piece of preset multimedia information from the database 302, and send it to the information implantation server 500 through the network 400.
The terminal 501 is configured to receive a user's touch operation, generate an information implantation request, and send the information implantation request to the information implantation server 500; and to receive the target video information sent by the information implantation server 500 and play the target video information on a graphical interface.
The database 502 is configured to store the target video information obtained through the processing of the information implantation server 500.
The information implantation server 500 is configured to receive the information implantation request sent by the terminal 501, and in response to the information implantation request, obtain the preset video information from the database 202 through the video server 200 and obtain the at least one piece of preset multimedia information from the database 302 through the multimedia server 300; acquire the background tilt information of the preset implanted entity in the preset video frame to be implanted, where the preset video frame to be implanted is the smallest unit for implanting multimedia information in the preset video information, and the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted; acquire at least one piece of foreground tilt information corresponding to the at least one piece of preset multimedia information, where each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information; acquire the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information; determine, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets the preset tilt difference condition; and implant the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain a target video frame, thereby obtaining target video information corresponding to the preset video information and storing the target video information in the database 502. Finally, when a video loading request is received, in response to the video loading request, the target video information is obtained from the database 502 and sent to the terminal 501, so as to play the target video information on the graphical interface of the terminal 501.
Referring to FIG. 3, FIG. 3 is a schematic diagram of the composition structure of the information implantation server provided by an embodiment of this application. The information implantation server 500 shown in FIG. 3 includes at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The components in the server 500 are coupled together through a bus system 540. It can be understood that the bus system 540 is used to realize connection and communication between these components. In addition to a data bus, the bus system 540 also includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 540 in FIG. 3.
The processor 510 may be an integrated circuit chip with signal processing capability, for example, a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 that enable the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 further includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touch screen display, a camera, and other input buttons and controls.
The memory 550 includes volatile memory or non-volatile memory, and may also include both. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 550 described in the embodiments of this application is intended to include any suitable type of memory. The memory 550 optionally includes one or more storage devices physically located away from the processor 510.
In some embodiments, the memory 550 can store data to support various operations; examples of such data include programs, modules, and data structures or subsets or supersets thereof, as exemplified below.
An operating system 551, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and handle hardware-based tasks;
a network communication module 552, used to reach other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like;
a display module 553, used to enable the presentation of information (for example, user interfaces for operating peripheral devices and displaying content and information) via one or more output devices 531 (for example, display screens and speakers) associated with the user interface 530;
an input processing module 554, used to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the information implantation apparatus provided by the embodiments of this application can be implemented in software. FIG. 3 shows the information implantation apparatus 555 stored in the memory 550, which may be software in the form of programs and plug-ins, including the following software parts: a background tilt acquisition part 5551, a foreground tilt acquisition part 5552, a tilt difference acquisition part 5553, a target determination part 5554, an implantation part 5555, a video frame determination part 5556, a video fusion part 5557, and a video playback part 5558, where the background tilt acquisition part 5551 includes an identification part 5551-1, an edge acquisition part 5551-2, a contour point screening part 5551-3, a line fitting part 5551-4, and a slope acquisition part 5551-5; the functions of the parts are described below.
In other embodiments, the information implantation apparatus provided by the embodiments of this application can be implemented in hardware. As an example, the information implantation apparatus provided by the embodiments of this application may be a processor in the form of a hardware decoding processor, programmed to perform the information implantation method provided by the embodiments of this application; for example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic elements.
The information implantation method provided by the embodiments of this application is described below in conjunction with exemplary applications and implementations of the information implantation server provided by the embodiments of this application.
Referring to FIG. 4, FIG. 4 is an optional schematic flowchart of the information implantation method provided by an embodiment of this application, described with reference to the steps shown in FIG. 4. In addition, the execution subject in the embodiments of this application is the information implantation server, hereinafter referred to as the implantation device.
S101. Acquire background tilt information of a preset implanted entity in a preset video frame to be implanted.
In the embodiments of this application, when a multimedia delivery object such as an advertising agency or an advertiser promotes an entity and delivers multimedia information in determined preset video information, the delivered video information is the preset video information; the implantation device determines, from the preset video information, a video frame into which multimedia information is to be implanted and which contains the preset implanted entity, thereby obtaining the preset video frame to be implanted; here, the preset implanted entity refers to the entity, obtained in advance by the implantation device, that is used to carry the multimedia information.
Next, to determine the implantation position of the multimedia information in the preset video frame to be implanted, the implantation device acquires the visual tilt information presented by the preset implanted entity in the preset video frame to be implanted, thereby obtaining the background tilt information of the preset implanted entity in the preset video frame to be implanted.
It should be noted that the preset video frame to be implanted is the smallest unit for implanting multimedia information in the preset video information, that is, one video frame in the preset video information, for example, the first video frame or the third video frame; moreover, the preset video frame to be implanted contains image information corresponding to the preset implanted entity, where the preset implanted entity is a physical object with a bearing surface in the preset video frame to be implanted, for example, a table or a bar counter. In addition, the background tilt information is the tilt information of the bearing surface of the preset implanted entity in the preset video frame to be implanted, for example, at least one slope of the lower edge of a table, or at least one slope of the lower edge of a bar counter.
S102. Acquire at least one piece of foreground tilt information corresponding to at least one piece of preset multimedia information.
It should be noted that since the entity promoted by the multimedia delivery object is a physical object (for example, a milk carton or a three-dimensional model displaying a poster), each promoted entity corresponds to picture information at different angles, and this picture information at different angles is the at least one piece of preset multimedia information.
In the embodiments of this application, the implantation device acquires the tilt information of the to-be-contacted surface of each piece of preset multimedia information in the at least one piece of preset multimedia information, thereby obtaining the at least one piece of foreground tilt information corresponding to the at least one piece of preset multimedia information; that is, each piece of foreground tilt information is the tilt information of the to-be-contacted surface of the corresponding preset multimedia information, for example, at least one slope of the lower edge of a milk carton, or at least one slope of the lower edge of a three-dimensional model displaying a poster.
Here, the at least one piece of preset multimedia information, as picture information of a multimedia object at at least one angle, is a picture obtained through image processing such as matting or masking that includes only the promoted entity.
It should be noted that S101 and S102 have no fixed execution order; that is, S101 may be executed before S102, S102 may be executed before S101, or S101 and S102 may be executed at the same time, etc.; the embodiments of this application do not specifically limit this.
S103. Acquire the tilt difference between the background tilt information and the at least one piece of foreground tilt information to obtain at least one piece of tilt difference information.
In the embodiments of this application, after acquiring the background tilt information and the at least one piece of foreground tilt information, the implantation device compares the background tilt information with each piece of foreground tilt information, obtaining the tilt difference between the background tilt information and each piece of foreground tilt information, and thereby obtaining the at least one piece of tilt difference information; that is, the at least one piece of tilt difference information is the set of tilt differences between the background tilt information and each piece of foreground tilt information in the at least one piece of foreground tilt information.
S104. Determine, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, target multimedia information that meets a preset tilt difference condition.
In the embodiments of this application, a preset tilt difference condition is preset in the implantation device to determine the multimedia information to be implanted into the preset video frame to be implanted, and the at least one piece of tilt difference information corresponds one-to-one to the at least one piece of preset multimedia information; therefore, after obtaining the at least one piece of tilt difference information, the implantation device uses the preset tilt difference condition to judge each piece of tilt difference information, and the preset multimedia information corresponding to the finally determined tilt difference information that meets the preset tilt difference condition is the target multimedia information. Here, the preset tilt difference condition is that the tilt difference is the smallest.
In the embodiments of this application, the implantation device determining, from the at least one piece of preset multimedia information according to the at least one piece of tilt difference information, the target multimedia information that meets the preset tilt difference condition includes: the implantation device selects the minimum tilt difference information from the at least one piece of tilt difference information according to the preset tilt difference condition, and determines, from the at least one piece of preset multimedia information, the preset multimedia information corresponding to the minimum tilt difference information to obtain initial target multimedia information. Here, the initial target multimedia information can be directly used as the target multimedia information for information implantation; to improve the implantation effect of the multimedia information, the initial target multimedia information can also be rendered according to the preset video frame to be implanted to obtain the target multimedia information.
It can be understood that since the picture display attributes (for example, saturation, brightness, and contrast) corresponding to the initial target multimedia information differ from the picture display attributes corresponding to the preset video frame to be implanted, the implantation device renders the initial target multimedia information according to the preset video frame to be implanted, so that the difference between the picture display attributes of the rendered preset multimedia information (that is, the target multimedia information) and those of the preset video frame to be implanted is minimized, harmonizing the foreground and background, optimizing the implantation of the target multimedia information, and improving its implantation effect.
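The rendering step that harmonizes the foreground picture with the frame is not fixed to a concrete method in this embodiment, so the sketch below is an illustrative assumption: a simple mean/standard-deviation luminance transfer that pulls the cutout's display statistics toward those of the frame region. The function name `harmonize` and the random sample data are invented for the example.

```python
import numpy as np

def harmonize(cutout, frame_region):
    """Illustrative harmonization: shift/scale the cutout's statistics
    toward the frame region's. Both inputs are float arrays in [0, 1]."""
    c_mean, c_std = cutout.mean(), cutout.std()
    f_mean, f_std = frame_region.mean(), frame_region.std()
    adjusted = (cutout - c_mean) / (c_std + 1e-8) * f_std + f_mean
    return np.clip(adjusted, 0.0, 1.0)

rng = np.random.default_rng(0)
cutout = rng.uniform(0.6, 1.0, size=(8, 8, 3))   # bright candidate picture
region = rng.uniform(0.0, 0.4, size=(8, 8, 3))   # darker video-frame region
out = harmonize(cutout, region)
print(round(float(out.mean()), 2), round(float(region.mean()), 2))
```

After the transfer, the mean brightness of the adjusted cutout roughly matches that of the target frame region, which is the intent of the foreground/background harmonization described above.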
S105. Implant the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted to obtain a target video frame.
In the embodiments of this application, after obtaining the target multimedia information, the implantation device has determined the object to be implanted, and implants the target multimedia information into the preset video frame to be implanted, completing the information implantation of the preset video frame to be implanted; here, the implantation device implants the target multimedia information onto the bearing surface of the preset implanted entity in the preset video frame to be implanted, and the preset video frame to be implanted with the target multimedia information implanted is the target video frame. Here, the implantation position of the target multimedia information on the bearing surface is a preset implantation position; for example, when the bearing surface is a desktop, the preset implantation position may be near the objects placed on the desktop, the position on the desktop farthest from the key information, the position on the desktop closest to the key information, or any position on the desktop, etc.
It can be understood that by comparing the background tilt information with the at least one piece of foreground tilt information, the foreground tilt information with the smallest difference from the background tilt information is screened out, and the preset multimedia information corresponding to that foreground tilt information is used as the target multimedia information, which is then implanted into the preset video frame to be implanted; this realizes an information implantation scheme that automatically selects the orientation of the physical object (multimedia object) and improves the implantation effect.
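The implantation step itself amounts to compositing the selected cutout onto the frame at the preset position. A minimal sketch, under the assumption that the cutout comes with an alpha matte (the patent only specifies that a matted picture is pasted onto the bearing surface; the `implant` function and the toy arrays are illustrative):

```python
import numpy as np

def implant(frame, cutout, alpha, top_left):
    """Alpha-composite a matted cutout onto a video frame.
    frame: (H, W, 3) uint8; cutout: (h, w, 3); alpha: (h, w) in [0, 1];
    top_left: (row, col) of the preset implantation position."""
    y, x = top_left
    h, w = alpha.shape
    region = frame[y:y + h, x:x + w].astype(float)
    a = alpha[..., None]  # broadcast alpha over the color channels
    result = frame.copy()
    result[y:y + h, x:x + w] = (a * cutout + (1 - a) * region).astype(frame.dtype)
    return result

frame = np.zeros((10, 10, 3), dtype=np.uint8)       # empty frame
cutout = np.full((4, 4, 3), 200, dtype=np.uint8)    # bright 4x4 cutout
alpha = np.ones((4, 4))                              # fully opaque matte
out = implant(frame, cutout, alpha, (3, 3))
print(out[4, 4].tolist())  # [200, 200, 200]: the cutout pixel is now in the frame
```

Pixels outside the pasted rectangle are untouched, which corresponds to the implanted entity's surroundings remaining unchanged in the target video frame.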
In some embodiments, the implantation device acquiring the background tilt information of the preset implanted entity in the preset video frame to be implanted in S101 includes S101a-S101e:
S101a. Identify, in the preset video frame to be implanted, the area where the preset implanted entity is located to obtain an initial implantation position area.
In the embodiments of this application, the preset video frame to be implanted contains an image corresponding to the preset implanted entity; the implantation device identifies the area where the preset implanted entity is located in the preset video frame to be implanted, and the identified area is the initial implantation position area.
In some embodiments, a preset instance segmentation algorithm, such as Mask R-CNN (Mask Region-based Convolutional Neural Network), can be used to obtain the initial implantation position area; it is obtained by inputting the preset video frame to be implanted and performing object detection and instance segmentation on the preset video frame to be implanted based on the entity feature information of the preset implanted entity.
S101b、获取初始植入位置区域的植入位置边缘信息。
在本申请实施例中,当植入设备获得了初始植入位置区域之后,对初始植入位置区域进行边缘处理,也就获得了植入位置边缘信息;这里,植入位置边缘信息指预设待植入视频帧中预设植入实体对应的边缘。
在一些实施例中,植入设备获取初始植入位置区域的植入位置边缘信息时:首先,植入设备根据预设区域特征,从初始植入位置区域中,筛选植入位置区域;也就是说,由于初始植入位置区域中的有效位置区域才能用于承载预设多媒体信息,因此,植入设备需要对初始植入位置区域进行进一步筛选,也就是根据预设区域特征,从初始植入位置区域中筛选出有效位置区域,从而,也就获得了植入位置区域。这里,预设区域特征为用于承载预设多媒体信息的承载面的特征,植入位置区域为预设植入实体的承载面对应的区域信息。
然后,植入设备根据预设平整度条件,从植入位置区域中,筛选植入位置特征区域;也就是说,植入设备获得了植入位置区域之后,根据预设平整度条件,从植入位置区域中去除平面区域而获取与倾斜度相关的区域,也就筛选出了植入位置特征区域。这里,预设平整度条件指平面区域的平整度大于与倾斜度相关的区域的平整度。这里,可以通过预设色块聚类算法对植入位置区域进行色块聚类,色块聚类后,也就获得了对应的平面区域和与倾斜度相关的区域。
最后,植入设备对植入位置特征区域进行边缘检测,得到植入位置边缘信息;也就是说,植入设备获得了植入位置特征区域之后,通过边缘检测处理,来获取预设待植入视频帧中预设植入实体的承载面的边缘信息,即植入位置边缘信息。这里,可以通过预设边缘检测算法对植入位置特征区域进行边缘检测,预设边缘检测算法为用于进行边缘检测的算法,比如, 拉普拉斯边缘检测算法、索贝尔边缘检测算法和Canny(多级)边缘检测算法。
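作为示意,拉普拉斯边缘检测的核心运算可用如下代码表示(简化实现,未包含平滑去噪等步骤;阈值4为示意性假设,并非本申请实施例的限定):

```python
def laplacian_edges(img, thresh=4):
    """对二维灰度图应用 3x3 拉普拉斯算子, 返回响应绝对值不小于阈值的边缘点坐标 (x, y)。"""
    h, w = len(img), len(img[0])
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 拉普拉斯响应: 上下左右四邻域之和减去中心像素的4倍
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            if abs(lap) >= thresh:
                edges.append((x, y))
    return edges

# 5x5 灰度图, 中间一行为亮带, 其上下边界即待检测的边缘
img = [[0] * 5, [0] * 5, [9] * 5, [0] * 5, [0] * 5]
print(laplacian_edges(img))
```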
S101c、根据预设边缘点阈值,对植入位置边缘信息中的每个边缘的特征轮廓点进行筛选,得到至少一个特征轮廓点组合。
在本申请实施例中,植入设备获得了植入位置边缘信息之后,由于植入位置边缘信息是由亮度变化明显的像素点组成的边缘点组合,为了提高边缘点组合在边缘点表征方面的准确率,植入设备根据预设边缘点阈值对植入位置边缘信息中的边缘点进行筛选;又由于植入位置边缘信息对应于至少一个边缘,因此,植入设备针对植入位置边缘信息中的每个边缘的特征轮廓点,将特征轮廓点值大于预设边缘点阈值的边缘点作为特征轮廓点,也就获得了每个边缘对应的特征轮廓点组合,从而获得植入位置边缘信息对应的至少一个特征轮廓点组合。
需要说明的是,预设边缘点阈值为通过自适应阈值学习获得的、用于确定特征轮廓点的参考阈值,可以是灰度值,还可以是其他特征值,本申请实施例对此不做具体限定。
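作为示意,基于参考阈值筛选特征轮廓点的过程可如下表示(其中以边缘点特征值的均值作为自适应参考阈值,仅为本示意的简化假设,实际可通过自适应阈值学习获得):

```python
def filter_contour_points(edge_points):
    """edge_points 为 (坐标, 特征值) 列表;
    保留特征值大于参考阈值的边缘点作为特征轮廓点, 返回其坐标。"""
    values = [v for _, v in edge_points]
    thresh = sum(values) / len(values)  # 简化假设: 以均值作为预设边缘点阈值
    return [p for p, v in edge_points if v > thresh]

pts = [((0, 0), 10), ((1, 0), 200), ((2, 0), 220), ((3, 0), 15)]
print(filter_contour_points(pts))  # [(1, 0), (2, 0)]
```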
S101d、分别对至少一个特征轮廓点组合进行直线拟合,得到至少一个后景拟合直线信息。
在本申请实施例中,获得了至少一个特征轮廓点组合之后,基于预设直线拟合算法,以每个特征轮廓点组合为单位进行直线拟合,针对每个特征轮廓点组合得到一个后景拟合直线信息,从而获得与至少一个特征轮廓点组合对应的至少一个后景拟合直线信息。
需要说明的是,预设直线拟合算法为用于直线拟合的算法,比如,RANSAC(Random Sample Consensus,随机采样一致)算法和最小二乘法。
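作为示意,最小二乘法的直线拟合可用如下代码表示(RANSAC 可理解为在随机采样的内点集合上反复执行类似拟合并取一致性最好的结果;以下仅给出最小二乘部分的示意实现):

```python
def fit_line(points):
    """对特征轮廓点组合 [(x, y), ...] 进行最小二乘直线拟合, 返回 (斜率 alpha, 截距 beta)。"""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # 最小二乘闭式解
    alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    beta = (sy - alpha * sx) / n
    return alpha, beta

# 位于直线 y = 2x + 1 上的特征轮廓点
print(fit_line([(0, 1), (1, 3), (2, 5)]))  # (2.0, 1.0)
```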
S101e、将至少一个后景拟合直线信息对应的至少一个斜率信息,作为后景倾斜度信息。
需要说明的是,在本申请实施例中,植入设备获得了至少一个后景拟合直线信息之后,获取至少一个后景拟合直线信息中每个后景拟合直线信息对应的斜率,也就获得了与至少一个后景拟合直线信息一一对应的至少一个斜率信息,这里,至少一个斜率信息也就是后景倾斜度信息。
需要说明的是,S101a-S101e描述了获取后景倾斜度信息的实现过程,而对于每个预设多媒体信息对应的前景倾斜度信息的获取过程,与后景倾斜度信息的实现过程一致;并且,后景倾斜度信息中的一个斜率与每个前景倾斜度信息中的一个斜率一一对应,后景倾斜度信息中斜率的数量与每个前景倾斜度信息中斜率的数量相等。所不同的是,由于至少一个预设多媒体信息为仅包含多媒体对象的图片,该图片中的信息就已是与预设植入实体对应的初始植入位置区域,不再需要执行与S101a相应的步骤。而如果至少一个预设多媒体信息中不是仅包含多媒体对象的图片,则需要执行与S101a相应的步骤,以在预设多媒体信息中确定与预设植入实体对应的初始植入位置区域。另外,预设多媒体信息的待接触面对应的至少一个边缘信息的至少一个斜率信息构成了前景倾斜度信息。这里,预设多媒体信息为多媒体对象的图片信息。
在一些实施例中,S103中植入设备获取后景倾斜度信息和至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息,包括S103a-S103b:
S103a、获取后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的差值,得到当前前景倾斜度信息和后景倾斜度信息对应的至少一个斜率差信息。
在本申请实施例中,由于后景倾斜度信息中的一个斜率信息与每个前景倾斜度信息中的一个斜率信息一一对应,后景倾斜度信息中斜率信息的数量与每个前景倾斜度信息中斜率信息的数量相等;因此,植入设备将至少一个前景倾斜度信息中的每个前景倾斜度信息均作为当前前景倾斜度信息,并将后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息相减,所得到的差值即为后景倾斜度信息中的每个斜率信息与当前前景倾斜度信息中对应的斜率信息所对应的斜率差信息,从而得到当前前景倾斜度信息和后景倾斜度信息对应的至少一个斜率差信息。
这里,当前前景倾斜度信息为至少一个前景倾斜度信息中的任一前景倾斜度信息,当前前景倾斜度信息中一个斜率信息与后景倾斜度信息中的一个斜率信息对应。
S103b、获取至少一个斜率差信息的乘积,得到当前前景倾斜度信息和后景倾斜度信息对应的倾斜度差信息,从而得到后景倾斜度信息和至少一个前景倾斜度信息对应的至少一个倾斜度差信息。
在本申请实施例中,植入设备获得了至少一个斜率差信息之后,将至少一个斜率差信息中的各个斜率差信息一一相乘,所得到的结果即为当前前景倾斜度信息与后景倾斜度信息对应的倾斜度差信息。由于当前前景倾斜度信息为至少一个前景倾斜度信息中的任一前景倾斜度信息,因此,当将至少一个前景倾斜度信息中的每个前景倾斜度信息作为当前前景倾斜度信息来获得对应的倾斜度差信息之后,也就获得了后景倾斜度信息和至少一个前景倾斜度信息对应的至少一个倾斜度差信息。比如,当前前景倾斜度信息与后景倾斜度信息的倾斜度差信息,可通过式(1)获取:
Δ_i = ∏_{j=1}^{N_i} (α_{bij} − α_{fij})      (1)
其中,M为至少一个预设多媒体信息中预设多媒体信息的数量,i为预设多媒体信息的编号(i=1,…,M),N_i为第i个预设多媒体信息对应的前景倾斜度信息中斜率信息的数量,j为斜率信息的编号,α_{fij}为第i个预设多媒体信息中的第j个斜率信息,α_{bij}为与第i个预设多媒体信息中的第j个斜率信息对应的预设植入实体中的第j个斜率信息。
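按上述定义,式(1)的计算过程可示意如下(依据S103a的描述,斜率差取“后景斜率减前景斜率”,该顺序为本示意的理解):

```python
def tilt_diff_by_product(bg_slopes, fg_slopes):
    """式(1)的示意实现: 将后景与前景一一对应的斜率之差逐一相乘, 得到倾斜度差信息。"""
    assert len(bg_slopes) == len(fg_slopes)  # 后景与前景的斜率数量相等
    diff = 1.0
    for a_b, a_f in zip(bg_slopes, fg_slopes):
        diff *= (a_b - a_f)
    return diff

# 后景(如指定桌子下边缘)的两个斜率与某一前景(如饮料盒下边缘)的两个斜率
print(tilt_diff_by_product([1.5, -0.5], [0.5, -1.5]))  # 1.0
```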
在一些实施例中,还描述了另一种获取至少一个倾斜度差信息的实现步骤,因此,S103中植入设备获取后景倾斜度信息和至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息,包括S103c-S103d:
S103c、获取后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的比值,得到当前前景倾斜度信息和后景倾斜度信息对应的至少一个斜率比信息。
在本申请实施例中,植入设备将至少一个前景倾斜度信息中的每个前景倾斜度信息均作为当前前景倾斜度信息,并将后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息相比,所得到的比值即为后景倾斜度信息中的每个斜率信息与当前前景倾斜度信息中对应的斜率信息所对应的斜率比信息,从而得到当前前景倾斜度信息和后景倾斜度信息对应的至少一个斜率比信息。
这里,当前前景倾斜度信息为至少一个前景倾斜度信息中的任一前景倾斜度信息,当前前景倾斜度信息中一个斜率信息与后景倾斜度信息中的一个斜率信息对应。
S103d、获取至少一个斜率比信息的总和,与至少一个斜率比信息的数量的比值,得到当前前景倾斜度信息和后景倾斜度信息对应的倾斜度差信息,从而得到后景倾斜度信息和至少一个前景倾斜度信息对应的至少一个倾斜度差信息。
在本申请实施例中,植入设备获得了至少一个斜率比信息之后,将至少一个斜率比信息的总和作为分子,将至少一个斜率比信息的数量作为分母,计算比值,所得到的比值结果即为当前前景倾斜度信息与后景倾斜度信息对应的倾斜度差信息。由于当前前景倾斜度信息为至少一个前景倾斜度信息中的任一前景倾斜度信息,因此,当将至少一个前景倾斜度信息中的每个前景倾斜度信息作为当前前景倾斜度信息来获得对应的倾斜度差信息之后,也就获得了后景倾斜度信息和至少一个前景倾斜度信息对应的至少一个倾斜度差信息。比如,当前前景倾斜度信息与后景倾斜度信息的倾斜度差信息,可通过式(2)获取:
Δ_i = (1/N_i) · Σ_{j=1}^{N_i} (α_{bij} / α_{fij})      (2)
其中,式(2)中各符号所代表的含义同式(1)中各符号所代表的含义相同。
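按上述定义,式(2)的计算过程可示意如下:

```python
def tilt_diff_by_ratio(bg_slopes, fg_slopes):
    """式(2)的示意实现: 将后景与前景对应斜率之比求和, 再除以斜率比信息的数量。"""
    ratios = [a_b / a_f for a_b, a_f in zip(bg_slopes, fg_slopes)]
    return sum(ratios) / len(ratios)

print(tilt_diff_by_ratio([1.0, -2.0], [0.5, -1.0]))  # 2.0
```

此时,该比值结果越接近1,表示前后景倾斜度越接近(此为本示意的理解)。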
需要说明的是,S103a-S103b与S103c-S103d分别描述获取至少一个倾斜度差信息的两种不同的实现方式。
在一些实施例中,还包括从预设视频信息中确定预设待植入视频帧的实现步骤,因此,在S101中,植入设备获取预设植入实体在预设待植入视频帧中的后景倾斜度信息之前,该信息植入方法还包括S106-S109:
S106、当接收到信息植入请求时,根据信息植入请求,从预设视频库中获取预设视频信息。
在本申请实施例中,植入设备接收到请求向预设视频信息中植入多媒体对象的信息植入请求时,响应该信息植入请求,从信息植入请求中获取请求视频名称,根据请求视频名称获取请求视频标识,进而根据请求视频标识从预设视频库中获取对应的预设视频信息。
需要说明的是,预设视频库指图1中的数据库502,存储有视频信息。另外,植入设备获取预设视频信息的过程,还可以通过直接从信息植入请求中获取;也就是说,可以在终端侧上传预设视频信息,由终端侧生成包括预设视频信息的信息植入请求,从而,植入设备也就能够从终端发送来的信息植入请求中直接获得预设视频信息。
S107、对预设视频信息依据镜头进行分割,得到视频镜头信息。
在本申请实施例中,植入设备获得了预设视频信息之后,依据预设镜头切分算法,将预设视频信息依据镜头拆分成视频分片,每个视频分片即一个镜头信息,从而也就得到了与预设视频信息对应的视频镜头信息。这里,预设镜头切分算法为预先设定的、用于进行镜头切分的算法。
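作为示意,基于帧间差异阈值的镜头切分可如下表示(以一维的帧签名与固定阈值代替真实的帧间视觉差异度量,均为本示意的假设,并非本申请实施例所限定的镜头切分算法):

```python
def split_shots(frame_sigs, cut_thresh=50):
    """当相邻帧签名之差超过阈值时认为发生镜头切换;
    返回各镜头(视频分片)包含的帧索引列表。"""
    shots, current = [], [0]
    for i in range(1, len(frame_sigs)):
        if abs(frame_sigs[i] - frame_sigs[i - 1]) > cut_thresh:
            shots.append(current)  # 结束当前镜头, 开始新镜头
            current = []
        current.append(i)
    shots.append(current)
    return shots

print(split_shots([10, 12, 11, 200, 205, 60]))  # [[0, 1, 2], [3, 4], [5]]
```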
S108、根据预设植入实体检测算法,对视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到预设植入实体和预设植入实体所在的目标视频镜头组合。
在本申请实施例中,获得了视频镜头信息之后,针对视频镜头信息中的每个镜头信息中的每个视频帧,依据预设植入实体检测算法进行植入实体的检测,如此,也就能够确定预设植入实体,并确定预设植入实体所在的目标视频镜头组合。这里,植入实体为预设指定的实体,比如,桌子;预设植入实体属于植入实体;目标视频镜头组合为至少一个镜头信息中,包括预设植入实体对应的图像的镜头信息组成的集合,从而,目标视频镜头组合中所包括的镜头信息的数量为至少一个。
在一些实施例中,S108中根据预设植入实体检测算法,对视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到预设植入实体和预设植入实体所在的目标视频镜头组合,包括S108a-S108c:
S108a、根据预设植入实体检测算法,对视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到至少一个植入实体和至少一个植入实体所在的至少一个待植入视频镜头组合。
在本申请实施例中,植入设备对视频镜头信息进行植入实体检测时,能够依据视频镜头信息,得到至少一个植入实体和至少一个植入实体所在的至少一个待植入视频镜头组合;比如,当植入实体为桌子时,能够检测到至少一张桌子和至少一个桌子所在的待植入视频镜头组合。这里,至少一个待植入视频镜头组合中,每个待植入视频镜头组合包括的镜头信息为至少一个。
S108b、获取至少一个待植入视频镜头组合对应的至少一个时间信息。
在本申请实施例中,植入设备获得了至少一个植入实体和至少一个植入实体所在的至少一个待植入视频镜头组合之后,由于每个镜头信息都对应播放时间信息,因此,这里,植入设备将至少一个待植入视频镜头组合中的每个待植入视频镜头组合对应的播放时间信息组合进行整合(比如,求和),获得对应的时间信息,而对于至少一个待植入视频镜头组合,也就获得了对应的至少一个时间信息。
S108c、根据至少一个时间信息和预设植入时间信息,从至少一个植入实体中确定预设植入实体,并从至少一个待植入视频镜头组合中,确定预设植入实体所在的目标视频镜头组合。
在本申请实施例中,植入设备获得了至少一个时间信息之后,可以将至少一个时间信息中的每个时间信息与预设植入时间信息比较,至少一个时间信息中大于预设植入时间信息的时间信息对应的植入实体中,将任一个植入实体作为预设植入实体;同时,从至少一个待植入视频镜头组合中,确定预设植入实体所在的目标视频镜头组合。还可以将至少一个时间信息中与预设植入时间信息最接近的时间信息对应的植入实体,作为预设植入实体;同时,从至少一个待植入视频镜头组合中,确定预设植入实体所在的目标视频镜头组合;等等,本申请实施例对此不做具体限定。
示例性地,当预设植入时间信息为10秒时,至少一个时间信息分别为1秒、5秒、11秒和25秒,由于时间信息11秒与预设植入时间信息10秒最接近,则时间信息11秒对应的植入实体为预设植入实体,并将时间信息11秒对应的植入实体所在的待植入视频镜头组合作为预设植入实体所在的目标视频镜头组合。
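上述按“与预设植入时间信息最接近”进行选择的逻辑可示意如下:

```python
def pick_entity_by_time(times, preset_time):
    """从至少一个时间信息中, 选择与预设植入时间信息最接近的一个, 返回其索引。"""
    return min(range(len(times)), key=lambda i: abs(times[i] - preset_time))

# 预设植入时间信息为 10 秒, 候选时间信息为 1、5、11、25 秒
print(pick_entity_by_time([1, 5, 11, 25], 10))  # 2, 对应 11 秒
```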
S109、从当前待植入视频镜头信息中选择视频帧,得到预设待植入视频帧。
在本申请实施例中,植入设备获得了目标视频镜头组合之后,将目标视频镜头组合中的每一个镜头信息均作为当前待植入视频镜头信息,并将当前待植入视频镜头信息中的任一视频帧作为预设待植入视频帧。
需要说明的是,当前待植入视频镜头信息为目标视频镜头组合中的任一个镜头信息。
在一些实施例中,还包括基于目标视频帧获得目标视频信息的实现过程,因此,S105中植入设备将目标多媒体信息,植入预设待植入视频帧的预设植入实体的承载面上,得到目标视频帧之后,该信息植入方法还包括S110-S112:
S110、根据目标视频帧,完成目标多媒体信息在当前待植入视频镜头信息的植入,得到已植入视频镜头信息,直至完成目标多媒体信息在目标视频镜头组合中每个镜头信息的植入,得到已植入视频镜头组合。
在本申请实施例中,当植入设备完成了目标多媒体信息在预设待植入视频帧中的植入之后,可以针对当前待植入视频镜头信息中的每帧视频帧都采用类似的方法以完成目标多媒体信息在当前待植入视频镜头信息的植入;还可以是根据当前待植入视频镜头信息中各帧承载面上承载位置之间的偏移量,来完成目标多媒体信息在当前待植入视频镜头信息的植入;等等,本申请实施例不做具体限定。这里,已植入视频镜头信息为植入了目标多媒体信息之后的当前待植入视频镜头信息。
这里,当植入设备完成了目标多媒体信息在目标视频镜头组合中任一镜头信息的植入之后,也就获得了植入了目标多媒体信息之后的已植入视频镜头组合。
S111、根据已植入视频镜头组合,从视频镜头信息中,获取未植入视频镜头组合。
在本申请实施例中,植入设备获得了已植入视频镜头组合之后,由于已植入视频镜头组合为视频镜头信息中已植入了目标多媒体信息之后的镜头信息所构成的组合;从而,植入设备从视频镜头信息中,除去目标视频镜头组合对应的镜头信息之外所剩余的镜头信息,也就获得了未植入视频镜头组合。这里,未植入视频镜头组合为视频镜头信息中除目标视频镜头组合之外剩余的镜头信息。
S112、将已植入视频镜头组合与未植入视频镜头组合进行视频融合,得到目标视频信息。
在本申请实施例中,当获得了已植入视频镜头组合与未植入视频镜头组合之后,基于视频镜头信息中各镜头信息之间的连接关系,将已植入视频镜头组合与未植入视频镜头组合进行视频融合,也就获得了目标视频信息。
在一些实施例中,还可以通过将至少一个已植入视频镜头组合替换视频镜头信息中对应的镜头信息,来获得目标视频信息。
在一些实施例中,S110中,植入设备根据目标视频帧,完成目标多媒体信息在当前待植入视频镜头信息的植入,得到已植入视频镜头信息,包括S110a-S110d:
S110a、从预设待植入视频帧中,确定运动参考对象,运动参考对象为预设植入实体承载面上的对象。
在本申请实施例中,预设待植入视频帧中的预设植入实体的承载面上承载有至少一个对象,比如桌面上的杯子;因此,植入设备从该至少一个对象中选择一个对象,作为当前待植入视频镜头信息中各帧之间,或者当前待植入视频镜头信息中各帧与参考视频帧之间的位置偏移量的参考对象,即作为运动参考对象。这里,参考视频帧可以为预设待植入视频帧,运动参考对象为预设植入实体承载面上的对象。
S110b、获取运动参考对象在当前待植入视频镜头信息中的运动轨迹信息。
在本申请实施例中,确定了运动参考对象之后,获取运动参考对象在当前待植入视频镜头信息中各帧之间,或者当前待植入视频镜头信息中各帧与参考视频帧之间的位置偏移量,也就获得了运动轨迹信息。
S110c、根据运动轨迹信息,确定目标多媒体信息在至少一个未植入视频帧的至少一个目标承载位置。
在本申请实施例中,植入设备获得了运动参考对象的运动轨迹信息之后,也就明确了运动参考对象在当前待植入视频镜头信息中各帧之间,或者当前待植入视频镜头信息中各帧与参考视频帧之间的位置偏移量,从而也就获得了目标多媒体信息在当前待植入视频镜头信息中各帧之间,或者当前待植入视频镜头信息中各帧与参考视频帧之间的位置偏移量;此时,也就能够基于目标多媒体信息在当前待植入视频镜头信息中各帧之间,或者当前待植入视频镜头信息中各帧与参考视频帧之间的位置偏移量,确定目标多媒体信息在至少一个未植入视频帧的预设植入实体承载面上对应的至少一个目标承载位置;这里,至少一个未植入视频帧为当前待植入视频镜头信息中除预设待植入视频帧之外剩余的视频帧。
S110d、基于至少一个目标承载位置,将目标多媒体信息,植入至少一个未植入视频帧的预设植入实体的承载面上,得到已植入视频镜头信息。
在本申请实施例中,获得了至少一个目标承载位置之后,也就明确了目标多媒体信息在至少一个未植入视频帧的预设植入实体承载面上的植入位置,将目标多媒体信息植入至至少一个未植入视频帧的预设植入实体承载面上的位置处,从而也就获得了已植入视频镜头信息。
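作为示意,依据运动轨迹信息(各未植入视频帧相对参考帧的位置偏移量)确定目标承载位置的过程可如下表示(坐标与偏移量的取值均为示意性假设):

```python
def target_positions(base_pos, offsets):
    """base_pos 为目标多媒体信息在预设待植入视频帧(参考帧)中的植入位置;
    offsets 为各未植入视频帧相对参考帧的位置偏移量;
    返回目标多媒体信息在各未植入视频帧上的目标承载位置。"""
    bx, by = base_pos
    return [(bx + dx, by + dy) for dx, dy in offsets]

print(target_positions((120, 80), [(0, 0), (3, -1), (6, -2)]))
# [(120, 80), (123, 79), (126, 78)]
```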
在一些实施例中,还包括根据目标视频信息进行应用的实现过程,因此,S112中植入设备将至少一个已植入视频镜头组合与至少一个未植入视频镜头组合进行视频融合,得到目标视频信息之后,该信息植入方法还包括S113:
S113、当接收到视频加载请求时,根据视频加载请求,通过播放设备播放目标视频信息。
本申请实施例中,当用户请求观看目标视频信息时,对应能够接收到视频加载请求,响应该视频加载请求,播放该目标视频信息;而在播放目标视频信息时,可以通过播放设备播放该目标视频信息。
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。在该示例性应用场景中,预设植入实体为指定桌子,至少一个多媒体信息为饮料盒对应的两张不同角度的饮料盒抠图,如图5所示,步骤如下:
S201、当接收到饮料盒植入请求时,从饮料盒植入请求中获取视频名称,根据视频名称从预设视频库中获取视频标识,以根据视频标识获得视频素流源文件。
需要说明的是,饮料盒植入请求为信息植入请求,视频素流源文件为预设视频信息。
S202、依据镜头切分算法,对视频素流源文件进行单镜头视频分片拆分,得到视频镜头信息。
S203、利用多模态视频植入广告位检测算法,获得指定桌子所在视频镜头组合。
需要说明的是,多模态视频植入广告位检测算法为预设植入实体检测算法,视频镜头组合为目标视频镜头组合。另外,如果确定在墙面进行信息植入,则获得的为墙面所在视频镜头组合;如果确定在相框进行信息植入,则获得的为相框所在视频镜头组合。
S204、将视频镜头组合的每个镜头信息中的任一视频帧作为预设待植入视频帧,并采用实例分割算法从预设待植入视频帧中识别出指定桌子所在区域,得到指定桌子的所在区域。
这里,指定桌子的所在区域即初始植入位置区域。
S205、根据指定桌子的桌面区域大小,对指定桌子的所在区域进行蒙版处理,得到有效桌面区域。
这里,指定桌子的桌面区域大小为预设区域特征,有效桌面区域为植入位置区域。如图6中的框a,示出了预设待植入视频帧,该预设待植入视频帧中包含预设植入实体:指定桌子。而如图6中的框b,示出了有效桌面区域。
S206、采用色块聚类方式对有效桌面区域进行聚类处理,根据桌面的平整度大于桌边的平整度,去除有效桌面区域中平整的上桌面部分,留下下桌面区域。
需要说明的是,由于需要获取的指定桌子的后景倾斜度信息为下桌面边缘的倾斜度,因此,平整的上桌面部分并不纳入计算,仅需对下桌面区域进行计算。这里,桌面的平整度大于桌边的平整度为预设平整度条件,下桌面区域为植入位置特征区域。如图6中的框c,示出了下桌面区域。
S207、采用拉普拉斯边缘检测算法对下桌面区域进行边缘检测,得到下桌面边缘。
这里,拉普拉斯边缘检测算法为预设边缘检测算法,下桌面边缘为植入位置边缘信息。
如图6中的框d,示出了下桌面边缘。
S208、通过自适应阈值学习,确定预设边缘点阈值,并从下桌面边缘的每个边缘的边缘点组合中,将筛选出的大于预设边缘点阈值的边缘点组合作为两个特征轮廓点组合。
如图6中的框e所示,两个特征轮廓点组合为区域1包括的边缘点和区域2包括的边缘点,且两个特征轮廓点组合指至少一个特征轮廓点组合。
S209、采用随机采样一致算法对两个特征轮廓点组合分别进行直线拟合,得到两个后景拟合直线信息。
需要说明的是,随机采样一致算法为预设直线拟合算法,两个后景拟合直线信息如式(3)所示:
y_1 = α_1x_1 + β_1, y_2 = α_2x_2 + β_2      (3);
其中,y_1和y_2为因变量,x_1和x_2为自变量,α_1和α_2为两个后景拟合直线信息对应的两个斜率,β_1和β_2为常数。这里式(3)又称为后景倾斜度信息的模型化表示。
如图6中的框f,示出了拟合出的两条边缘61和62对应的后景拟合直线信息。
S210、将两个后景拟合直线信息对应的两个斜率作为指定桌子的后景倾斜度信息。
S211、获取两张饮料盒抠图对应的两个前景倾斜度信息。
需要说明的是,获取两张饮料盒抠图对应的两个前景倾斜度信息,与前述获取后景倾斜度信息的步骤类似,并且,对于每张饮料盒抠图,对应两个斜率信息,以及,饮料盒抠图的两个斜率信息与指定桌子的两个斜率信息一一对应。
如图7示出了经过边缘提取和边缘拟合获得的两个前景倾斜度信息,其中,每个饮料盒抠图对应的一个前景倾斜度信息包含两个斜率,分别是边71对应的斜率和边72对应的斜率,以及边73对应的斜率和边74对应的斜率。并且,边71和边73均与图6中的边缘61对应,边72和边74均与图6中的边缘62对应。
S212、获取后景倾斜度信息和两个前景倾斜度信息的倾斜度差,得到两个倾斜度差信息,并对两个倾斜度差信息中,最小倾斜度差信息对应的饮料盒抠图进行渲染处理,获得目标饮料盒抠图,将目标饮料盒抠图植入预设待植入视频帧的指定桌子的承载面上。
需要说明的是,在获取最小倾斜度差信息时,从两个倾斜度差信息中选择值最小的倾斜度差信息,也就得到了最小倾斜度差信息。如图8所示,8-1为两张饮料盒抠图中的一张饮料盒抠图,与指定桌子对应的倾斜度差信息的获取场景示意,8-2为两张饮料盒抠图中的另一张饮料盒抠图,与指定桌子对应的倾斜度差信息的获取场景示意,通过选择,得到甄选结果8-3;易知,8-3对应的为最小倾斜度差信息对应的场景示意。
S213、采用仿射变换,获取指定桌子上运动参考对象(比如,杯子)的运动偏移量,并根据运动参考对象的运动偏移量,完成目标饮料盒抠图在指定桌子所在视频镜头组合中的一个镜头信息中的植入,得到已植入饮料盒的视频镜头,从而得到已植入饮料盒的视频镜头组合。
这里,已植入饮料盒的视频镜头为目标视频镜头,已植入饮料盒的视频镜头组合为已植入视频镜头组合。
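作为示意,仿射变换对一个二维点的作用可如下表示(此处以纯平移的仿射矩阵示意运动偏移量的计算,矩阵取值为示意性假设,实际的仿射矩阵需由帧间匹配估计得到):

```python
def apply_affine(point, m):
    """对二维点应用 2x3 仿射矩阵 m = [[a, b, tx], [c, d, ty]]。"""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# 纯平移: 向右 3、向下 2, 示意运动参考对象(如杯子)的帧间运动偏移量
print(apply_affine((10, 20), [[1, 0, 3], [0, 1, 2]]))  # (13, 22)
```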
S214、将已植入饮料盒的视频镜头组合与未植入饮料盒的视频镜头组合进行融合,得到目标视频信息。
这里,未植入饮料盒的视频镜头组合为未植入视频镜头组合。
S215、播放目标视频信息。
上述的示例性应用,通过先对指定桌子进行下边缘线拟合,如图9示出的9-1,再对实物(两张饮料盒抠图)进行下边缘拟合9-2,进而结合9-1和9-2分别的下边缘线拟合结果,进行边缘线斜率差计算,最后根据计算结果得到朝向甄选结果9-3;其中,9-3指目标视频帧。
可以理解的是,采用本申请实施例提供的信息植入方法,能够完全替代设计人员人工确定目标多媒体信息,节省了人力成本开支;同时,相对于采用设计人员人工确定目标多媒体信息,时长可从30分钟下降至1分钟,节省了时间成本。另外,本申请实施例提供的信息植入方法可应用于广告植入场景;而广告植入对应的有益效果如图10所示的10-1,一方面,广告形式不可跳过,会员可见,从而触达率高(10-11);另一方面,广告主无需赌剧,广告投入风险小(10-12);再一方面,植入广告分人群投放,预算成本低(10-13);又一方面,对于视频提供方,价值信息较高(10-14)。并且,在如图11示出的信息植入系统架构中,通过整合视频平台11-1和广告系统11-2,实现信息植入,所得到的植入广告是广告发展的趋势;其中,视频平台11-1指图2中终端201、数据库202和视频服务器200构成的系统的示例,广告系统11-2指图2中终端301、数据库302和多媒体服务器300构成的系统的示例。
下面继续说明本申请实施例提供的信息植入装置555的实施为软件模块的示例性结构,在一些实施例中,如图3所示,存储在存储器550的信息植入装置555中的软件模块可以包括:
后景倾斜度获取部分5551,配置为获取预设植入实体在预设待植入视频帧中的后景倾斜度信息;所述预设待植入视频帧为预设视频信息中用于植入多媒体信息的最小单位,所述后景倾斜度信息为所述预设待植入视频帧中所述预设植入实体的承载面的倾斜度信息;
前景倾斜度获取部分5552,配置为获取至少一个预设多媒体信息对应的至少一个前景倾斜度信息;所述至少一个前景倾斜度信息中的每个前景倾斜度信息为对应的预设多媒体信息的待接触面的倾斜度信息;
倾斜度差获取部分5553,配置为获取所述后景倾斜度信息和所述至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息;
目标确定部分5554,配置为根据所述至少一个倾斜度差信息,从所述至少一个预设多媒体信息中确定满足预设倾斜度差条件的目标多媒体信息;
植入部分5555,配置为将所述目标多媒体信息,植入所述预设待植入视频帧的所述预设植入实体的承载面上,得到目标视频帧。
在本申请实施例的一实施方式中,所述后景倾斜度获取部分5551包括识别部分5551-1、边缘获取部分5551-2、轮廓点筛选部分5551-3、直线拟合部分5551-4和斜率获取部分5551-5;
所述识别部分5551-1,配置为在所述预设待植入视频帧中,识别所述预设植入实体的所在区域,得到对应的初始植入位置区域;
所述边缘获取部分5551-2,配置为获取所述初始植入位置区域的植入 位置边缘信息;
所述轮廓点筛选部分5551-3,配置为根据预设边缘点阈值,对所述植入位置边缘信息中的每个边缘的特征轮廓点进行筛选,得到至少一个特征轮廓点组合;
所述直线拟合部分5551-4,配置为分别对所述至少一个特征轮廓点组合进行直线拟合,得到至少一个后景拟合直线信息;
所述斜率获取部分5551-5,配置为将所述至少一个后景拟合直线信息对应的至少一个斜率信息,作为所述后景倾斜度信息。
在本申请实施例的一实施方式中,所述边缘获取部分5551-2,还配置为根据预设区域特征,从所述初始植入位置区域中,筛选植入位置区域;根据预设平整度条件,从所述植入位置区域中,筛选植入位置特征区域;对所述植入位置特征区域进行边缘检测,得到所述植入位置边缘信息。
在本申请实施例的一实施方式中,所述倾斜度差获取部分5553,还配置为获取所述后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的差值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的至少一个斜率差信息,所述当前前景倾斜度信息为所述至少一个前景倾斜度信息中的任一前景倾斜度信息,所述当前前景倾斜度信息中一个斜率信息与所述后景倾斜度信息中的一个斜率信息对应;获取所述至少一个斜率差信息的乘积,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的倾斜度差信息,从而得到所述后景倾斜度信息和所述至少一个前景倾斜度信息对应的所述至少一个倾斜度差信息。
在本申请实施例的一实施方式中,所述倾斜度差获取部分5553,还配置为获取所述后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的比值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的至少一个斜率比信息;获取所述至少一个斜率比信息的总和,与所述至少一个斜率比信息的数量的比值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的倾斜度差信息,从而得到所述后景倾斜度信息和所述至少一个前景倾斜度信息对应的所述至少一个倾斜度差信息。
在本申请实施例的一实施方式中,所述信息植入装置555还包括视频帧确定部分5556,所述视频帧确定部分5556,配置为当接收到信息植入请求时,根据所述信息植入请求,从预设视频库中获取所述预设视频信息;对所述预设视频信息依据镜头进行分割,得到视频镜头信息;根据预设植入实体检测算法,对所述视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到所述预设植入实体和所述预设植入实体所在的目标视频镜头组合;从当前待植入视频镜头信息中选择视频帧,得到所述预设待植入视频帧;所述当前待植入视频镜头信息为所述目标视频镜头组合中的任一个镜头信息。
在本申请实施例的一实施方式中,所述视频帧确定部分5556,还配置为根据所述预设植入实体检测算法,对所述视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到至少一个植入实体和所述至少一个植入实体所在的至少一个待植入视频镜头组合;获取所述至少一个待植入视频镜头组合对应的至少一个时间信息;根据所述至少一个时间信息和预设植入时间信息,从所述至少一个植入实体中确定所述预设植入实体,并从所述至少一个待植入视频镜头组合中,确定所述预设植入实体所在的所述目标视频镜头组合。
在本申请实施例的一实施方式中,所述信息植入装置555还包括视频融合部分5557,所述视频融合部分5557,配置为根据所述目标视频帧,完成所述目标多媒体信息在所述当前待植入视频镜头信息的植入,得到已植入视频镜头信息,直至完成所述目标多媒体信息在所述目标视频镜头组合中每个镜头信息的植入,得到已植入视频镜头组合;根据所述已植入视频镜头组合,从所述视频镜头信息中,获取未植入视频镜头组合;所述未植入视频镜头组合为所述视频镜头信息中除所述目标视频镜头组合之外剩余的镜头信息;将所述已植入视频镜头组合与所述未植入视频镜头组合进行视频融合,得到目标视频信息。
在本申请实施例的一实施方式中,所述视频融合部分5557,还配置为从所述预设待植入视频帧中,确定运动参考对象,所述运动参考对象为所述预设植入实体承载面上的对象;获取所述运动参考对象在所述当前待植入视频镜头信息中的运动轨迹信息;根据所述运动轨迹信息,确定所述目标多媒体信息在至少一个未植入视频帧的至少一个目标承载位置;所述至少一个未植入视频帧为所述当前待植入视频镜头信息中除所述预设待植入视频帧之外剩余的视频帧;基于所述至少一个目标承载位置,将所述目标多媒体信息,植入所述至少一个未植入视频帧的所述预设植入实体的承载面上,得到所述已植入视频镜头信息。
在本申请实施例的一实施方式中,所述目标确定部分5554,还配置为根据所述预设倾斜度差条件,从所述至少一个倾斜度差信息中,选择最小倾斜度差信息;从所述至少一个预设多媒体信息中,确定与所述最小倾斜度差信息对应的预设多媒体信息,得到初始目标多媒体信息;根据所述预设待植入视频帧,对所述初始目标多媒体信息进行渲染处理,得到所述目标多媒体信息。
在本申请实施例的一实施方式中,所述信息植入装置555还包括视频播放部分5558,所述视频播放部分5558,配置为当接收到视频加载请求时,根据所述视频加载请求,通过播放设备播放所述目标视频信息。
本申请实施例所述集成的部分,如果以软件功能部分的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机存储介质中。基于这样的理解,本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可执行指令的计算机存储介质上实施的计算机程序产品的形式,所述计算机存储介质包括USB(Universal Serial Bus)盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘存储器、CD-ROM、光学存储器等。
相应地,本申请实施例还提供一种计算机存储介质,其中存储有计算机可执行指令,计算机可执行指令用处理器执行时实现本申请实施例的信息植入方法。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以被存储在保存其它程序或数据的文件的一部分中,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个部分、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
综上所述,本申请实施例通过对比预设待植入视频中预设植入实体的后景倾斜度信息,和每个预设多媒体信息的前景倾斜度信息,获得对应的倾斜度差信息,从而根据倾斜度差信息从至少一个预设多媒体信息中,确定与预设待植入视频帧切合度最高的目标多媒体信息,实现了一种自动筛选高切合度的多媒体信息的过程;从而,当根据该目标多媒体信息完成多媒体信息的植入时,也就能够智能地向视频帧中植入多媒体信息;如此,能够提升多媒体信息植入的智能性。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。
工业实用性
本申请实施例中,通过对比预设待植入视频中预设植入实体的后景倾斜度信息,和每个预设多媒体信息的前景倾斜度信息,获得对应的倾斜度差信息,从而根据倾斜度差信息从至少一个预设多媒体信息中,确定与预 设待植入视频帧切合度最高的目标多媒体信息,实现了一种自动筛选高切合度的多媒体信息的过程;从而,当根据该目标多媒体信息完成多媒体信息的植入时,也就能够智能地向视频帧中植入多媒体信息;如此,能够提升多媒体信息植入的智能性。

Claims (14)

  1. 一种信息植入方法,所述方法由信息植入设备执行,所述方法包括:
    获取预设植入实体在预设待植入视频帧中的后景倾斜度信息;所述预设待植入视频帧为预设视频信息中用于植入多媒体信息的最小单位,所述后景倾斜度信息为所述预设待植入视频帧中所述预设植入实体的承载面的倾斜度信息;
    获取至少一个预设多媒体信息对应的至少一个前景倾斜度信息;所述至少一个前景倾斜度信息中的每个前景倾斜度信息为对应的预设多媒体信息的待接触面的倾斜度信息;
    获取所述后景倾斜度信息和所述至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息;
    根据所述至少一个倾斜度差信息,从所述至少一个预设多媒体信息中,确定满足预设倾斜度差条件的目标多媒体信息;
    将所述目标多媒体信息,植入所述预设待植入视频帧的所述预设植入实体的承载面上,得到目标视频帧。
  2. 根据权利要求1所述的方法,其中,所述获取预设植入实体在预设待植入视频帧中的后景倾斜度信息,包括:
    在所述预设待植入视频帧中,识别所述预设植入实体的所在区域,得到初始植入位置区域;
    获取所述初始植入位置区域的植入位置边缘信息;
    根据预设边缘点阈值,对所述植入位置边缘信息中的每个边缘的特征轮廓点进行筛选,得到至少一个特征轮廓点组合;
    分别对所述至少一个特征轮廓点组合进行直线拟合,得到至少一个后景拟合直线信息;
    将所述至少一个后景拟合直线信息对应的至少一个斜率信息,作为所述后景倾斜度信息。
  3. 根据权利要求2所述的方法,其中,所述获取所述初始植入位置区域的植入位置边缘信息,包括:
    根据预设区域特征,从所述初始植入位置区域中,筛选植入位置区域;
    根据预设平整度条件,从所述植入位置区域中,筛选植入位置特征区域;
    对所述植入位置特征区域进行边缘检测,得到所述植入位置边缘信息。
  4. 根据权利要求1至3任一项所述的方法,其中,所述获取所述后景倾斜度信息和所述至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息,包括:
    获取所述后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的差值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的至少一个斜率差信息;
    所述当前前景倾斜度信息为所述至少一个前景倾斜度信息中的任一前景倾斜度信息,所述当前前景倾斜度信息中一个斜率信息与所述后景倾斜度信息中的一个斜率信息对应;
    获取所述至少一个斜率差信息的乘积,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的倾斜度差信息,从而得到所述后景倾斜度信息和所述至少一个前景倾斜度信息对应的所述至少一个倾斜度差信息。
  5. 根据权利要求1至3任一项所述的方法,其中,所述获取所述后景倾斜度信息和所述至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息,包括:
    获取所述后景倾斜度信息中的每个斜率信息,与当前前景倾斜度信息中对应的斜率信息的比值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的至少一个斜率比信息;
    获取所述至少一个斜率比信息的总和,与所述至少一个斜率比信息的数量的比值,得到所述当前前景倾斜度信息和所述后景倾斜度信息对应的倾斜度差信息,从而得到所述后景倾斜度信息和所述至少一个前景倾斜度信息对应的所述至少一个倾斜度差信息。
  6. 根据权利要求1至3任一项所述的方法,其中,所述获取预设植入实体在预设待植入视频帧中的后景倾斜度信息之前,所述方法还包括:
    当接收到信息植入请求时,根据所述信息植入请求,从预设视频库中获取所述预设视频信息;
    对所述预设视频信息依据镜头进行分割,得到视频镜头信息;
    根据预设植入实体检测算法,对所述视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到所述预设植入实体和所述预设植入实体所在的目标视频镜头组合;
    从当前待植入视频镜头信息中选择视频帧,得到所述预设待植入视频帧;所述当前待植入视频镜头信息为所述目标视频镜头组合中的任一个镜头信息。
  7. 根据权利要求6所述的方法,其中,所述根据预设植入实体检测算法,对所述视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到所述预设植入实体和所述预设植入实体所在的目标视频镜头组合,包括:
    根据所述预设植入实体检测算法,对所述视频镜头信息的每个镜头信息中的每个视频帧进行植入实体的检测,得到至少一个植入实体和所述至少一个植入实体所在的至少一个待植入视频镜头组合;
    获取所述至少一个待植入视频镜头组合对应的至少一个时间信息;
    根据所述至少一个时间信息和预设植入时间信息,从所述至少一个植入实体中确定所述预设植入实体,并从所述至少一个待植入视频镜头组合中,确定所述预设植入实体所在的所述目标视频镜头组合。
  8. 根据权利要求6所述的方法,其中,所述将所述目标多媒体信息,植入所述预设待植入视频帧的所述预设植入实体的承载面上,得到目标视频帧之后,所述方法还包括:
    根据所述目标视频帧,完成所述目标多媒体信息在所述当前待植入视频镜头信息的植入,得到已植入视频镜头信息,直至完成所述目标多媒体信息在所述目标视频镜头组合中每个镜头信息的植入,得到已植入视频镜头组合;
    根据所述已植入视频镜头组合,从所述视频镜头信息中,获取未植入视频镜头组合;所述未植入视频镜头组合为所述视频镜头信息中除所述目标视频镜头组合之外剩余的镜头信息;
    将所述已植入视频镜头组合与所述未植入视频镜头组合进行视频融合,得到目标视频信息。
  9. 根据权利要求8所述的方法,其中,所述根据所述目标视频帧,完成所述目标多媒体信息在所述当前待植入视频镜头信息的植入,得到已植入视频镜头信息,包括:
    从所述预设待植入视频帧中,确定运动参考对象;所述运动参考对象为所述预设植入实体承载面上的对象;
    获取所述运动参考对象在所述当前待植入视频镜头信息中的运动轨迹信息;
    根据所述运动轨迹信息,确定所述目标多媒体信息在至少一个未植入视频帧的至少一个目标承载位置;所述至少一个未植入视频帧为所述当前待植入视频镜头信息中除所述预设待植入视频帧之外剩余的视频帧;
    基于所述至少一个目标承载位置,将所述目标多媒体信息,植入所述至少一个未植入视频帧的所述预设植入实体的承载面上,得到所述已植入视频镜头信息。
  10. 根据权利要求1至3任一项所述的方法,其中,所述根据所述至少一个倾斜度差信息,从所述至少一个预设多媒体信息中,确定满足预设倾斜度差条件的目标多媒体信息,包括:
    根据所述预设倾斜度差条件,从所述至少一个倾斜度差信息中,选择最小倾斜度差信息;
    从所述至少一个预设多媒体信息中,确定与所述最小倾斜度差信息对应的预设多媒体信息,得到初始目标多媒体信息;
    根据所述预设待植入视频帧,对所述初始目标多媒体信息进行渲染处理,得到所述目标多媒体信息。
  11. 根据权利要求8所述的方法,其中,所述将所述已植入视频镜头组合与所述未植入视频镜头组合进行视频融合,得到目标视频信息之后, 所述方法还包括:
    当接收到视频加载请求时,根据所述视频加载请求,通过播放设备播放所述目标视频信息。
  12. 一种信息植入装置,包括:
    后景倾斜度获取部分,配置为获取预设植入实体在预设待植入视频帧中的后景倾斜度信息;所述预设待植入视频帧为预设视频信息中用于植入多媒体信息的最小单位,所述后景倾斜度信息为所述预设待植入视频帧中所述预设植入实体的承载面的倾斜度信息;
    前景倾斜度获取部分,配置为获取至少一个预设多媒体信息对应的至少一个前景倾斜度信息;所述至少一个前景倾斜度信息中的每个前景倾斜度信息为对应的预设多媒体信息的待接触面的倾斜度信息;
    倾斜度差获取部分,配置为获取所述后景倾斜度信息和所述至少一个前景倾斜度信息的倾斜度差,得到至少一个倾斜度差信息;
    目标确定部分,配置为根据所述至少一个倾斜度差信息,从所述至少一个预设多媒体信息中,确定满足预设倾斜度差条件的目标多媒体信息;
    植入部分,配置为将所述目标多媒体信息,植入所述预设待植入视频帧的所述预设植入实体的承载面上,得到目标视频帧。
  13. 一种信息植入设备,包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至11任一项所述的信息植入方法。
  14. 一种计算机存储介质,存储有可执行指令,用于引起处理器执行时,实现权利要求1至11任一项所述的信息植入方法。
PCT/CN2020/098462 2019-06-27 2020-06-28 一种信息植入方法、装置、设备及计算机存储介质 WO2020259676A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20831160.5A EP3993433A4 (en) 2019-06-27 2020-06-28 INFORMATION EMBEDDING METHOD AND APPARATUS, DEVICE AND COMPUTER STORAGE MEDIUM
US17/384,516 US11854238B2 (en) 2019-06-27 2021-07-23 Information insertion method, apparatus, and device, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910569777.2 2019-06-27
CN201910569777.2A CN110213629B (zh) 2019-06-27 2019-06-27 一种信息植入方法、装置、服务器及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/384,516 Continuation US11854238B2 (en) 2019-06-27 2021-07-23 Information insertion method, apparatus, and device, and computer storage medium



