CN111949819A - Method and device for pushing video - Google Patents

Method and device for pushing video Download PDF

Info

Publication number
CN111949819A
CN111949819A
Authority
CN
China
Prior art keywords
video
candidate push
similarity
videos
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910408019.2A
Other languages
Chinese (zh)
Inventor
严林
乔木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910408019.2A priority Critical patent/CN111949819A/en
Publication of CN111949819A publication Critical patent/CN111949819A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7328Query by example, e.g. a complete video frame or video sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles

Abstract

Embodiments of the disclosure disclose a method and a device for pushing videos. One embodiment of the method comprises: receiving an image sent by a target user through a corresponding terminal device; acquiring a candidate push video set; determining, for a candidate push video in the candidate push video set, the similarity between the image and the candidate push video; selecting candidate push videos from the candidate push video set according to their respective similarities; and pushing the selected candidate push videos to the terminal device corresponding to the target user. This implementation helps improve how well the videos pushed to the user match the user's intention.

Description

Method and device for pushing video
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for pushing videos.
Background
At present, keyword-based information push is one of the common information push modes. Specifically, the user inputs keywords according to a search intention, the server analyzes the user's search intention from the keywords, selects information according to the analysis result, and pushes it to the user.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for pushing videos.
In a first aspect, an embodiment of the present disclosure provides a method for pushing video, the method including: receiving an image sent by a target user through a corresponding terminal device; acquiring a candidate push video set; determining, for a candidate push video in the candidate push video set, the similarity between the image and the candidate push video; selecting candidate push videos from the candidate push video set according to their respective similarities; and pushing the selected candidate push videos to the terminal device corresponding to the target user.
In some embodiments, determining a similarity of the image to the candidate pushed video includes: extracting a first number of frames of the candidate push video; determining a similarity of the image to the first number of frames as a similarity of the image to the candidate push video.
In some embodiments, extracting the first number of frames of the candidate push video comprises: a first number of frames of the candidate push video is extracted based on the image.
In some embodiments, extracting a first number of frames of the candidate push video from the image comprises: uniformly dividing the candidate push video into a second number of sub-videos; determining the similarity between each of the sub-videos in the second number of sub-videos and the image to obtain a similarity set; for the sub-videos in the second number of sub-videos, determining a quotient value of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the obtained quotient value and the first number as the target number corresponding to the sub-video; a target number of frames is extracted from the sub-video.
In some embodiments, selecting a candidate push video from the candidate push video set according to their respective similarities comprises: and selecting candidate push videos from the candidate push video set according to the sequence of the corresponding similarity from large to small.
In a second aspect, an embodiment of the present disclosure provides an apparatus for pushing video, the apparatus including: a receiving unit configured to receive an image transmitted by a target user through a corresponding terminal device; an acquisition unit configured to acquire a set of candidate push videos; a determination unit configured to determine, for a candidate push video of a set of candidate push videos, a similarity of an image to the candidate push video; a selecting unit configured to select candidate push videos from the candidate push video set according to respective corresponding similarities of the candidate push videos in the candidate push video set; and the pushing unit is configured to push the selected candidate pushed video to the terminal equipment corresponding to the target user.
In some embodiments, the determining unit is further configured to: extracting a first number of frames of the candidate push video; determining a similarity of the image to the first number of frames as a similarity of the image to the candidate push video.
In some embodiments, the determining unit is further configured to: a first number of frames of the candidate push video is extracted based on the image.
In some embodiments, the determining unit is further configured to: uniformly dividing the candidate push video into a second number of sub-videos; determining the similarity between each of the sub-videos in the second number of sub-videos and the image to obtain a similarity set; for the sub-videos in the second number of sub-videos, determining a quotient value of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the obtained quotient value and the first number as the target number corresponding to the sub-video; a target number of frames is extracted from the sub-video.
In some embodiments, the selecting unit is further configured to: and selecting candidate push videos from the candidate push video set according to the sequence of the corresponding similarity from large to small.
In a third aspect, an embodiment of the present disclosure provides a server, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which computer program, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
The method and device for pushing video provided by the embodiments of the disclosure push corresponding videos to the user according to an image sent by the user. Since the image sent by the user usually expresses the user's intention, the push information the user desires can be assumed to bear a certain similarity to that image. Accordingly, by comparing each candidate push video with the image sent by the user and selecting, according to the resulting similarities, the candidate push videos to be pushed to the user's terminal device, the pushed videos match the user's intention better and the user receives a desired video sooner, that is, the user needs to browse fewer videos before receiving a desired one. This in turn reduces the traffic consumed by the user's terminal device and by the server in pushing videos during this process.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for pushing video, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for pushing video in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for pushing video in accordance with the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for pushing video in accordance with the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 to which embodiments of the method for pushing video or the apparatus for pushing video of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with the server 105 via the network 104 to receive or send messages and the like. Various client applications may be installed on the terminal devices 101, 102, 103, such as browser applications, search applications, information sharing applications, and social applications.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting image and video display, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
The server 105 may be a server that provides various services, such as a backend server that pushes corresponding videos to the terminal devices according to the images sent by the terminal devices 101, 102, 103. The back-end server may determine the similarity between the image sent by the user and each candidate pushed video, and then select a candidate pushed video to push to the terminal device 101, 102, 103 according to the determined similarity.
It should be noted that the method for pushing video provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for pushing video is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for pushing video in accordance with the present disclosure is shown. The method for pushing the video comprises the following steps:
step 201, receiving an image sent by a target user through a corresponding terminal device.
In the present embodiment, the execution body of the method for pushing video (e.g., the server 105 shown in fig. 1) may receive images sent by the terminal devices used by users (e.g., the terminal devices 101, 102, 103 shown in fig. 1).
The target user may be the user corresponding to any terminal device that is in communication connection with the above execution body. The image may be an image stored in the terminal device or an image photographed on the spot by the target user using the terminal device.
When the target user has a video he or she wishes to browse, that wish can be expressed by sending an image. As an example, when a user sees a music poster in a store and wants to watch the MV (Music Video) of that piece of music, the user can photograph the poster with the terminal device and send the photo to the above execution body.
Step 202, a candidate push video set is obtained.
In this embodiment, the candidate push video set may be a set of videos pre-specified by a technician, or a set of videos filtered out according to preset conditions.
For example, some videos in a database connected to the above execution body may be obtained as target videos to form a target video set. Historical behavior data of the target user can then be obtained, and based on this data the click-through rate of the target user on each target video in the set is predicted using an existing click-through-rate prediction method. Target videos whose click-through rate exceeds a preset click-through-rate threshold are then selected from the target video set as candidate push videos, yielding the candidate push video set.
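A minimal Python sketch of this filtering step follows; it is not part of the patent text, and predict_ctr is a hypothetical stand-in for whatever existing click-through-rate prediction method is used.

```python
from typing import Callable, List

def build_candidate_set(target_videos: List[dict],
                        user_history: List[dict],
                        predict_ctr: Callable[[List[dict], dict], float],
                        ctr_threshold: float = 0.1) -> List[dict]:
    """Keep target videos whose predicted click-through rate exceeds the threshold."""
    return [video for video in target_videos
            if predict_ctr(user_history, video) > ctr_threshold]
```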
Step 203, for a candidate pushed video in the candidate pushed video set, determining the similarity between the image and the candidate pushed video.
In this embodiment, the similarity between each candidate push video and the image sent by the target user is determined; the similarity determination method can be chosen flexibly for different application scenarios.
Optionally, taking one candidate push video as an example, a feature vector characterizing the candidate push video and a feature vector characterizing the image sent by the target user can be extracted using various existing feature-extraction methods. The similarity between the two extracted feature vectors can then be calculated directly as the similarity between the candidate push video and the image sent by the target user.
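A minimal sketch of this option is given below; the disclosure does not prescribe a feature extractor or a metric, so the snippet simply assumes two already-extracted feature vectors and uses cosine similarity as one possible choice.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (one possible metric)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Example usage: video_vec and image_vec would come from any existing
# feature-extraction method; the score is used directly as the video-image similarity.
# similarity = cosine_similarity(video_vec, image_vec)
```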
In some optional implementations of the present embodiment, taking a candidate push video as an example, a first number of frames of the candidate push video may be extracted first. The similarity of the image sent by the target user to the first number of frames may then be determined as the similarity of the image sent by the target user to the candidate push video.
The first number may be preset by a technician, or may be determined according to some attributes (such as a frame rate) of the candidate push video.
When extracting the frames of the candidate push video, the first number of frames of the candidate push video may be randomly extracted, or the first number of frames of the candidate push video may be extracted by using various existing key frame extraction methods.
Alternatively, the first number of frames of the candidate push video may be extracted based on an image sent by the user.
As an example, the image sent by the user may first be analyzed to identify a key object displayed in the image, and frames displaying the key object are then preferentially selected from the candidate push video. For example, if no frame displaying the key object is identified in the candidate push video, a first number of frames may be selected at random; if frames displaying the key object are identified, those frames are preferentially selected.
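The following sketch illustrates one way such a preference could be coded; shows_key_object is a hypothetical detector, since the disclosure does not specify how the key object is recognized.

```python
import random
from typing import Callable, List, Sequence

def select_frames(frames: Sequence, key_object: str, first_number: int,
                  shows_key_object: Callable[[object, str], bool]) -> List:
    """Prefer frames showing the key object; otherwise (or to top up) pick frames at random."""
    matching = [i for i, f in enumerate(frames) if shows_key_object(f, key_object)]
    if not matching:
        picked = random.sample(range(len(frames)), min(first_number, len(frames)))
    else:
        picked = matching[:first_number]
        others = [i for i in range(len(frames)) if i not in set(matching)]
        picked += random.sample(others, min(first_number - len(picked), len(others)))
    return [frames[i] for i in picked]
```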
After the first number of frames is obtained, the similarity between the image sent by the target user and each of these frames can be determined, and the similarity between the image and the candidate push video is then determined from the obtained per-frame similarities.
For example, the maximum of these similarities may be taken as the similarity between the image sent by the target user and the candidate push video; alternatively, their average may be used.
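A minimal sketch of this aggregation step, assuming a per-frame similarity function such as the cosine similarity above; the choice between maximum and average is left open by the disclosure.

```python
from statistics import mean
from typing import Callable, Sequence

def aggregate_similarity(image, frames: Sequence,
                         frame_image_similarity: Callable[[object, object], float],
                         use_max: bool = True) -> float:
    """Image-to-video similarity via extracted frames: max or mean of per-frame scores."""
    scores = [frame_image_similarity(image, frame) for frame in frames]
    return max(scores) if use_max else mean(scores)
```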
And 204, selecting candidate push videos from the candidate push video set according to the similarity corresponding to the candidate push videos in the candidate push video set.
In this embodiment, candidate push videos whose similarity exceeds a preset similarity threshold may first be selected. From these, several candidate push videos may then be chosen at random, or chosen evenly according to the distribution of their similarities.
The specific number of selections may be preset by a technician, or may be determined according to specific screening conditions.
In some optional implementations of this embodiment, candidate push videos may be selected from the candidate push video set in descending order of similarity.
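A minimal sketch of such a selection, assuming a similarity threshold and a selection count (top_k) that the disclosure leaves to the implementer:

```python
from typing import Dict, List

def select_push_videos(similarities: Dict[str, float],
                       top_k: int = 3,
                       threshold: float = 0.0) -> List[str]:
    """Pick candidate push videos in descending order of similarity, above a threshold."""
    ranked = sorted(similarities.items(), key=lambda item: item[1], reverse=True)
    return [video_id for video_id, score in ranked if score > threshold][:top_k]
```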
Step 205, the selected candidate push video is pushed to the terminal device corresponding to the target user.
With continued reference to fig. 3, fig. 3 is a schematic diagram 300 of an application scenario of the method for pushing video according to the present embodiment. In the application scenario of fig. 3, the user can take an image 302 using the terminal device 301 and send the image 302 to the above execution body. The execution body may retrieve the candidate push video set 304 from the connected database 303.
Thereafter, the similarity of each candidate push video in the set of candidate push videos 304 to the image 302 may be determined separately. As illustrated, the candidate pushed video set 304 includes three videos with respective similarities of 0.85, 0.5, and 0.4 to the image 302. Then, the candidate push video with the largest similarity may be selected to be pushed to the terminal device 301.
In the method provided by the above embodiment of the present disclosure, each candidate push video is compared with the image sent by the user, and candidate push videos are selected according to the resulting similarities and pushed to the terminal device corresponding to the user. Since the image sent by the user usually expresses the user's intention, the push information the user desires can be assumed to bear a certain similarity to that image. In this way, the pushed videos match the user's intention better and the user receives a desired video sooner, that is, the user needs to browse fewer videos before receiving a desired one. This in turn reduces the traffic consumed by the user's terminal device and by the server in pushing videos during this process.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for pushing video is shown. The flow 400 of the method for pushing video comprises the following steps:
step 401, receiving an image sent by a target user through a corresponding terminal device.
Step 402, acquiring a candidate push video set.
The specific implementation process of steps 401 and 402 can refer to the related description of steps 201 and 202 in the corresponding embodiment of fig. 2, and is not repeated herein.
In step 403, for a candidate push video in the candidate push video set, the following steps 4031-4033 are performed to extract a first number of frames of the candidate push video:
step 4031, the candidate push video is evenly divided into a second number of sub-videos.
In this step, the second number may be preset by a technician, or may be determined from attributes of the candidate push video (e.g., duration or frame rate). The manner of division can be chosen according to the application scenario.
For example, the duration of each sub-video may be determined based on the duration of the candidate push video and the second number, thereby dividing the candidate push video into the second number of sub-videos. For another example, the total number of frames corresponding to the candidate push video may be determined according to the frame rate and the duration of the candidate push video. And then determining the frame number corresponding to each sub-video according to the total frame number and the second number, thereby dividing the candidate push video into the second number of sub-videos.
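As a sketch only, dividing a candidate push video, represented here as a sequence of frames (an assumption for illustration), evenly into the second number of sub-videos could look like this; division by duration is analogous and not shown.

```python
from typing import List, Sequence

def divide_into_sub_videos(frames: Sequence, second_number: int) -> List[List]:
    """Evenly divide a candidate push video, given as a frame sequence, into sub-videos."""
    total = len(frames)
    bounds = [round(i * total / second_number) for i in range(second_number + 1)]
    return [list(frames[bounds[i]:bounds[i + 1]]) for i in range(second_number)]
```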
Step 4032, determining the similarity between each of the sub-videos in the second number of sub-videos and the image to obtain a similarity set.
In this embodiment, the method for determining the similarity between each sub-video and the image may flexibly select different modes. For example, the similarity between each sub-video and the image may be determined by referring to the method for calculating the similarity between the determined video and the image described in the corresponding embodiment of fig. 2.
Alternatively, taking a sub-video as an example, several frames may be randomly extracted from the sub-video, and then an average value or a maximum value of the similarity between each extracted frame and the image is calculated as the similarity between the sub-video and the image.
Step 4033, for the sub-video in the second number of sub-videos, determining a quotient of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the quotient and the first number as the target number corresponding to the sub-video; a target number of frames is extracted from the sub-video.
In some cases, for example when one part of the candidate push video is highly similar to the image sent by the user while the rest is not, randomly extracting the first number of frames from the candidate push video is likely to yield mostly frames from the low-similarity part, and the similarity then calculated from those frames would not accurately reflect the candidate push video.
In this embodiment, the number of frames extracted from each sub-video is determined by the similarity corresponding to that sub-video, with the number of extracted frames in direct proportion to that similarity. Compared with extracting the first number of frames from the candidate push video directly at random or uniformly, this avoids inaccurate results caused by poorly chosen frames and ensures the accuracy of the calculation.
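Putting steps 4031-4033 together, a minimal sketch of the proportional frame allocation might look as follows; sub_video_similarity and sample_frames are assumed helpers (for example, the mean similarity over a few random frames, and random frame sampling), and the target number is rounded since the disclosure does not specify how fractional counts are handled.

```python
from typing import Callable, List, Sequence

def extract_frames_by_similarity(sub_videos: Sequence,
                                 image,
                                 first_number: int,
                                 sub_video_similarity: Callable[[object, object], float],
                                 sample_frames: Callable[[object, int], List]) -> List:
    """Allocate the first number of frames across sub-videos in proportion to similarity."""
    sims = [sub_video_similarity(sub_video, image) for sub_video in sub_videos]
    total = sum(sims) or 1.0          # guard against an all-zero similarity set
    frames: List = []
    for sub_video, sim in zip(sub_videos, sims):
        # quotient (sim / total) multiplied by the first number gives the target number
        target_number = round(first_number * sim / total)
        frames.extend(sample_frames(sub_video, target_number))
    return frames
```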
At step 404, the similarity between the image and the first number of frames is determined as the similarity between the image and the candidate push video.
Step 405, selecting candidate push videos from the candidate push video set according to the similarity corresponding to the candidate push videos in the candidate push video set.
And step 406, pushing the selected candidate pushed video to the terminal device corresponding to the target user.
The specific implementation process of the above steps 404, 405, and 406 can refer to the related descriptions of the steps 203, 204, and 205 in the corresponding embodiment of fig. 2, and will not be described herein again.
In the prior art, when matching an image with a video, the content of the image is generally analyzed and recognized to determine what the image contains, and the content of the video to be matched is analyzed and recognized in the same way. For example, the text displayed in the image and the video, the objects displayed and their attributes, and the persons displayed and their attributes are analyzed. The similarity between the image and the video is then calculated from the recognition results.
As can be seen from fig. 4, in the method for pushing video of this embodiment, several frames are extracted from each sub-video into which a candidate push video is divided, and the similarity between the extracted frames and the image sent by the user is calculated as the similarity between the candidate push video and that image. Compared with the prior art, this omits the complicated image-analysis and video-analysis processes, greatly reducing the amount of computation needed to determine the similarity between a candidate push video and the image sent by the user, and thus reducing the computational load on the server.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for pushing a video, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices in particular.
As shown in fig. 5, the apparatus 500 for pushing video provided by the present embodiment includes a receiving unit 501, an obtaining unit 502, a determining unit 503, a selecting unit 504, and a pushing unit 505. Wherein, the receiving unit 501 is configured to receive an image sent by a target user through a corresponding terminal device; the obtaining unit 502 is configured to obtain a set of candidate push videos; the determining unit 503 is configured to determine, for a candidate push video of the set of candidate push videos, a similarity of an image to the candidate push video; the selecting unit 504 is configured to select candidate push videos from the candidate push video set according to the similarities corresponding to the candidate push videos in the candidate push video set respectively; the pushing unit 505 is configured to push the selected candidate pushed video to the terminal device corresponding to the target user.
In the present embodiment, in the apparatus 500 for pushing video: the specific processing of the receiving unit 501, the obtaining unit 502, the determining unit 503, the selecting unit 504, and the pushing unit 505 and the technical effects thereof can refer to the related descriptions of step 201, step 202, step 203, step 204, and step 205 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the determining unit 503 is further configured to: extracting a first number of frames of the candidate push video; determining a similarity of the image to the first number of frames as a similarity of the image to the candidate push video.
In some optional implementations of this embodiment, the determining unit 503 is further configured to: a first number of frames of the candidate push video is extracted based on the image.
In some optional implementations of this embodiment, the determining unit 503 is further configured to: uniformly dividing the candidate push video into a second number of sub-videos; determining the similarity between each of the sub-videos in the second number of sub-videos and the image to obtain a similarity set; for the sub-videos in the second number of sub-videos, determining a quotient value of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the obtained quotient value and the first number as the target number corresponding to the sub-video; a target number of frames is extracted from the sub-video.
In some optional implementations of the present embodiment, the selecting unit 504 is further configured to: select candidate push videos from the candidate push video set in descending order of the corresponding similarity.
In the apparatus provided by the above embodiment of the present disclosure, the receiving unit receives an image sent by a target user through a corresponding terminal device; the acquisition unit acquires a candidate push video set; the determining unit determines, for a candidate push video in the candidate push video set, the similarity between the image and the candidate push video; the selecting unit selects candidate push videos from the candidate push video set according to their respective similarities; and the pushing unit pushes the selected candidate push videos to the terminal device corresponding to the target user. This helps improve how well the pushed videos match the user's intention and reduces the time before the user receives a desired video, that is, the number of videos the user needs to browse before receiving a desired one, which in turn reduces the traffic consumed by the user's terminal device and by the server in pushing videos during this process.
Referring now to FIG. 6, a schematic diagram of an electronic device (e.g., the server of FIG. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the server; or may exist separately and not be assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: receiving an image sent by a target user through corresponding terminal equipment; acquiring a candidate push video set; determining similarity of an image and a candidate pushed video in a candidate pushed video set; selecting candidate push videos from the candidate push video set according to the similarity corresponding to the candidate push videos in the candidate push video set respectively; and pushing the selected candidate push video to the terminal equipment corresponding to the target user.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises a receiving unit, an obtaining unit, a determining unit, a selecting unit and a pushing unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the receiving unit may also be described as a "unit that receives an image transmitted by a target user through a corresponding terminal apparatus".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A method for pushing video, comprising:
receiving an image sent by a target user through corresponding terminal equipment;
acquiring a candidate push video set;
determining, for a candidate pushed video in the set of candidate pushed videos, a similarity of the image and the candidate pushed video;
selecting candidate push videos from the candidate push video set according to the similarity corresponding to the candidate push videos in the candidate push video set respectively;
and pushing the selected candidate push video to the terminal equipment corresponding to the target user.
2. The method of claim 1, wherein the determining the similarity of the image to the candidate pushed video comprises:
extracting a first number of frames of the candidate push video;
determining a similarity of the image to the first number of frames as a similarity of the image to the candidate push video.
3. The method of claim 2, wherein said extracting the first number of frames of the candidate push video comprises:
a first number of frames of the candidate push video is extracted based on the image.
4. The method of claim 3, wherein said extracting a first number of frames of the candidate push video from the image comprises:
uniformly dividing the candidate push video into a second number of sub-videos;
determining the similarity between the sub-videos in the second number of sub-videos and the image respectively to obtain a similarity set;
for the sub-videos in the second number of sub-videos, determining a quotient value of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the quotient value and the first number as a target number corresponding to the sub-video; the target number of frames is extracted from the sub-video.
5. The method according to any one of claims 1 to 4, wherein the selecting a candidate push video from the candidate push video set according to the similarity degree corresponding to each candidate push video in the candidate push video set comprises:
and selecting candidate push videos from the candidate push video set according to the sequence of the corresponding similarity from large to small.
6. An apparatus for pushing video, comprising:
a receiving unit configured to receive an image transmitted by a target user through a corresponding terminal device;
an acquisition unit configured to acquire a set of candidate push videos;
a determining unit configured to determine, for a candidate push video of the set of candidate push videos, a similarity of the image to the candidate push video;
a selecting unit configured to select candidate push videos from the candidate push video set according to respective corresponding similarities of the candidate push videos in the candidate push video set;
and the pushing unit is configured to push the selected candidate pushed video to the terminal equipment corresponding to the target user.
7. The apparatus of claim 6, wherein the determination unit is further configured to:
extracting a first number of frames of the candidate push video;
determining a similarity of the image to the first number of frames as a similarity of the image to the candidate push video.
8. The apparatus of claim 7, wherein the determination unit is further configured to:
a first number of frames of the candidate push video is extracted based on the image.
9. The apparatus of claim 8, wherein the determination unit is further configured to:
uniformly dividing the candidate push video into a second number of sub-videos;
determining the similarity between the sub-videos in the second number of sub-videos and the image respectively to obtain a similarity set;
for the sub-videos in the second number of sub-videos, determining a quotient value of the similarity corresponding to the sub-video divided by the sum of the similarities in the similarity set, and determining the product of the quotient value and the first number as a target number corresponding to the sub-video; the target number of frames is extracted from the sub-video.
10. The apparatus according to one of claims 6-9, wherein the selecting unit is further configured to:
and selecting candidate push videos from the candidate push video set according to the sequence of the corresponding similarity from large to small.
11. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910408019.2A 2019-05-15 2019-05-15 Method and device for pushing video Pending CN111949819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910408019.2A CN111949819A (en) 2019-05-15 2019-05-15 Method and device for pushing video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910408019.2A CN111949819A (en) 2019-05-15 2019-05-15 Method and device for pushing video

Publications (1)

Publication Number Publication Date
CN111949819A true CN111949819A (en) 2020-11-17

Family

ID=73336912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910408019.2A Pending CN111949819A (en) 2019-05-15 2019-05-15 Method and device for pushing video

Country Status (1)

Country Link
CN (1) CN111949819A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686404A (en) * 2016-12-16 2017-05-17 中兴通讯股份有限公司 Video analysis platform, matching method, accurate advertisement delivery method and system
US20180276478A1 (en) * 2017-03-24 2018-09-27 International Business Machines Corporation Determining Most Representative Still Image of a Video for Specific User
CN107404656A (en) * 2017-06-26 2017-11-28 武汉斗鱼网络科技有限公司 Live video recommends method, apparatus and server
CN108416013A (en) * 2018-03-02 2018-08-17 北京奇艺世纪科技有限公司 Video matching, retrieval, classification and recommendation method, apparatus and electronic equipment
CN109388721A (en) * 2018-10-18 2019-02-26 百度在线网络技术(北京)有限公司 The determination method and apparatus of cover video frame

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124793A1 (en) * 2021-12-27 2023-07-06 北京沃东天骏信息技术有限公司 Image pushing method and device

Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN108989882B (en) Method and apparatus for outputting music pieces in video
CN108804450B (en) Information pushing method and device
US11310559B2 (en) Method and apparatus for recommending video
CN110213614B (en) Method and device for extracting key frame from video file
CN109862100B (en) Method and device for pushing information
CN108510084B (en) Method and apparatus for generating information
CN109934142B (en) Method and apparatus for generating feature vectors of video
CN111738010B (en) Method and device for generating semantic matching model
CN111897950A (en) Method and apparatus for generating information
CN109919220B (en) Method and apparatus for generating feature vectors of video
CN110188113B (en) Method, device and storage medium for comparing data by using complex expression
CN109829117B (en) Method and device for pushing information
CN109992719B (en) Method and apparatus for determining push priority information
CN112990176B (en) Writing quality evaluation method and device and electronic equipment
CN112907628A (en) Video target tracking method and device, storage medium and electronic equipment
CN111949819A (en) Method and device for pushing video
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN110633411A (en) Method and device for screening house resources, electronic equipment and storage medium
CN113360773B (en) Recommendation method and device, storage medium and electronic equipment
CN115801980A (en) Video generation method and device
CN111666449B (en) Video retrieval method, apparatus, electronic device, and computer-readable medium
CN111259194B (en) Method and apparatus for determining duplicate video
CN111125501B (en) Method and device for processing information
CN113837986A (en) Method, apparatus, electronic device, and medium for recognizing tongue picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination