CN117082269A - Interactive node adjustment method and device, electronic equipment and readable storage medium - Google Patents

Interactive node adjustment method and device, electronic equipment and readable storage medium

Info

Publication number
CN117082269A
Authority
CN
China
Prior art keywords
video
image
interaction node
target interaction
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311047429.1A
Other languages
Chinese (zh)
Inventor
刘晓丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing IQIYI Science and Technology Co Ltd
Original Assignee
Beijing IQIYI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing IQIYI Science and Technology Co Ltd filed Critical Beijing IQIYI Science and Technology Co Ltd
Priority to CN202311047429.1A priority Critical patent/CN117082269A/en
Publication of CN117082269A publication Critical patent/CN117082269A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6587Control parameters, e.g. trick play commands, viewpoint selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the application provides an interaction node adjustment method and device, an electronic device, and a readable storage medium. The method includes: determining a target interaction node corresponding to a first video; determining a first image corresponding to the target interaction node, the first image being the image at the starting moment of the target interaction node in the first video; judging, based on the similarity between each frame image in a second video and the first image, whether the target interaction node is reserved in the second video, the second video being used to replace the first video; in the case that the target interaction node is reserved in the second video, determining from the second video a second image whose similarity with the first image is greater than a first threshold; and replacing the starting moment of the target interaction node with the moment corresponding to the second image in the second video. The embodiment of the application can improve the efficiency of adjusting the starting moment of the interaction node.

Description

Interactive node adjustment method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of interactive video technologies, and in particular, to a method and apparatus for adjusting an interactive node, an electronic device, and a readable storage medium.
Background
Interactive video is a new type of video. An interactive video generally comprises a plurality of video clips, and an interaction node appears during the playback of a certain clip. To keep the storyline smooth, the appearance of the interaction node must match the video picture.
After a user modifies and replaces a video clip, the moment at which the same picture appears in the video may change, and the starting moment of the interaction node then needs to be adjusted accordingly to ensure that the interaction node remains accurately aligned with the content. At present, the starting moment of the interaction node is usually adjusted manually, which is inefficient.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, an electronic device, and a readable storage medium for adjusting an interaction node, so as to improve efficiency of adjusting an interaction node start time. The specific technical scheme is as follows:
in a first aspect of the present application, there is provided a method for adjusting an interaction node, including:
determining a target interaction node corresponding to the first video;
determining a first image corresponding to the target interaction node, wherein the first image is an image corresponding to the starting moment of the target interaction node in the first video;
judging whether the target interaction node is reserved in a second video or not based on the similarity between each frame of image in the second video and the first image, wherein the second video is used for replacing the first video;
determining a second image from the second video under the condition that the target interaction node is reserved in the second video, wherein the similarity between the second image and the first image is larger than a first threshold;
and replacing the starting moment of the target interaction node with the corresponding moment of the second image in the second video.
Optionally, the determining whether the target interaction node is reserved in the second video based on the similarity between each frame image in the second video and the first image includes:
calculating the similarity between each frame of image in the second video and the first image;
judging whether the second video comprises an image with the similarity with the first image being greater than or equal to a second threshold value or not;
and under the condition that the second video comprises the image with the similarity with the first image being greater than or equal to the second threshold value, determining that the target interaction node is reserved in the second video, otherwise, determining that the target interaction node is not reserved in the second video.
Optionally, after the determining whether the target interaction node is reserved in the second video based on the similarity between each frame image in the second video and the first image, the method further includes:
and deleting the target interaction node under the condition that the target interaction node is not reserved in the second video.
Optionally, the determining, in the case that the target interaction node is retained in the second video, a second image from the second video includes:
and under the condition that the target interaction node is reserved in the second video, determining the image with the highest similarity with the first image in the second video as a second image.
Optionally, the calculating the similarity between each frame of image in the second video and the first image includes:
calculating a first hash value of the first image and calculating a second hash value of each frame of image in the second video;
and determining the similarity between each frame of image in the second video and the first image based on the first hash value and the second hash value.
Optionally, the determining the target interaction node corresponding to the first video includes:
acquiring a video identification of a first video;
and determining a target interaction node corresponding to the first video from a script file based on the video identification, wherein the script file comprises information of the interaction node corresponding to each video in the interaction videos.
Optionally, the determining, based on the video identifier, a target interaction node corresponding to the first video from a script file includes:
determining an interval identifier of a target playing interval corresponding to the first video based on the video identifier;
and determining a target interaction node from the script file based on the interval identifier of the target playing interval, wherein the interval identifier corresponding to the target interaction node is matched with the interval identifier of the target playing interval.
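The two-step lookup above (video identifier to interval identifier, interval identifier to interaction nodes) can be sketched as follows. The dict-based script-file schema and all field names are assumptions for illustration only, since the text does not fix a concrete file format.

```python
# Hypothetical script-file layout: playing intervals map a video id to
# an interval id, and each interaction node records the interval it
# appears in plus its starting moment (in seconds).
script = {
    "play_intervals": [
        {"interval_id": "playblock0", "video_id": "tvid0"},
        {"interval_id": "playblock1", "video_id": "tvid1"},
    ],
    "interaction_nodes": [
        {"node_id": "interactiveblock0", "interval_id": "playblock0", "start_time": 220.0},
        {"node_id": "interactiveblock1", "interval_id": "playblock1", "start_time": 95.0},
    ],
}

def find_target_nodes(script, video_id):
    """Return every interaction node whose playing interval belongs to `video_id`."""
    # Step 1: video id -> set of matching interval ids.
    interval_ids = {
        pi["interval_id"]
        for pi in script["play_intervals"]
        if pi["video_id"] == video_id
    }
    # Step 2: keep the nodes whose interval id matches.
    return [n for n in script["interaction_nodes"] if n["interval_id"] in interval_ids]
```

A set of interval ids is used in the first step so that a video appearing in several playing intervals still collects all of its nodes.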
In a second aspect of the present application, there is also provided an adjustment device for an interaction node, including:
the first determining module is used for determining a target interaction node corresponding to the first video;
the second determining module is used for determining a first image corresponding to the target interaction node, wherein the first image is an image corresponding to the starting moment of the target interaction node in the first video;
the judging module is used for judging whether the target interaction node is reserved in the second video or not based on the similarity between each frame of image in the second video and the first image, and the second video is used for replacing the first video;
a third determining module, configured to determine, in a case where it is determined that the target interaction node is retained in the second video, a second image from the second video, where a similarity between the second image and the first image is greater than a first threshold;
and the replacing module is used for replacing the starting moment of the target interaction node with the corresponding moment of the second image in the second video.
In a third aspect of the present application, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for implementing the steps of the above interaction node adjustment method when executing the program stored in the memory.
In a fourth aspect of the present application, there is also provided a readable storage medium having a program stored thereon, which when executed by a processor, implements the steps of the above-mentioned method for adjusting an interaction node.
In the embodiment of the application, after the first video is replaced by the second video, the second image in the second video can be determined according to the similarity between the first image corresponding to the target interaction node in the first video and each frame image in the second video, and the starting moment of the target interaction node is adjusted accordingly based on the moment of the second image in the second video. The target interaction node therefore remains aligned with the video picture after the source video is replaced, and the efficiency of adjusting the starting moment of the interaction node is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of a method for adjusting an interaction node according to an embodiment of the application;
FIG. 2 is a schematic diagram of an organization structure of an interactive video according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an adjusting device for an interaction node according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms first, second and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the application may be practised in orders other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
Referring to fig. 1, an embodiment of the present application provides a method for adjusting an interaction node generated in an interactive video. In an exemplary embodiment, the interactive video includes a plurality of videos; after the source video of one of them is replaced, the starting moment of the interaction node on that video can be adjusted automatically based on the method provided by the embodiment of the application.
As shown in fig. 1, an embodiment of the present application provides a method for adjusting an interaction node, where the method specifically includes the following steps:
step 101, determining a target interaction node corresponding to the first video.
Step 102, determining a first image corresponding to the target interaction node, where the first image is an image corresponding to the start time of the target interaction node in the first video.
And step 103, judging whether the target interaction node is reserved in the second video or not based on the similarity between each frame of image in the second video and the first image, wherein the second video is used for replacing the first video.
Step 104, determining a second image from the second video under the condition that the target interaction node is reserved in the second video, wherein the similarity between the second image and the first image is larger than a first threshold.
And 105, replacing the starting time of the target interaction node with the corresponding time of the second image in the second video.
The interactive video comprises a plurality of videos or video clips, some of which are provided with interaction nodes. Depending on the actual content of the interactive video, the same video or video clip may appear multiple times in one interactive video, so the number of interaction nodes corresponding to a certain video is not limited herein.
The script file of the interactive video is a collection of scripts describing the content, organization structure, and parameters of the interactive video. The script file includes information about the interaction nodes in the interactive video, for example, the video segment corresponding to an interaction node, the starting moment of the interaction node, and the identification information of the interaction node. In a specific implementation, the starting moment of the target interaction node is replaced with the moment corresponding to the second image in the second video by modifying the parameters of the target interaction node in the script file.
It should be noted that, when the first video corresponds to multiple target interaction nodes, the first images corresponding to these nodes may be the same or different. In that case, the first image corresponding to each target interaction node must be determined, the second image corresponding to each node found, and the starting moment of each target interaction node replaced with the moment of its second image in the second video. Referring to fig. 2, for convenience of description, the interactive video shown in fig. 2 is taken as an example. The interactive video comprises a plurality of videos. Assume video D is the first video; video D appears twice in the interactive video, and both occurrences are provided with interaction nodes, so the interaction nodes corresponding to video D include interaction node 3 and interaction node 4. For video D, the starting moment of interaction node 3 may or may not be the same as that of interaction node 4.
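As an illustration of the point that the same video may appear several times, each occurrence with its own interaction node, the organisation structure of fig. 2 might be modelled as below. The list-of-occurrences schema, field names, and node ids are hypothetical; the patent does not fix a concrete format.

```python
# Hypothetical representation of an interactive video's organisation
# structure: videos may repeat, and each occurrence can carry its own
# interaction node (cf. video D with interaction nodes 3 and 4).
interactive_video = [
    {"video": "A", "node": None},
    {"video": "D", "node": {"id": "node3", "start_time": 220.0}},
    {"video": "B", "node": None},
    {"video": "D", "node": {"id": "node4", "start_time": 95.0}},
]

def nodes_for_video(structure, video):
    """All interaction nodes attached to occurrences of `video`."""
    return [e["node"] for e in structure if e["video"] == video and e["node"]]
```

Here `nodes_for_video(interactive_video, "D")` yields both of video D's nodes, each of which must be adjusted independently when D's source is replaced.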
It should be understood that the specific manner of determining the target interaction node corresponding to the first video is not limited herein. Optionally, in some embodiments, the step 101 includes:
acquiring a video identification of a first video;
and determining a target interaction node corresponding to the first video from a script file based on the video identification, wherein the script file comprises information of the interaction node corresponding to each video in the interaction videos.
The video identifier of the first video is used for identifying the first video, and in a specific implementation, the video identifier of the first video can be obtained from a script file of the interactive video. Each video included in the interactive video corresponds to a unique video identifier. It should be understood that, in order to facilitate management of a video in an interactive video, after a script file of the interactive video is generated, replacing a source file of a certain video does not change a video identifier of the video.
In this embodiment, the first video is the video to be replaced. Its video identifier is obtained and looked up in the script file to determine all target interaction nodes corresponding to the first video. This improves both the convenience and the accuracy of determining the target interaction node.
It should be understood that, in a specific implementation, the specific manner in which the target interaction node is determined from the script file based on the video identification is different according to the structure of the script file. Optionally, in some embodiments, the determining, based on the video identifier, a target interaction node corresponding to the first video from a script file includes:
determining an interval identifier of a target playing interval corresponding to the first video based on the video identifier;
and determining a target interaction node from the script file based on the interval identifier of the target playing interval, wherein the interval identifier corresponding to the target interaction node is matched with the interval identifier of the target playing interval.
In this embodiment, the script file includes a playing interval corresponding to each video identifier. The playing interval is used for representing the video clips in the interactive video and the set of related attribute information thereof. Each playing interval corresponds to a unique interval identifier. And determining a target playing interval corresponding to the first video based on the video identification, and further inquiring the script file based on the interval identification of the target playing interval, thereby determining a target interaction node.
The second video is the video used to replace the first video; in a specific implementation, the content of the second video is generally highly similar to that of the first video. Illustratively, in some embodiments the second video is a re-edited version of the first video; in other embodiments it is the first video with a watermark added, and so on.
The first image corresponding to the starting moment of the target interaction node in the first video is determined, and a matching image is then searched for in the second video. Based on the similarity between each frame image in the second video and the first image, it is first judged whether the target interaction node is reserved in the second video.
Optionally, in some embodiments, the step 103 includes:
calculating the similarity between each frame of image in the second video and the first image;
judging whether the second video comprises an image with the similarity with the first image being greater than or equal to a second threshold value or not;
and under the condition that the second video comprises the image with the similarity with the first image being greater than or equal to the second threshold value, determining that the target interaction node is reserved in the second video, otherwise, determining that the target interaction node is not reserved in the second video.
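The retention check in the three steps above reduces to comparing the best per-frame similarity against the second threshold, as in this minimal sketch (the threshold value 0.9 is an assumed placeholder, not fixed by the text):

```python
def node_reserved(similarities, second_threshold=0.9):
    """Judge whether the target interaction node is reserved in the second video.

    `similarities` holds the similarity of each frame in the second video
    to the first image; the node is reserved iff at least one frame is
    similar enough (similarity >= second_threshold).
    """
    return max(similarities, default=0.0) >= second_threshold
```

The `default=0.0` guards the degenerate case of an empty second video, which simply fails the check.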
Optionally, the calculating the similarity between each frame of image in the second video and the first image includes:
calculating a first hash value of the first image and calculating a second hash value of each frame of image in the second video;
and determining the similarity between each frame of image in the second video and the first image based on the first hash value and the second hash value.
A first hash value of the first image is calculated with any image hash algorithm (such as the mean hash or difference hash algorithm), a second hash value of each frame image in the second video is calculated with the same algorithm, and the similarity between each frame image in the second video and the first image is determined based on the first hash value and the second hash value.
The similarity of each frame image in the second video to the first image may be determined from the similarity of the first hash value and the second hash value. In a specific implementation, the format of the hash value depends on the image hash algorithm adopted, so the way the similarity of two hash values is calculated may differ.
For ease of understanding, an example follows. In this embodiment, based on the mean hash algorithm, the hash value of each frame image in the second video and the hash value of the first image are computed through steps such as shrinking the image, converting it to greyscale, calculating the average pixel value, comparing the grey level of each pixel with the average, and generating the hash value. The hash value produced by the mean hash algorithm is an integer with a fixed number of bits (for example, 64 bits), and the similarity between each frame image in the second video and the first image is determined from the Hamming distance between their hash values. Judging the similarity between images through hash values improves both the efficiency and the accuracy of the judgment.
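A minimal sketch of the mean hash and Hamming-distance similarity just described, assuming greyscale frames whose dimensions divide evenly into the 8x8 grid (real code would first resize and grey-convert each decoded frame):

```python
import numpy as np

def mean_hash(image, size=8):
    """64-bit mean hash: shrink to size x size, then threshold on the mean.

    `image` is a 2-D array of grey values standing in for a decoded frame.
    """
    h, w = image.shape
    # Shrink by block averaging (assumes h and w are multiples of size).
    small = image.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    # Each cell contributes one bit: 1 if above the mean grey level.
    return (small > small.mean()).astype(np.uint8).flatten()

def similarity(hash_a, hash_b):
    """Similarity = 1 minus the normalised Hamming distance of two bit arrays."""
    return 1.0 - np.count_nonzero(hash_a != hash_b) / hash_a.size
```

An identical frame hashes to itself (similarity 1.0), while an edited or different frame yields a lower score, which is then compared against the thresholds described in the text.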
In practical applications, the starting moment of the interaction node usually needs to be aligned with the video picture. For example, a person in the video is walking, and the interaction node appears when the person reaches an intersection, prompting the user to choose a left or right turn. Only when the starting moment of the interaction node is accurately aligned with the image content in the video, that is, when the node appears just as the person reaches the intersection, can a good interactive playback effect be achieved.
For example, when the person in the first video walks to the intersection, an interaction node appears, the user is prompted to choose left or right, and different video branches are played according to the user's selection; the first image corresponding to the starting moment of the interaction node is therefore the image at which the person has just reached the intersection.
The second video is obtained by editing the first video. In one case, the pictures of the person walking to the intersection are deleted from the second video; all images in the second video then have low similarity with the first image. From the logic of the video content, the person never reaches the intersection, so the user does not need to choose a turn either, and in this case there is no need to keep the target interaction node in the second video.
Optionally, in some embodiments, after the step 103, the method further includes:
and deleting the target interaction node under the condition that the target interaction node is not reserved in the second video.
In another case, the person walks to the intersection at 3 minutes 40 seconds of the first video, so the starting moment of the interaction node is 3 minutes 40 seconds. The pictures of the person walking to the intersection are kept in the second video, so the target interaction node needs to be reserved. However, because the second video is an edited version of the first video, the person now reaches the intersection at 3 minutes 15 seconds, and the starting moment of the interaction node must be adjusted so that it stays aligned with the image content in the video.
And under the condition that the target interaction node is reserved in the second video, determining a second image from the second video, wherein the similarity between the second image and the first image is larger than a first threshold value. The first threshold value can be set and adjusted according to actual requirements.
In some embodiments, the second video contains multiple images whose similarity with the first image is greater than the first threshold. As one alternative embodiment, one of these images is selected at random as the second image. As another alternative embodiment, the image with the earliest appearance time among them is determined to be the second image.
Optionally, in another alternative embodiment, the step 104 includes:
and under the condition that the target interaction node is reserved in the second video, determining the image with the highest similarity with the first image in the second video as a second image.
In this embodiment, under the condition that the target interaction node is reserved in the second video, the image with the highest similarity with the first image in the second video is determined to be the second image, so that the matching degree between the starting time of the target interaction node and the video picture is further improved, and the accuracy of adjusting the interaction node is improved.
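Selecting the second image as the above-threshold frame with the highest similarity can be sketched as follows; `None` stands for the case where no frame qualifies (so the node would not be reserved):

```python
def determine_second_image(similarities, first_threshold):
    """Pick the frame index of the second image.

    Among frames whose similarity to the first image exceeds the first
    threshold, the single closest one is chosen; None means no candidate.
    """
    candidates = [i for i, s in enumerate(similarities) if s > first_threshold]
    if not candidates:
        return None
    return max(candidates, key=similarities.__getitem__)
```

Restricting the argmax to above-threshold candidates keeps this consistent with the earlier random-choice and earliest-frame alternatives, which draw from the same candidate set.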
In the embodiment of the application, after the first video is replaced by the second video, the second image in the second video can be determined according to the similarity between the first image corresponding to the target interaction node in the first video and each frame image in the second video, and the starting moment of the target interaction node is adjusted accordingly based on the moment of the second image in the second video. The target interaction node therefore remains aligned with the video picture after the source video is replaced, and the efficiency of adjusting the starting moment of the interaction node is improved.
For ease of understanding, a specific example is described below. Suppose that after the interactive video goes online, it includes a video clip A0 whose video identifier (id) is tvid0, and the video material is edited so that A0 is replaced by A1; the video id does not change and remains tvid0.
Step 1: search the script file of the interactive video for the playing interval whose video id is tvid0, and denote its playing interval id as playblock0.
Step 2: search the script file of the interactive video and traverse all interaction nodes to find the interaction nodes whose playing interval id is playblock0, that is, all interaction nodes appearing in segment A0, denoted interactblock0.
Step 3: find the starting time of the interactblock0 interaction node, denoted starttime0.
Step 4: find the frame image at starttime0 in the original segment A0 and denote it as image0.
Step 5: calculate the hash value of each frame image in the new segment A1, and the hash value of image0, based on an image hash algorithm such as the mean hash algorithm or the difference hash algorithm. When the new segment A1 contains an image whose similarity to image0 is greater than the second threshold, search the new segment A1 for the frame image closest to image0 based on the hash values, denoted image1.
Step 6: fill the time time1 of image1 into the script file of the interactive video as the new starting time of the interaction node in interactblock0.
In a specific implementation, if multiple video clips in the interactive video are replaced, steps 1 to 6 are repeated to adjust the interaction nodes corresponding to each video in turn, and the adjusted script file of the interactive video is regenerated and released to the system.
In this embodiment, the new video segment is analyzed with an image hash algorithm, the image content corresponding to the starting time of the interaction node is found in the new segment, and the starting time of the interaction node is adjusted automatically, generating a new interaction script. In this way, the adjustment cost and labor required when interactive video material is replaced are reduced, and the efficiency of adjusting interaction nodes is improved.
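The core of Steps 4 to 6 can be sketched in Python. This is a minimal illustration under stated assumptions, not the actual implementation: frames are modeled as small grayscale matrices already reduced to 8x8, whereas a real system would decode and resize video frames (for example with OpenCV) before hashing, and all function names are hypothetical:

```python
# Sketch of matching the node's frame (image0) against a new segment using
# a mean hash (aHash) and choosing the closest frame as the new start.

def average_hash(frame):
    """Mean hash: one bit per pixel, set when the pixel exceeds the frame mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_similarity(h0, h1):
    """Similarity in [0, 1]: fraction of matching hash bits."""
    same = sum(1 for a, b in zip(h0, h1) if a == b)
    return same / len(h0)

def adjust_node_start(image0, new_frames, fps, second_threshold=0.9):
    """Return the new start time (seconds) of the node in the new segment,
    or None when no frame is similar enough (the node is not retained)."""
    h0 = average_hash(image0)
    best_idx, best_sim = None, 0.0
    for idx, frame in enumerate(new_frames):
        sim = hamming_similarity(h0, average_hash(frame))
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    if best_sim <= second_threshold:
        return None                      # node deleted rather than adjusted
    return best_idx / fps                # time1 of the matched frame image1

# Toy data: image0 reappears as the third frame (index 2) of the new segment.
dark = [[10] * 8 for _ in range(8)]
image0 = [[200 if (r + c) % 2 else 20 for c in range(8)] for r in range(8)]
new_frames = [dark, dark, image0, dark]
print(adjust_node_start(image0, new_frames, fps=25))  # -> 0.08
```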
Referring to fig. 3, fig. 3 is a block diagram of an adjustment apparatus 300 for an interaction node according to an embodiment of the application. As shown in fig. 3, this embodiment provides an adjustment apparatus 300 for an interaction node, including:
the first determining module 301 is configured to determine a target interaction node corresponding to the first video;
a second determining module 302, configured to determine a first image corresponding to the target interaction node, where the first image is an image corresponding to a start time of the target interaction node in the first video;
a judging module 303, configured to judge whether the target interaction node is reserved in a second video based on a similarity between each frame image and the first image in the second video, where the second video is used to replace the first video;
a third determining module 304, configured to determine, in a case where it is determined that the target interaction node is retained in the second video, a second image from the second video, where a similarity between the second image and the first image is greater than a first threshold;
and a replacing module 305, configured to replace the starting time of the target interaction node with the corresponding time of the second image in the second video.
Optionally, the judging module 303 includes:
the computing unit is used for computing the similarity between each frame of image in the second video and the first image;
a judging unit configured to judge whether an image having a similarity with the first image greater than or equal to a second threshold value is included in the second video;
and the first determining unit is used for determining that the target interaction node is reserved in the second video under the condition that the second video comprises the image with the similarity with the first image being greater than or equal to the second threshold value, otherwise, determining that the target interaction node is not reserved in the second video.
Optionally, the adjusting device 300 of the interaction node further includes:
and the deleting module is used for deleting the target interaction node under the condition that the target interaction node is not reserved in the second video.
Optionally, the third determining module 304 is specifically configured to:
and under the condition that the target interaction node is reserved in the second video, determining the image with the highest similarity with the first image in the second video as a second image.
Optionally, the computing unit is specifically configured to:
calculating a first hash value of the first image and calculating a second hash value of each frame of image in the second video;
and determining the similarity between each frame of image in the second video and the first image based on the first hash value and the second hash value.
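The hash-based similarity computed by the computing unit can be sketched with a difference hash (dHash), under the assumption that frames have already been reduced to small grayscale matrices; the frame sizes and names below are illustrative only:

```python
# Difference hash: compare each horizontally adjacent pixel pair, then
# measure similarity as one minus the normalized Hamming distance.

def difference_hash(frame):
    """dHash: one bit per adjacent pixel pair (set when left < right)."""
    return [1 if row[i] < row[i + 1] else 0
            for row in frame for i in range(len(row) - 1)]

def hash_similarity(h0, h1):
    """1 - normalized Hamming distance between equal-length bit lists."""
    diff = sum(a != b for a, b in zip(h0, h1))
    return 1 - diff / len(h0)

a = [[1, 2, 3, 2], [4, 3, 2, 1]]  # toy 4-wide, 2-row "frames"
b = [[1, 2, 3, 4], [4, 3, 2, 1]]
print(hash_similarity(difference_hash(a), difference_hash(b)))  # 5 of 6 bits match
```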
Optionally, the first determining module 301 includes:
the acquisition unit is used for acquiring the video identification of the first video;
and the second determining unit is used for determining a target interaction node corresponding to the first video from a script file based on the video identification, wherein the script file comprises information of the interaction node corresponding to each video in the interaction videos.
Optionally, the second determining unit is specifically configured to:
determining an interval identifier of a target playing interval corresponding to the first video based on the video identifier;
and determining a target interaction node from the script file based on the interval identifier of the target playing interval, wherein the interval identifier corresponding to the target interaction node is matched with the interval identifier of the target playing interval.
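The lookup performed by the acquisition unit and the second determining unit can be sketched as follows; the script-file structure and all field names are hypothetical assumptions, not the application's actual format:

```python
# Map a video id to its playing interval id, then collect the interaction
# nodes whose interval id matches (Steps 1-2 of the worked example).

script = {
    "play_blocks": [{"block_id": "playblock0", "tvid": "tvid0"}],
    "interaction_nodes": [
        {"node_id": "interactblock0", "block_id": "playblock0", "start": 12.0},
        {"node_id": "interactblock1", "block_id": "playblock9", "start": 40.0},
    ],
}

def find_target_nodes(script, tvid):
    """Return the interaction nodes in the playing intervals of video `tvid`."""
    block_ids = {b["block_id"] for b in script["play_blocks"] if b["tvid"] == tvid}
    return [n for n in script["interaction_nodes"] if n["block_id"] in block_ids]

print([n["node_id"] for n in find_target_nodes(script, "tvid0")])  # -> ['interactblock0']
```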
The adjustment device 300 for an interaction node provided in the embodiment of the present application can implement each process implemented by the above method embodiment; to avoid repetition, details are not repeated here.
The embodiment of the application also provides an electronic device, as shown in fig. 4, which includes a processor 401, a communication interface 402, a memory 403 and a communication bus 404, where the processor 401, the communication interface 402 and the memory 403 communicate with each other through the communication bus 404;
a memory 403 for storing a program;
the processor 401, when executing the program stored in the memory 403, implements the following steps:
determining a target interaction node corresponding to the first video;
determining a first image corresponding to the target interaction node, wherein the first image is an image corresponding to the starting moment of the target interaction node in the first video;
judging whether the target interaction node is reserved in a second video or not based on the similarity between each frame of image in the second video and the first image, wherein the second video is used for replacing the first video;
determining a second image from the second video under the condition that the target interaction node is reserved in the second video, wherein the similarity between the second image and the first image is larger than a first threshold;
and replacing the starting moment of the target interaction node with the corresponding moment of the second image in the second video.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
calculating the similarity between each frame of image in the second video and the first image;
judging whether the second video comprises an image with the similarity with the first image being greater than or equal to a second threshold value or not;
and under the condition that the second video comprises the image with the similarity with the first image being greater than or equal to the second threshold value, determining that the target interaction node is reserved in the second video, otherwise, determining that the target interaction node is not reserved in the second video.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
and deleting the target interaction node under the condition that the target interaction node is not reserved in the second video.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
and under the condition that the target interaction node is reserved in the second video, determining the image with the highest similarity with the first image in the second video as a second image.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
calculating a first hash value of the first image and calculating a second hash value of each frame of image in the second video;
and determining the similarity between each frame of image in the second video and the first image based on the first hash value and the second hash value.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
acquiring a video identification of a first video;
and determining a target interaction node corresponding to the first video from a script file based on the video identification, wherein the script file comprises information of the interaction node corresponding to each video in the interaction videos.
Optionally, the processor 401 is further configured to execute the program stored on the memory 403, thereby implementing the following steps:
determining an interval identifier of a target playing interval corresponding to the first video based on the video identifier;
and determining a target interaction node from the script file based on the interval identifier of the target playing interval, wherein the interval identifier corresponding to the target interaction node is matched with the interval identifier of the target playing interval.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present application, a readable storage medium is provided, where instructions are stored, which when executed on a processor, cause the processor to perform the method for adjusting an interaction node according to any of the above embodiments.
In yet another embodiment of the present application, there is also provided a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method for adjusting an interaction node according to any of the above embodiments.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A method for adjusting an interaction node, comprising:
determining a target interaction node corresponding to the first video;
determining a first image corresponding to the target interaction node, wherein the first image is an image corresponding to the starting moment of the target interaction node in the first video;
judging whether the target interaction node is reserved in a second video or not based on the similarity between each frame of image in the second video and the first image, wherein the second video is used for replacing the first video;
determining a second image from the second video under the condition that the target interaction node is reserved in the second video, wherein the similarity between the second image and the first image is larger than a first threshold;
and replacing the starting moment of the target interaction node with the corresponding moment of the second image in the second video.
2. The method of claim 1, wherein the determining whether the target interaction node is retained in the second video based on the similarity of each frame image in the second video to the first image comprises:
calculating the similarity between each frame of image in the second video and the first image;
judging whether the second video comprises an image with the similarity with the first image being greater than or equal to a second threshold value or not;
and under the condition that the second video comprises the image with the similarity with the first image being greater than or equal to the second threshold value, determining that the target interaction node is reserved in the second video, otherwise, determining that the target interaction node is not reserved in the second video.
3. The method of claim 2, wherein after determining whether the target interaction node is retained in the second video based on the similarity of each frame of image in the second video to the first image, the method further comprises:
and deleting the target interaction node under the condition that the target interaction node is not reserved in the second video.
4. The method of claim 2, wherein determining a second image from the second video with the target interaction node retained in the second video comprises:
and under the condition that the target interaction node is reserved in the second video, determining the image with the highest similarity with the first image in the second video as a second image.
5. The method of claim 2, wherein said calculating the similarity of each frame of image in the second video to the first image comprises:
calculating a first hash value of the first image and calculating a second hash value of each frame of image in the second video;
and determining the similarity between each frame of image in the second video and the first image based on the first hash value and the second hash value.
6. The method of any one of claims 1-5, wherein determining the target interaction node to which the first video corresponds comprises:
acquiring a video identification of a first video;
and determining a target interaction node corresponding to the first video from a script file based on the video identification, wherein the script file comprises information of the interaction node corresponding to each video in the interaction videos.
7. The method of claim 6, wherein determining the target interaction node corresponding to the first video from the script file based on the video identification comprises:
determining an interval identifier of a target playing interval corresponding to the first video based on the video identifier;
and determining a target interaction node from the script file based on the interval identifier of the target playing interval, wherein the interval identifier corresponding to the target interaction node is matched with the interval identifier of the target playing interval.
8. An adjustment device for an interactive node, comprising:
the first determining module is used for determining a target interaction node corresponding to the first video;
the second determining module is used for determining a first image corresponding to the target interaction node, wherein the first image is an image corresponding to the starting moment of the target interaction node in the first video;
the judging module is used for judging whether the target interaction node is reserved in the second video or not based on the similarity between each frame of image in the second video and the first image, and the second video is used for replacing the first video;
a third determining module, configured to determine, in a case where it is determined that the target interaction node is retained in the second video, a second image from the second video, where a similarity between the second image and the first image is greater than a first threshold;
and the replacing module is used for replacing the starting moment of the target interaction node with the corresponding moment of the second image in the second video.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the method according to any one of claims 1-7 when executing a program stored on a memory.
10. A readable storage medium, on which a program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any of claims 1-7.
CN202311047429.1A 2023-08-18 2023-08-18 Interactive node adjustment method and device, electronic equipment and readable storage medium Pending CN117082269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311047429.1A CN117082269A (en) 2023-08-18 2023-08-18 Interactive node adjustment method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311047429.1A CN117082269A (en) 2023-08-18 2023-08-18 Interactive node adjustment method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117082269A true CN117082269A (en) 2023-11-17

Family

ID=88701782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311047429.1A Pending CN117082269A (en) 2023-08-18 2023-08-18 Interactive node adjustment method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117082269A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination