CN111711856B - Interactive video production method, device, terminal, storage medium and player


Info

Publication number
CN111711856B
CN111711856B (application CN202010837707.3A)
Authority
CN
China
Prior art keywords
interactive
segment
frame
video
specific object
Prior art date
Legal status
Expired - Fee Related
Application number
CN202010837707.3A
Other languages
Chinese (zh)
Other versions
CN111711856A (en)
Inventor
唐亮 (Tang Liang)
Current Assignee
Guangzhou Aggregation Big Data Development Co ltd
Original Assignee
Shenzhen Diantong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Diantong Information Technology Co ltd
Priority to CN202010837707.3A
Publication of CN111711856A
Application granted
Publication of CN111711856B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method and an apparatus for producing an interactive video, comprising the following steps: receiving a plurality of pre-recorded video clips sent by a user together with feature information of a specific object; recognizing the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears; separating those frame elements from the video clips to obtain a plurality of frame element segments containing the specific object; adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element; and inserting each new frame element segment at the timeline position of the corresponding original frame element segment to obtain an interactive video with interactive elements. The method adds dynamic interactive elements to a specific object in a video clip, enriching the interaction means available in interactive video and improving the practicality of interactive video production.

Description

Interactive video production method, device, terminal, storage medium and player
Technical Field
The present invention relates to the field of video production technologies, and in particular, to a method, an apparatus, a terminal, a storage medium, and a player for producing an interactive video.
Background
To increase viewers' sense of participation, many video-playing applications adopt an interactive playback mode: the user makes choices during playback, and the software plays storylines that share a timeline but diverge in plot, so that different endings can be reached. Existing interactive video production, however, does not yet handle adding interactive elements to a specific object in a video clip well, i.e. having the interactive element operate, move, and deform along with the specific object in the display interface while supporting richer interaction types, so that a viewer could freely select a specific object during playback and obtain a better experience.
In view of the foregoing, it is desirable to provide a method, an apparatus, a terminal, a storage medium, and a player for producing an interactive video to overcome the above-mentioned drawbacks.
Disclosure of Invention
The object of the present invention is to provide a method, an apparatus, a terminal, a storage medium, and a player for producing an interactive video, so as to solve the problem that existing interactive video production offers only simple interactive elements for specific objects, and to improve the practicality of interactive video production.
To achieve the above object, the present invention first provides a method for producing an interactive video, comprising the following steps:
receiving a plurality of pre-recorded video clips sent by a user together with feature information of a specific object; wherein this step comprises: receiving a plurality of video clips sent by the user, the clips comprising a plurality of preceding scenario segments and, for each preceding scenario segment, a plurality of parallel-timeline subsequent scenario segments connected to it; and receiving feature information of the specific object sent by the user, the feature information comprising view information and/or local feature information of each part of the specific object;
recognizing the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears;
separating all of those frame elements from the video clips to obtain a plurality of frame element segments containing the specific object;
adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element;
inserting each new frame element segment at the timeline position of the corresponding original frame element segment to obtain an interactive video with interactive elements;
The method for producing the interactive video further comprises the following steps:
acquiring a pre-recorded hidden subsequent scenario segment sent by the user;
acquiring a weight value preset by the user for each preceding scenario segment and each subsequent scenario segment, acquiring the selection results, sent by a client, of the user's choices among the preceding and subsequent scenario segments of the interactive video, and calculating the corresponding total weight value;
and judging whether the total weight value exceeds a preset weight threshold, and if so, inserting the hidden subsequent scenario segment at the corresponding position on the interactive video's timeline.
In a preferred embodiment, the method for producing the interactive video further comprises the steps of:
acquiring, from a plurality of clients, the number of times each preceding scenario segment and each subsequent scenario segment of the interactive video is selected;
extracting, for each choice point in the interactive video, the preceding scenario segment and the subsequent scenario segment selected the most times;
and combining those most-selected preceding and subsequent scenario segments to generate a video with a fixed plot.
In a preferred embodiment, the step of adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element comprises:
receiving an instruction to add an interactive element to a specified frame element segment;
extracting each frame of the specified frame element segment and adding an interactive floating layer at the position of the specific object;
establishing an object connection between the interactive floating layer and the interactive element, or embedding the interactive element, so that when the interactive floating layer is selected on the display interface, the display switches to the interactive element;
and combining the frames to which the interactive floating layer has been added to generate the corresponding new frame element segment.
In a preferred embodiment, the step of adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element comprises:
receiving a plurality of groups of state frame element segments sent by the user, each group comprising a first state frame element segment and a second state frame element segment that show different plot developments of the specific object in the same event;
extracting each frame of the first state frame element segment and adding an interactive floating layer at the position of the specific object;
establishing an object connection between the interactive floating layer and the second state frame element segment, so that when the interactive floating layer is selected on the display interface, playback switches to the second state frame element segment;
and combining the frames of the first state frame element segment to which the interactive floating layer has been added with the second state frame element segment to generate the corresponding new frame element segment.
In a preferred embodiment, the step of adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element comprises:
receiving a replacement graphic and/or a replacement sound source sent by the user;
extracting each frame and/or the audio track information of the specified frame element segment;
replacing or overlaying the specific object in each frame with the replacement graphic, and/or replacing the specific object's sound source in the audio track information with the replacement sound source;
and recombining the frames replaced or overlaid with the replacement graphic and/or the audio track information with the replaced sound source to generate the corresponding new frame element segment.
Another aspect of the present invention provides an apparatus for producing an interactive video, comprising:
a receiving module for receiving a plurality of pre-recorded video clips sent by a user together with feature information of a specific object; the receiving module comprises a first receiving unit and a second receiving unit; the first receiving unit is used for receiving a plurality of video clips sent by the user, the clips comprising a plurality of preceding scenario segments and, for each preceding scenario segment, a plurality of parallel-timeline subsequent scenario segments connected to it; the second receiving unit is used for receiving the feature information of the specific object sent by the user, the feature information comprising view information and/or local feature information of each part of the specific object;
a recognition module for recognizing the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears;
a separation module for separating all of those frame elements from the video clips to obtain a plurality of frame element segments containing the specific object;
an interaction module for adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element;
a generation module for inserting each new frame element segment at the timeline position of the corresponding original frame element segment to obtain an interactive video with interactive elements;
wherein the apparatus for producing an interactive video further comprises:
a first acquisition module for acquiring a pre-recorded hidden subsequent scenario segment sent by the user;
a second acquisition module for acquiring a weight value preset by the user for each preceding scenario segment and each subsequent scenario segment;
a third acquisition module for acquiring the selection results, sent by a client, of the user's choices among the preceding and subsequent scenario segments of the interactive video, and calculating the corresponding total weight value;
and a judgment module for judging whether the total weight value exceeds a preset weight threshold and, if so, inserting the hidden subsequent scenario segment at the corresponding position on the interactive video's timeline.
In a preferred embodiment, the apparatus for producing an interactive video further comprises:
a fourth acquisition module for acquiring, from a plurality of clients, the number of times each preceding scenario segment and each subsequent scenario segment of the interactive video is selected;
an extraction module for extracting, for each choice point in the interactive video, the preceding scenario segment and the subsequent scenario segment selected the most times;
and a combination module for combining those most-selected preceding and subsequent scenario segments to generate a video with a fixed plot.
In a preferred embodiment, the interaction module comprises:
an instruction receiving unit for receiving an instruction to add an interactive element to a specified frame element segment;
a first interaction unit for extracting each frame of the specified frame element segment and adding an interactive floating layer at the position of the specific object;
an interaction embedding unit for establishing an object connection between the interactive floating layer and the interactive element, or embedding the interactive element, so that when the interactive floating layer is selected on the display interface, the display switches to the interactive element;
and a first generation unit for combining the frames to which the interactive floating layer has been added to generate the corresponding new frame element segment.
In a preferred embodiment, the interaction module comprises:
a segment receiving unit for receiving a plurality of groups of state frame element segments sent by the user, each group comprising a first state frame element segment and a second state frame element segment that show different plot developments of the specific object in the same event;
a second interaction unit for extracting each frame of the first state frame element segment and adding an interactive floating layer at the position of the specific object;
an interaction generation unit for establishing an object connection between the interactive floating layer and the second state frame element segment, so that when the interactive floating layer is selected on the display interface, playback switches to the second state frame element segment;
and a second generation unit for combining the frames of the first state frame element segment to which the interactive floating layer has been added with the second state frame element segment to generate the corresponding new frame element segment.
In a preferred embodiment, the interaction module comprises:
a sound-and-picture receiving unit for receiving a replacement graphic and/or a replacement sound source sent by the user;
a sound-and-picture extraction unit for extracting each frame and/or the audio track information of the specified frame element segment;
a sound-and-picture replacement unit for replacing or overlaying the specific object in each frame with the replacement graphic, and/or replacing the specific object's sound source in the audio track information with the replacement sound source;
and a third generation unit for recombining the frames replaced or overlaid with the replacement graphic and/or the audio track information with the replaced sound source to generate the corresponding new frame element segment.
A further aspect of the present invention provides a terminal comprising a memory, a processor, and an interactive video production program stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of the interactive video production method of any of the above embodiments.
The present invention further provides a computer-readable storage medium storing an interactive video production program which, when executed by a processor, implements the steps of the interactive video production method of any of the above embodiments.
The invention also provides a player comprising a playback module for playing interactive videos produced by the interactive video production method of any of the above embodiments.
By recognizing a specific object in a video clip, extracting the frame element segments containing it, adding interactive elements to the specific object within those segments, recombining them into new frame element segments, and substituting these for the originals, the invention adds dynamic interactive elements to a specific object in a video clip, enriching the interaction means of interactive video and improving the practicality of interactive video production.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for producing an interactive video according to the present invention;
FIG. 2 is a flow chart of a first interactive implementation of the method for making the interactive video shown in FIG. 1;
FIG. 3 is a flow chart of a second interactive implementation of the method for making the interactive video shown in FIG. 1;
FIG. 4 is a flow chart of a third interactive implementation of the method for making the interactive video shown in FIG. 1;
FIG. 5 is a flowchart of another embodiment of a method for making the interactive video shown in FIG. 1;
FIG. 6 is a flow chart of yet another embodiment of a method for making the interactive video shown in FIG. 1;
FIG. 7 is a block diagram of an apparatus for producing interactive video according to the present invention;
FIG. 8 is a block diagram of an apparatus for producing an interactive video shown in FIG. 7 according to another embodiment;
FIG. 9 is a block diagram of an apparatus for producing an interactive video shown in FIG. 7 according to another embodiment;
FIG. 10 is a block diagram of an interactive module in an embodiment of the apparatus for producing an interactive video shown in FIG. 7;
FIG. 11 is a block diagram of an interactive module in another embodiment of the apparatus for producing interactive video shown in FIG. 7;
FIG. 12 is a block diagram of the interaction module in another embodiment of the apparatus for producing an interactive video shown in FIG. 7.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are intended for illustration only and are not intended to limit the scope of the invention.
The present invention provides a method for producing an interactive video via a video production platform, on which a video producer (i.e. a user) can add interactive elements to pre-recorded video clips to generate an interactive video containing those elements. It will be appreciated that the produced interactive video can also be played on the same platform, and that the platform can aggregate interaction data from viewers, such as the number of interactions and their selection results, to obtain playback statistics across many clients.
As shown in FIG. 1, the method includes the following steps S11-S15.
In step S11, a plurality of pre-recorded video clips sent by a user are received together with feature information of a specific object.
In this step, the user shoots a number of video clips according to a script; the clips comprise a plurality of preceding scenario segments and, for each preceding scenario segment, a plurality of parallel-timeline subsequent scenario segments. A scenario segment is shot first; its plot may then develop in several directions, and the user shoots one clip for each important plot direction as needed. Clips based on different developments of the same event are called subsequent scenario segments on parallel timelines; at any given moment only one of them can be played. Each subsequent scenario segment can in turn spawn new subsequent scenario segments, so the plot forms a tree structure in which each node is a preceding or subsequent scenario segment. Note that "preceding" and "subsequent" merely describe a temporal connection: the head of a subsequent scenario segment joins the tail of a preceding one, and its own tail may join the head of another subsequent or preceding segment; the terms do not otherwise constrain the connections.
In addition, the user designates a specific object in the video clips, which can be a vehicle, a person, an animal, or any other object appearing in them. For example, to add an interactive element to a vehicle in a clip, the user uploads feature information of that vehicle to the platform so that the platform can recognize it. Specifically, the feature information comprises view information and/or local feature information of each part of the specific object: during shooting, the user can separately photograph each view of the object (e.g. its six orthographic views or axonometric views) and take magnified close-up samples of its distinctive parts, yielding per-view and per-part feature information that aids recognition. Note that a single video clip may contain one or more specific objects, and different interactive elements may be added to different objects.
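By way of illustration, the segment tree and feature information described above can be modeled as plain data structures. The following is a minimal Python sketch under assumptions of our own: the class and field names (ScenarioSegment, FeatureInfo, and so on) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureInfo:
    """Feature information for one specific object: view images and local part samples."""
    object_id: str
    view_images: List[str] = field(default_factory=list)   # e.g. six orthographic views
    part_samples: List[str] = field(default_factory=list)  # close-up samples of distinctive parts

@dataclass
class ScenarioSegment:
    """One node in the scenario tree: a preceding segment with parallel-timeline successors."""
    video_path: str
    successors: List["ScenarioSegment"] = field(default_factory=list)

    def add_branch(self, branch: "ScenarioSegment") -> None:
        # All successors share the same start point; only one is played at a time.
        self.successors.append(branch)
```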
In step S12, the specific object is recognized in the video clips according to the feature information, obtaining all frame elements in which it appears.
Recognition can proceed in at least the following two ways.
First, the video clip is processed frame by frame: the feature information of the specific object is pattern-matched against every frame, and if the matching score exceeds a threshold, the object is considered present in that frame, yielding all frame elements (a frame element being a single frame) that contain it. The feature information here includes surface features, point features, and the like. To avoid interference from similar-looking patterns, isolated frame elements or very short runs of consecutive frame elements in which the object is only suspected to appear can be treated as invalid identifications; that is, frame elements count as valid only when the object appears in more consecutive frames than a preset threshold.
Second, the contour of the specific object is extracted from the feature information and sampled as uniformly distributed points that together form the object's outline. Changes of the contour's pose are then quantized into eight direction values, so the contour in each frame is represented as a vector of direction values, and the contour vectors are clustered to recognize the object's behavior in each frame. In other words, a contour map represents the object's pose per frame, and a vector of distances from points on the gait contour to the vertical line through the centroid serves as the feature vector for recognizing the object's motion. This approach suits non-rigid, articulated objects and tolerates some variation in viewing angle, frame rate, and camera distance.
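The following sketch is one possible reading of this second approach: it quantizes the direction of successive contour steps into eight bins and returns a normalized direction histogram that could be fed to a clustering step. The binning scheme is an assumption, not the patent's exact formulation.

```python
import cv2
import numpy as np

def contour_direction_histogram(mask):
    """Describe an object silhouette by quantizing the direction between successive
    contour points into 8 bins, yielding a pose vector suitable for clustering."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.zeros(8)
    pts = max(contours, key=cv2.contourArea).squeeze(1).astype(np.float32)
    deltas = np.diff(pts, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])              # direction of each contour step
    bins = ((angles + np.pi) / (2 * np.pi) * 8).astype(int) % 8  # quantize to 8 directions
    hist = np.bincount(bins, minlength=8).astype(np.float32)
    return hist / max(hist.sum(), 1)                             # normalized direction histogram
```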
It will be appreciated that a specific object in a video clip can also be recognized in other ways; the invention is not limited in this respect.
In step S13, all of those frame elements are separated from the video clip to obtain a plurality of frame element segments containing the specific object.
In this step, all valid frame elements containing the specific object are separated out to form frame element segments of a certain length, and during separation the position of each segment on the timeline of the whole video clip is recorded. Furthermore, since the user may not want to add interactive elements to every segment, the platform lets the user select individual frame element segments, and interactive elements are then added only to the selected ones.
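For example, grouping the valid frame indices produced by the recognition step into frame element segments, while recording each segment's timeline position, might look like the sketch below (the function and field names are hypothetical):

```python
def frames_to_segments(valid_frames, fps):
    """Group consecutive valid frame indices into frame element segments,
    recording each segment's position on the video's time axis."""
    segments = []
    if not valid_frames:
        return segments
    start = prev = valid_frames[0]
    for f in valid_frames[1:]:
        if f != prev + 1:                      # gap: close the current segment
            segments.append({"first": start, "last": prev,
                             "t_start": start / fps, "t_end": (prev + 1) / fps})
            start = f
        prev = f
    segments.append({"first": start, "last": prev,
                     "t_start": start / fps, "t_end": (prev + 1) / fps})
    return segments
```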
In step S14, an interactive element is added to the specific object in the frame element segments, yielding new frame element segments carrying the interactive element.
In this step, interactive elements are added to the selected frame element segments, anchored on the specific object. When the finished interactive video is played and the specific object is selected, whether by mouse, by AR/VR focus operation, or by voice input, the interactive element pops up or is switched to on the client's display interface. A new frame element segment thus contains both the original plot content and the interactive element, i.e. the interactive floating layer.
Specifically, at least the following three interactive implementations may be included.
First, the first interactive implementation, shown in FIG. 2, includes the following steps S1411-S1414.
In step S1411, an instruction to add an interactive element to a specified frame element segment is received, together with the interactive element and any related user commands for that segment. Interactive elements range from light ones (e.g. spot-the-difference, prop exploration, gomoku/chess/Go) to heavy ones (e.g. driving a car or flying a plane). They can be uploaded by the user or downloaded over the network through the production platform.
In step S1412, each frame of the frame element segment is extracted, and an interactive floating layer is added at the position of the specific object. The object's position may change from frame to frame, for example moving continuously from left to right, so the floating layer is added at the object's position in every frame element; when the frame elements are played in sequence, the floating layer moves with the object across the segment's display. The floating layer can be a semi-transparent button, or a transparent layer covering the whole or part of the object's outline; separate floating layers can even be set for different parts of the object.
Specifically, the style of the interactive floating layer may be a template provided by the production platform, a user-defined pattern, or an outline shape generated by the platform from the object's contour; naturally, when the object's view differs between frame elements, the generated outline shapes differ correspondingly, one per frame. For the details of generating the floating layer, reference may be made to the prior art; the invention is not limited here.
In step S1413, an object connection is established between the interactive floating layer and the interactive element, or the interactive element is embedded; when the floating layer is selected on the display interface, the display switches to the interactive element. For the principle and implementation of this object connection, reference may be made to the prior art. For example, suppose the specific object is a Go piece in some clip: the floating layer is attached to the piece and follows it, and a Go mini-game is object-connected to the layer. When a client clicks the piece's position on the display interface, the floating layer is triggered and the platform's playback module issues a control instruction so that the client shows the Go game in split screen or switches to it, increasing the immersion of viewing. Furthermore, several subsequent scenario segments can be attached to the segment's timeline and several key board positions defined in the Go game, each corresponding to one subsequent scenario segment; depending on the moves made at the client, playback immediately switches to the corresponding subsequent scenario segment, strengthening the interactivity of playback and producing a better playing effect.
In step S1414, the frames to which the interactive floating layer has been added are combined to generate the corresponding new frame element segment. That is, after the interactive element has been added to the specific object in every frame of the selected segment, a new frame element segment is generated; its timeline information and audio information remain identical to the original segment's.
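Putting the pieces of this first implementation together, the interactive floating layer can be thought of as a per-frame hotspot bound to an interactive element. The sketch below is an illustrative data model, not the patent's implementation; all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class InteractiveLayer:
    """A floating hotspot that follows the specific object across a frame element
    segment; selecting it switches the display to the connected interactive element."""
    element_id: str                                                            # e.g. a mini-game
    boxes: Dict[int, Tuple[int, int, int, int]] = field(default_factory=dict)  # frame -> (x, y, w, h)

    def hit(self, frame_idx: int, x: int, y: int) -> bool:
        box = self.boxes.get(frame_idx)
        if box is None:
            return False
        bx, by, bw, bh = box
        return bx <= x < bx + bw and by <= y < by + bh

def on_click(layer: InteractiveLayer, frame_idx: int, x: int, y: int) -> Optional[str]:
    # When the floating layer is selected, the player switches to the interactive element.
    return layer.element_id if layer.hit(frame_idx, x, y) else None
```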
Second, the second interactive implementation, shown in FIG. 3, includes the following steps S1421-S1424.
In step S1421, several groups of state frame element segments sent by the user are received; each group comprises a first state frame element segment and a second state frame element segment showing different plot developments of the specific object in the same event. The two are parallel branch plots on the timeline: they start from the same point but develop differently. For example, both may show a vehicle driving on a road, the first in a normal driving state and the second after rolling over midway; the moment at which their timelines diverge is where the interactive floating layer begins.
In step S1422, each frame of the first state frame element segment is extracted, and an interactive floating layer is added at the position of the specific object. The first state segment is set as the default picture: unless the client reports that the user clicked the floating layer, the client keeps displaying the first state segment's plot. Of course, the second state segment could be made the default instead.
In step S1423, an object connection is established between the interactive floating layer and the second state frame element segment; when the floating layer is selected on the display interface, playback switches to the second state segment. Specifically, when the platform receives the client's request indicating that the user selected the floating layer, the client displays the second state segment; the remaining content of the first state segment is no longer played and is skipped in favor of the second state segment. Because the two segments' timelines are parallel, their time frames stay synchronized at the client while only the first is displayed; when the user selects the floating layer at some moment, the first segment's content after that moment is skipped and the display switches to the second segment's content from that moment on, enhancing the interactive effect.
In step S1424, the frames of the first state segment to which the floating layer has been added are combined with the second state segment to generate the corresponding new frame element segment. For details, see step S1414; they are not repeated here.
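The branch switch of this second implementation reduces to splicing two parallel, frame-synchronized segments at the moment the floating layer is selected. A minimal sketch, treating each segment as a list of frames (the function name is hypothetical):

```python
def play_with_branch(first_state, second_state, switch_frame=None):
    """Return the frame sequence to display: the default first-state segment, or,
    if the floating layer was selected at switch_frame, the first state up to that
    frame followed by the second state from the same frame onward (the two
    segments share one parallel, synchronized time axis)."""
    if switch_frame is None:
        return list(first_state)                 # no interaction: default plot
    return list(first_state[:switch_frame]) + list(second_state[switch_frame:])
```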
Finally, the third interactive implementation, shown in FIG. 4, includes the following steps S1431-S1434.
In step S1431, a replacement graphic and/or a replacement sound source sent by the user is received, i.e. the material the user has prepared and uploaded for face or voice replacement. The replacement graphic may include the user's own facial image; to avoid violating others' privacy, the user undergoes real-name and facial verification.
In step S1432, each frame and/or the audio track information of the specified frame element segment is extracted.
In step S1433, the specific object in each frame is replaced with or overlaid by the replacement graphic, and/or the object's sound source in the audio track information is replaced with the replacement sound source. Specifically, on receiving the user's replacement request for a specific object, the platform either overlays the replacement graphic onto the object in every frame, or mattes the object out and fills the matted region with the replacement graphic, completing the replacement. For the specifics of graphic and sound-source replacement, reference may be made to the prior art; the invention is not limited here.
In step S1434, the frames replaced or overlaid with the replacement graphic and/or the audio track information with the replaced sound source are recombined to generate the corresponding new frame element segment. That is, the frames are recombined and the replaced audio track information is merged with the replaced frame element segment to form a new one, achieving automatic graphic and/or sound-source replacement and improving the quality of the interactive video.
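As a rough illustration of the graphic-replacement step, the following sketch overlays a replacement graphic onto the object's bounding box in a single frame using OpenCV; the audio-track side is omitted here, and both images are assumed to be BGR arrays of dtype uint8.

```python
import cv2

def replace_object_region(frame, box, replacement):
    """Overlay a user-supplied replacement graphic onto the specific object's
    bounding box (x, y, w, h) in one frame, resizing the graphic to fit."""
    x, y, w, h = box
    out = frame.copy()
    out[y:y + h, x:x + w] = cv2.resize(replacement, (w, h))  # dsize is (width, height)
    return out
```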
The three implementations above are only a subset of the possible ways to add interactive elements, given to aid understanding; the invention is not limited to them.
It should also be noted that a selection trigger mechanism can be added to the interactive floating layer, using the number of times it is selected within a time window to decide which interactive element is activated. For example, several branch plot videos can be object-connected to a specific object's floating layer, each corresponding to a different interval of selection counts; the platform receives the user's selection count from the client, determines which interval it falls into, and then instructs the client to switch, immediately or with a delay, to the corresponding subsequent scenario segment. A sketch of such a trigger mechanism follows.
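The mapping from selection-count intervals to branch segments might look like this; the intervals and segment ids are invented for illustration:

```python
def pick_branch_by_votes(count, intervals):
    """Map the number of times the floating layer was selected within a time
    window to a branch scenario segment; `intervals` holds
    (low, high, segment_id) tuples with inclusive bounds."""
    for low, high, segment_id in intervals:
        if low <= count <= high:
            return segment_id
    return None  # no interval matched: keep the default storyline

# Illustrative mapping: 0-2 selections -> branch_a, 3-9 -> branch_b, 10+ -> branch_c
intervals = [(0, 2, "branch_a"), (3, 9, "branch_b"), (10, 10**9, "branch_c")]
```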
It is to be understood that steps may overlap between implementations and be cross-referenced; where one embodiment leaves something undescribed, the related description in another embodiment applies, and any combination or division of the above steps by those skilled in the art falls within the scope of the present invention.
In step S15, each new frame element segment is inserted at the timeline position of the corresponding original frame element segment, yielding the interactive video with interactive elements.
Specifically, because interactive elements are added to every frame element of a selected segment, the recombined new segment has the same timeline length as the original, so inserting it at the original's position on the timeline neither splits nor overlaps the video's content. The generated interactive video can be packaged into a mini-program or a standalone app for storage and playback. The production platform provided by the invention includes a matching interactive playback module that decodes the interactive video and renders it at the client.
Further, in one embodiment, as shown in FIG. 5, the method further includes steps S161-S164.
In step S161, a pre-recorded hidden subsequent scenario segment sent by the user is obtained.
In step S162, the weight value preset by the user for each preceding scenario segment and each subsequent scenario segment is obtained. The weighting criteria can be, for example, factors that push a character's personality toward good or bad: the subsequent scenario segments embody the choices made for a specific object in the film, and a kinder choice can be given a high weight, so that after the kinder outcome is chosen many times the total weight is high, and a high total corresponds to a hidden subsequent scenario segment showing how the object's good character develops. The same can of course be done for a protagonist's career choices, and so on.
In step S163, the selection results, sent by the client, of the user's choices among the preceding and subsequent scenario segments of the interactive video are obtained, and the corresponding total weight value is calculated. Specifically, the user's choice at each subsequent scenario segment is collected from the client, and the total weight is recomputed after every choice.
In step S164, whether the total weight value exceeds the preset weight threshold is judged; if so, the hidden subsequent scenario segment is inserted at the corresponding position on the interactive video's timeline. When a choice pushes the total weight past the threshold, the platform further checks whether the moment of that choice lies before the hidden segment's time node: if it does, the hidden branch plot is played normally; if it lies after, the hidden branch is no longer played, avoiding plot dislocation and incoherence.
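The weight check of steps S162-S164 can be summarized as follows; the weights, threshold, and time nodes are illustrative, and the function name is hypothetical:

```python
def should_insert_hidden(choice_weights, threshold, t_choice, t_hidden):
    """Insert the hidden subsequent scenario segment only if the total weight of
    all choices made so far exceeds the threshold AND the crossing choice falls
    before the hidden segment's time node (otherwise the branch is dropped so
    the plot stays coherent)."""
    return sum(choice_weights) > threshold and t_choice <= t_hidden

# e.g. should_insert_hidden([2, 3, 4], threshold=8, t_choice=41.0, t_hidden=55.0) -> True
```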
In addition, a fixed-plot video that requires no further choices can be regenerated from the selection results for all subsequent scenario segments sent by the clients.
Further, in one embodiment, as shown in FIG. 6, steps S171-S173 are also included.
In step S171, the number of times each preceding scenario segment and each subsequent scenario segment of the interactive video is selected by multiple clients is obtained.
The parallel-timeline subsequent scenario segments are associated with preset intervals of selection counts. When many people watch the interactive video simultaneously, the platform collects the selections made by the many clients on a given segment's interactive floating layer, i.e. counts how many clients selected it, determines which interval that count falls into, and displays the corresponding subsequent scenario segment. For example, the interactive video may contain a courtroom scene in which each client represents a jury member; every client votes via the floating layer on the judge's gavel (the specific object), and different voting results lead to different subsequent scenario segments. This provides multi-person interaction across many clients. In other embodiments, the playback module can also tally the clients' choices among several subsequent scenario segments and automatically play, on all clients, the segment selected by the most clients.
In step S172, for each choice point of the interactive video, the preceding scenario segment and the subsequent scenario segment selected the most times are extracted. The most-selected segment is singled out at every choice; together, the selected segments exactly cover the timeline of the whole interactive video.
In step S173, the most-selected preceding scenario segments are combined with the most-selected subsequent scenario segments to generate a fixed-plot video. This video reflects the choices of the most clients, indicating that each subsequent scenario segment it contains is the one most widely approved and accepted. A filmmaker can thus shoot alternative subsequent scenario segments for the key plot points of a screenplay that admit markedly different developments, leave the rest of the story unchanged, distribute the branches through the production platform to many clients for simultaneous playback, and thereby generate the most popular fixed-plot video, which can then be shown to the public (e.g. in cinemas) for the best production outcome. The production platform provided by the invention thus serves interactive video production, playback, and video regeneration alike, improving the expressive quality of video.
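A minimal sketch of this tally: for each fork on the timeline, keep the segment chosen by the most clients. It assumes every fork received at least one vote, and all names are hypothetical.

```python
from collections import Counter

def fixed_plot(selections_per_fork):
    """For each fork on the interactive video's timeline, keep the segment id
    selected by the most clients; the result is the fixed-plot sequence."""
    return [Counter(choices).most_common(1)[0][0] for choices in selections_per_fork]

# e.g. fixed_plot([["a1", "a1", "a2"], ["b2", "b1", "b2"]]) -> ["a1", "b2"]
```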
In summary, the invention recognizes a specific object in a video clip, extracts the frame element segments containing it, adds interactive elements to the object within those segments, recombines them into new frame element segments, and substitutes these for the originals, thereby adding dynamic interactive elements to a specific object in a video clip, enriching the interaction means of interactive video, and improving the practicality of interactive video production.
Another aspect of the present invention provides an apparatus 100 for producing an interactive video; its operating principle and concrete steps match the method above and are therefore not repeated.
As shown in FIG. 7, the apparatus 100 for producing an interactive video comprises:
a receiving module 10 for receiving a plurality of pre-recorded video clips sent by a user together with feature information of a specific object;
a recognition module 20 for recognizing the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears;
a separation module 30 for separating those frame elements from the video clips to obtain a plurality of frame element segments containing the specific object;
an interaction module 40 for adding an interactive element to the specific object in the frame element segments to obtain new frame element segments carrying the interactive element;
and a generation module 50 for inserting each new frame element segment at the timeline position of the corresponding original frame element segment to obtain an interactive video with interactive elements.
Further, the receiving module 10 comprises:
a first receiving unit (not shown) for receiving a plurality of video clips sent by the user, the clips comprising a plurality of preceding scenario segments and, for each preceding scenario segment, a plurality of parallel-timeline subsequent scenario segments connected to it;
and a second receiving unit (not shown) for receiving the feature information of the specific object sent by the user, the feature information comprising view information and/or local feature information of each part of the specific object.
Further, in one embodiment, as shown in FIG. 8, the apparatus 100 for producing an interactive video further comprises:
a first acquisition module 61 for acquiring a pre-recorded hidden subsequent scenario segment sent by the user;
a second acquisition module 62 for acquiring the weight value preset by the user for each preceding scenario segment and each subsequent scenario segment;
a third acquisition module 63 for acquiring the selection results, sent by a client, of the user's choices among the preceding and subsequent scenario segments of the interactive video, and calculating the corresponding total weight value;
and a judgment module 64 for judging whether the total weight value exceeds a preset weight threshold and, if so, inserting the hidden subsequent scenario segment at the corresponding position on the interactive video's timeline.
Further, in one embodiment, as shown in FIG. 9, the apparatus 100 for producing an interactive video further comprises:
a fourth acquisition module 71 for acquiring, from multiple clients, the number of times each preceding scenario segment and each subsequent scenario segment of the interactive video is selected;
an extraction module 72 for extracting, for each choice point in the interactive video, the preceding scenario segment and the subsequent scenario segment selected the most times;
and a combination module 73 for combining those most-selected preceding and subsequent scenario segments to generate a fixed-plot video.
Further, in one embodiment, as shown in FIG. 10, the interaction module 40 comprises:
an instruction receiving unit 411 for receiving an instruction to add an interactive element to a specified frame element segment;
a first interaction unit 412 for extracting each frame of the specified frame element segment and adding an interactive floating layer at the position of the specific object;
an interaction embedding unit 413 for establishing an object connection between the interactive floating layer and the interactive element, or embedding the interactive element, so that when the floating layer is selected on the display interface, the display switches to the interactive element;
and a first generation unit 414 for combining the frames to which the interactive floating layer has been added to generate the corresponding new frame element segment.
Further, in one embodiment, as shown in FIG. 11, the interaction module 40 comprises:
a segment receiving unit 421 for receiving several groups of state frame element segments sent by the user, each group comprising a first state frame element segment and a second state frame element segment showing different plot developments of the specific object in the same event;
a second interaction unit 422 for extracting each frame of the first state frame element segment and adding an interactive floating layer at the position of the specific object;
an interaction generation unit 423 for establishing an object connection between the interactive floating layer and the second state frame element segment, so that when the floating layer is selected on the display interface, playback switches to the second state frame element segment;
and a second generation unit 424 for combining the frames of the first state frame element segment to which the floating layer has been added with the second state frame element segment to generate the corresponding new frame element segment.
Further, in one embodiment, as shown in FIG. 12, the interaction module 40 comprises:
a sound-and-picture receiving unit 431 for receiving a replacement graphic and/or a replacement sound source sent by the user;
a sound-and-picture extraction unit 432 for extracting each frame and/or the audio track information of the specified frame element segment;
a sound-and-picture replacement unit 433 for replacing or overlaying the specific object in each frame with the replacement graphic, and/or replacing the specific object's sound source in the audio track information with the replacement sound source;
and a third generation unit 434 for recombining the frames replaced or overlaid with the replacement graphic and/or the audio track information with the replaced sound source to generate the corresponding new frame element segment.
It will be apparent to those skilled in the art that the above division of functional units and modules is given only for convenience and brevity of description; in practice the functions may be distributed across different units and modules as needed, i.e. the internal structure of the apparatus may be divided differently to perform all or part of the functions described. The functional units and modules of the embodiments may be integrated in one processing unit, exist physically separately, or be combined two or more to a unit, and an integrated unit may be implemented in hardware or as a software functional unit. The specific names of the units and modules serve only to distinguish them from one another and do not limit the scope of protection of this application. For the working processes of the units and modules of the system, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units: they may be located in one place or distributed across multiple network units. Some or all of the units may be selected as needed to achieve the purpose of the embodiment's solution.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention also provides a player, comprising a playing module (not shown in the figures); the playing module is configured to play an interactive video produced by the interactive video production method of any one of the above embodiments.
In another aspect, the present invention provides a terminal that includes a memory, a processor, and an interactive video production program stored in the memory and executable on the processor; when executed by the processor, the program implements the interactive video production method of any one of the above embodiments.
The present invention further provides a computer-readable storage medium storing an interactive video production program; when executed by a processor, the program implements the steps of the interactive video production method of any one of the above embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The invention is not limited to what is described in the specification and embodiments; additional advantages and modifications will readily occur to those skilled in the art. The invention is therefore not restricted to the specific details, representative apparatus, and illustrative examples shown and described herein, provided such departures remain within the spirit and scope of the general concept as defined by the appended claims and their equivalents.

Claims (13)

1. A method for producing an interactive video, comprising the steps of:
receiving a plurality of pre-recorded video clips sent by a user and feature information of a specific object; wherein the step of receiving the plurality of pre-recorded video clips and the feature information of the specific object sent by the user comprises: receiving a plurality of video clips sent by the user, the plurality of video clips comprising a plurality of preceding plot segments and, connected to each preceding plot segment, a plurality of subsequent plot segments on parallel timelines; and receiving feature information of the specific object sent by the user, the feature information comprising view information of each view of the specific object and/or local feature information of each part of the specific object;
identifying the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears in the video clips;
separating all the frame elements from the video clips to obtain a plurality of frame element segments containing the specific object;
adding an interactive element to the specific object in the frame element segments to obtain a new frame element segment with the interactive element;
inserting the new frame element segment at the timeline position of the video clip corresponding to the frame element segment to obtain an interactive video with interactive elements;
wherein the interactive video production method further comprises the steps of:
acquiring a pre-recorded hidden subsequent plot segment sent by the user;
acquiring a weight value preset by the user for each preceding plot segment and each subsequent plot segment, acquiring from the client the selection results made by the user at the client for the preceding and subsequent plot segments in the interactive video, and calculating a corresponding total weight value;
and determining whether the total weight value exceeds a preset weight threshold, and if so, inserting the hidden subsequent plot segment on the timeline of the interactive video.
2. The interactive video production method according to claim 1, wherein the interactive video production method further comprises the steps of:
acquiring the number of times each preceding plot segment and each subsequent plot segment in the interactive video has been selected by a plurality of clients;
extracting the preceding plot segment and the subsequent plot segment selected most often at each selection point of the interactive video;
and combining the most-selected preceding plot segments with the most-selected subsequent plot segments to generate a fixed-plot video.
3. The interactive video production method according to claim 1, wherein the step of adding an interactive element to the specific object in the frame element segments to obtain a new frame element segment with the interactive element comprises:
receiving an instruction to add an interactive element to a specified frame element segment;
extracting each frame picture of the specified frame element segment, and adding an interactive floating layer at the position of the specific object;
establishing an object connection between the interactive floating layer and the interactive element, or embedding the interactive element; when the interactive floating layer is selected on the display interface, switching the display to the interactive element;
and combining each frame picture to which the interactive floating layer has been added to generate the corresponding new frame element segment.
4. The interactive video production method according to claim 1, wherein the step of adding an interactive element to the specific object in the frame element segments to obtain a new frame element segment with the interactive element comprises:
receiving multiple groups of state frame element segments sent by the user, each group comprising a first state frame element segment and a second state frame element segment that depict different plot directions of the specific object in the same event;
extracting each frame picture of the first state frame element segment, and adding an interactive floating layer at the position of the specific object;
establishing an object connection between the interactive floating layer and the second state frame element segment; when the interactive floating layer is selected on the display interface, switching to the second state frame element segment;
and combining each frame picture of the first state frame element segment, to which the interactive floating layer has been added, with the second state frame element segment to generate the corresponding new frame element segment.
5. The interactive video production method according to claim 1, wherein the step of adding an interactive element to the specific object in the frame element segments to obtain a new frame element segment with the interactive element comprises:
receiving a replacement graphic and/or a replacement sound source sent by the user;
extracting each frame picture and/or the audio track information of a specified frame element segment;
replacing or overlaying the specific object in each frame picture with the replacement graphic, and/or replacing the sound source of the specific object in the audio track information with the replacement sound source;
and recombining the frame pictures replaced or overlaid with the replacement graphic and/or the audio track information replaced with the replacement sound source to generate the corresponding new frame element segment.
6. An interactive video production apparatus, comprising:
a receiving module, configured to receive a plurality of pre-recorded video clips sent by a user and feature information of a specific object; the receiving module comprises a first receiving unit and a second receiving unit; the first receiving unit is configured to receive a plurality of video clips sent by the user, the plurality of video clips comprising a plurality of preceding plot segments and, connected to each preceding plot segment, a plurality of subsequent plot segments on parallel timelines; the second receiving unit is configured to receive the feature information of the specific object sent by the user, wherein the feature information comprises view information of each view of the specific object and/or local feature information of each part of the specific object;
an identification module, configured to identify the specific object in the video clips according to the feature information to obtain all frame elements in which the specific object appears in the video clips;
a separating module, configured to separate all the frame elements from the video clips to obtain a plurality of frame element segments containing the specific object;
an interaction module, configured to add interactive elements to the specific object in the frame element segments to obtain new frame element segments with the interactive elements;
a generating module, configured to insert the new frame element segment at the timeline position of the video clip corresponding to the frame element segment to obtain an interactive video with interactive elements;
wherein the interactive video production apparatus further comprises:
a first acquisition module, configured to acquire a pre-recorded hidden subsequent plot segment sent by the user;
a second acquisition module, configured to acquire a weight value preset by the user for each preceding plot segment and each subsequent plot segment;
a third acquisition module, configured to acquire from the client the selection results made by the user at the client for the preceding and subsequent plot segments in the interactive video, and to calculate a corresponding total weight value;
and a judging module, configured to determine whether the total weight value exceeds a preset weight threshold, and if so, to insert the hidden subsequent plot segment on the timeline of the interactive video.
7. The interactive video production apparatus according to claim 6, wherein the interactive video production apparatus further comprises:
a fourth acquisition module, configured to acquire the number of times each preceding plot segment and each subsequent plot segment in the interactive video has been selected by a plurality of clients;
an extraction module, configured to extract the preceding plot segment and the subsequent plot segment selected most often at each selection point of the interactive video;
and a combination module, configured to combine the most-selected preceding plot segments with the most-selected subsequent plot segments to generate a fixed-plot video.
8. The interactive video production apparatus according to claim 6, wherein the interaction module comprises:
an instruction receiving unit, configured to receive an instruction to add an interactive element to a specified frame element segment;
a first interaction unit, configured to extract each frame picture of the specified frame element segment and add an interactive floating layer at the position of the specific object;
an interactive implantation unit, configured to establish an object connection between the interactive floating layer and the interactive element, or to embed the interactive element; when the interactive floating layer is selected on the display interface, the display switches to the interactive element;
and a first generating unit, configured to combine each frame picture to which the interactive floating layer has been added to generate the corresponding new frame element segment.
9. The interactive video production apparatus according to claim 6, wherein the interaction module comprises:
a segment receiving unit, configured to receive multiple groups of state frame element segments sent by the user, each group comprising a first state frame element segment and a second state frame element segment that depict different plot directions of the specific object in the same event;
a second interaction unit, configured to extract each frame picture of the first state frame element segment and add an interactive floating layer at the position of the specific object;
an interaction generating unit, configured to establish an object connection between the interactive floating layer and the second state frame element segment; when the interactive floating layer is selected on the display interface, playback switches to the second state frame element segment;
and a second generating unit, configured to combine each frame picture of the first state frame element segment, to which the interactive floating layer has been added, with the second state frame element segment to generate the corresponding new frame element segment.
10. The interactive video production apparatus according to claim 6, wherein the interaction module comprises:
a sound and picture receiving unit, configured to receive the replacement graphic and/or the replacement sound source sent by the user;
a sound and picture extraction unit, configured to extract each frame picture and/or the audio track information of a specified frame element segment;
a sound and picture replacing unit, configured to replace or overlay the specific object in each frame picture with the replacement graphic, and/or to replace the sound source of the specific object in the audio track information with the replacement sound source;
and a third generating unit, configured to recombine the frame pictures replaced or overlaid with the replacement graphic and/or the audio track information replaced with the replacement sound source to generate the corresponding new frame element segment.
11. A terminal, characterized in that the terminal comprises a memory, a processor, and an interactive video production program stored in the memory and executable on the processor; the interactive video production program, when executed by the processor, implements the steps of the interactive video production method according to any one of claims 1 to 5.
12. A computer-readable storage medium storing an interactive video production program, wherein the interactive video production program, when executed by a processor, implements the steps of the interactive video production method according to any one of claims 1 to 5.
13. A player, characterized by comprising a playing module; the playing module is configured to play an interactive video produced by the interactive video production method according to any one of claims 1 to 5.
CN202010837707.3A 2020-08-19 2020-08-19 Interactive video production method, device, terminal, storage medium and player Expired - Fee Related CN111711856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010837707.3A 2020-08-19 2020-08-19 Interactive video production method, device, terminal, storage medium and player


Publications (2)

Publication Number Publication Date
CN111711856A CN111711856A (en) 2020-09-25
CN111711856B (en) 2020-12-01

Family

ID=72547270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010837707.3A Expired - Fee Related CN111711856B (en) 2020-08-19 2020-08-19 Interactive video production method, device, terminal, storage medium and player

Country Status (1)

Country Link
CN (1) CN111711856B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261481B (en) * 2020-10-16 2022-03-08 腾讯科技(深圳)有限公司 Interactive video creating method, device and equipment and readable storage medium
CN112261482B (en) * 2020-10-16 2022-07-22 腾讯科技(深圳)有限公司 Interactive video playing method, device and equipment and readable storage medium
CN113938712B (en) * 2021-10-13 2023-10-10 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127867A (en) * 2006-08-16 2008-02-20 任峰 Video playing method for selectable scenario
CN102129346A (en) * 2011-03-03 2011-07-20 亿度慧达教育科技(北京)有限公司 Video interaction method and device
CN102833490A (en) * 2011-06-15 2012-12-19 新诺亚舟科技(深圳)有限公司 Method and system for editing and playing interactive video, and electronic learning device
CN103929669A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Interactive video generator, player, generating method and playing method
CN106096062A (en) * 2016-07-15 2016-11-09 乐视控股(北京)有限公司 video interactive method and device
CN106662920A (en) * 2014-10-22 2017-05-10 华为技术有限公司 Interactive video generation
CN108124187A (en) * 2017-11-24 2018-06-05 互影科技(北京)有限公司 The generation method and device of interactive video
CN109889921A (en) * 2019-04-02 2019-06-14 北京蓦然认知科技有限公司 A kind of audio-video creation, playback method and device having interactive function
US10460766B1 (en) * 2018-10-10 2019-10-29 Bank Of America Corporation Interactive video progress bar using a markup language
CN110677707A (en) * 2019-09-26 2020-01-10 林云帆 Interactive video generation method, generation device, equipment and readable medium


Also Published As

Publication number Publication date
CN111711856A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
CN111711856B (en) Interactive video production method, device, terminal, storage medium and player
CN112348969B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN106789991B (en) Multi-person interactive network live broadcast method and system based on virtual scene
CN111372661B (en) Enhancing virtual reality video games with friendly avatar
CN107852476B (en) Moving picture playback device, moving picture playback method, moving picture playback system, and moving picture transmission device
CN106200918B (en) A kind of information display method based on AR, device and mobile terminal
CN111953910B (en) Video processing method and device based on artificial intelligence and electronic equipment
JP7303754B2 (en) Method and system for integrating user-specific content into video production
CN104469179A (en) Method for combining dynamic pictures into mobile phone video
US10617945B1 (en) Game video analysis and information system
CA2915582A1 (en) Image processing apparatus, image processing system, image processing method and storage medium
CN106730815A (en) The body-sensing interactive approach and system of a kind of easy realization
CN112426712A (en) Method and device for generating live broadcast picture of game event
CN110162667A (en) Video generation method, device and storage medium
JP2007300562A (en) Image processing apparatus and image processing method
CN109104619B (en) Image processing method and device for live broadcast
CN101346162A (en) Game machine, game machine control method, and information storage medium
CN113490004A (en) Live broadcast interaction method and related device
CN110062163B (en) Multimedia data processing method and device
CN106101808A (en) A kind of net cast method and device
CN108459808A (en) With the display relevant augmented reality interaction systems of image theme and its method for running
KR102177854B1 (en) System and method for generating of personalized highlights video
CN109005441B (en) Virtual competition completion scene playing method and device, terminal and server
CN107133561A (en) Event-handling method and device
CN115035220A (en) 3D virtual digital person social contact method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20220331
Address after: 510220 room 1703, No. 100, Jiangnan Avenue middle, Haizhu District, Guangzhou City, Guangdong Province
Patentee after: Guangzhou aggregation big data Development Co.,Ltd.
Address before: 518000 Room C, 17th floor, west block, Changxing Times Square, Taoyuan Road, Nantou street, Nanshan District, Shenzhen, Guangdong
Patentee before: SHENZHEN DIANTONG INFORMATION TECHNOLOGY Co.,Ltd.
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20201201