CN112218102B - Video content package making method, client and system - Google Patents

Video content package making method, client and system

Info

Publication number
CN112218102B
CN112218102B (application CN202010890993.XA)
Authority
CN
China
Prior art keywords: video, information, text, splitting, scenario
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010890993.XA
Other languages
Chinese (zh)
Other versions
CN112218102A (en)
Inventor
马宇尘 (Ma Yuchen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Liangming Technology Development Co Ltd
Original Assignee
Shanghai Liangming Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Liangming Technology Development Co Ltd filed Critical Shanghai Liangming Technology Development Co Ltd
Priority to CN202010890993.XA
Publication of CN112218102A
Application granted
Publication of CN112218102B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention provides a video content package making method, a client, and a system, relating to the field of Internet technology. The video content package making method comprises the following steps: collecting the set video scenario information; splitting the video scenario to obtain a plurality of scenario segment information items; and, for each scenario segment, issuing video segment production invitation information on the network platform. After splitting the video scenario, the invention issues video segment production invitation information on the network platform, thereby realizing package-based production of video content and meeting users' needs to produce rich video content.

Description

Video content package making method, client and system
Technical Field
The invention relates to the technical field of Internet.
Background
In the software market, one class of systems has long attracted attention because people have a particular interest in and demand for it: social systems. According to incomplete statistics, thousands of SNS (Social Networking Service) related products currently exist in China, and the main SNS systems fall into the following broad types: campus type, whose users are mainly students; professional or business type, whose users are mainly white-collar workers; friend-making type, whose users tend to be young men and women of suitable age; and open type, which has a low entrance threshold and convenient communication. At present, online video social systems are pursued by people of all ages, and video social platforms such as Douyin (TikTok), Xigua Video, and Huoshan Video have become common social tools in daily life. Through such platforms, users can watch all kinds of objects and events recorded through the lens by people in different places.
On the other hand, the micro-movies currently available on the network have greatly enriched people's entertainment, but because micro-movie production is highly professional and shooting consumes considerable manpower and material resources, it remains difficult for an individual to independently produce a film with a rich plot. For example, when a user wants to record a European tour, he may need to travel with shooting equipment to outdoor scenes in multiple European countries, and some shots that last only tens of seconds can consume large amounts of manpower, material, and financial resources.
How to fully utilize the lens data of the mass users widely distributed across current network platforms, so as to meet different users' needs to produce videos with rich content, is a problem that remains to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a video content package making method, client, and system. After splitting the video scenario, the invention issues video segment production invitation information on a network platform, thereby realizing package-based production of video content, meeting users' needs to produce rich film and television content, and improving user experience.
In order to achieve the above object, the present invention provides the following technical solutions.
A video content package making method comprises the following steps: collecting the set video scenario information; splitting the video scenario to obtain a plurality of scenario segment information items; and, for each scenario segment, issuing video segment production invitation information on the network platform.
Further, the method also comprises the steps of,
collecting the contractor user information for receiving the video segment production invitation information;
acquiring video segment information produced by a contractor,
and combining the video segments according to the corresponding scenario segments to form a composite video.
Further, the method also comprises the steps of,
analyzing the episode information to determine whether the episode contains content corresponding to a future time;
in the event that a determination is made to contain content corresponding to a future time, future video segment production invitation information is triggered.
Preferably, when the content corresponding to the future time contains geographic position information, searching is performed based on the geographic position information, and future video segment production invitation information is sent to a target object meeting the geographic position information condition.
Further, the step of splitting the video scenario is that,
analyzing and learning the existing video in the network platform or the related network platform by using a machine learning model;
acquiring a plot splitting rule through analysis and learning;
splitting the video scenario according to the scenario splitting rule.
Further, the steps of splitting the video scenario are as follows,
acquiring the text content of the video scenario information and counting its number of words;
dividing the text into N segments by word count, wherein N is an integer greater than or equal to 2, and each segment corresponds to one scenario segment;
and recording the segment number corresponding to each scenario segment.
Further, the steps of splitting the video scenario are as follows,
acquiring text information of video scenario information;
carrying out semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to a plot, and the text block corresponding to each plot and the position in the text are recorded.
Preferably, the plurality of text blocks are divided based on a splitting method, specifically comprising the steps of,
dividing a text into 2 text blocks according to semantic information, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector;
comparing the difference degree of the text feature vectors of the 2 text blocks;
if the difference degree reaches a threshold value, representing that the primary splitting is successful, performing secondary splitting on 2 text blocks, wherein each text block is split into 2 secondary text blocks;
obtaining keyword vocabulary and word frequency information of each secondary text block, constructing secondary text feature vectors, and comparing the difference degrees of the text feature vectors of the adjacent 2 secondary text blocks;
if the difference degree reaches a threshold value, the second-level splitting is successful, and the third-level splitting is carried out on the second-level text block; and the same is done until the difference degree of the corresponding N-level text blocks is smaller than the threshold value, the current-level splitting is canceled, and the splitting is terminated.
The invention also provides a video social client, which comprises the following structures:
the information acquisition module is used for acquiring the set video scenario information;
the information processing module splits the video episodes to obtain a plurality of episode clip information;
the video publishing module is used for sending out video segment production invitation information on the network platform according to the information of each episode.
The invention also provides a video social system, which comprises a user client side and a server side,
the user client comprises an information acquisition module which is used for acquiring set video scenario information;
the server side comprises the following structures:
the information processing module splits the video episodes to obtain a plurality of episode clip information;
the video publishing module is used for sending out video segment production invitation information on the network platform according to the information of each episode.
Compared with the prior art, the technical solution adopted by the invention has, by way of example and not limitation, the following advantages and positive effects: after the video scenario is split, video segment production invitation information is issued on the network platform, thereby realizing package-based production of video content, meeting users' needs to produce rich video content, and improving user experience.
Drawings
Fig. 1 is a flowchart of a method for producing a package of video content according to an embodiment of the present invention.
Fig. 2 to fig. 7 are diagrams of example operations of video package making according to an embodiment of the present invention.
Fig. 8 is a block diagram of a client according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a system according to an embodiment of the present invention.
The labels in the figures are as follows:
a user 100;
an intelligent terminal 200, a display structure 210, document editing windows 220, 230;
the system comprises a client 300, an information acquisition module 310, an information processing module 320 and a video publishing module 330;
system 400, user client 410, server 420.
Detailed Description
The method, the client and the system for producing the video content package provided by the invention are further described in detail below with reference to the accompanying drawings and the specific embodiments. It should be noted that the technical features or combinations of technical features described in the following embodiments should not be regarded as being isolated, and they may be combined with each other to achieve a better technical effect. In the drawings of the embodiments described below, like reference numerals appearing in the various drawings represent like features or components and are applicable to the various embodiments.
It should be noted that the structures, proportions, and sizes shown in the drawings accompanying this specification are intended only to aid understanding and reading of the disclosure, and are not intended to limit the applicable scope of the invention; any structural modification, change in proportion, or adjustment of size that does not affect the efficacy or purpose of the invention falls within the scope of the disclosure. The scope of the preferred embodiments also includes implementations in which functions are performed substantially simultaneously, or in an order opposite to that shown or discussed, depending on the functions involved, as would be understood by those skilled in the art.
Examples
Referring to fig. 1, a method for producing a video content package includes the following steps:
s100, collecting set video scenario information.
The package issuing party sets the video scenario information on the network platform, and that information is collected. The network platform may be any of various live broadcast platforms, short video platforms, or video playing platforms.
The package issuer is the party that sends video segment production invitation information to contractors; issuers include both natural persons and organizations. Specifically, for example, the issuer may be an individual registered on the network platform, or a registered organization, or the network platform itself. By way of example and not limitation, a live video platform may, based on its own platform, send audiences a scenario to be produced into a network movie; in that case the live video platform is the issuer.
S200, splitting the video episodes to obtain a plurality of episode clip information.
In this embodiment, the foregoing video scenario may be split based on a machine learning model (algorithm), which may specifically include the following steps:
analyzing and learning the existing video in the network platform or the related network platform by using a machine learning model;
acquiring a plot splitting rule through analysis and learning;
splitting the video scenario according to the scenario splitting rule.
Preferably, the machine learning model is a deep learning model. The deep learning is a method for carrying out characterization learning on data in machine learning, can simulate the mechanism of human brain to interpret the data, carries out learning and analysis on images, sounds and texts, and can obtain the rules or rules of the data based on the learning. The deep learning model may generally include an image acquisition module, a sound acquisition module, an image recognition module, a voice recognition module, a machine translation module, and the like.
The deep learning model is used for learning a large number of existing videos, so that plot setting rules of the videos can be obtained, and plot splitting rules are set according to the plot setting rules.
For example, learning may reveal that most plot transitions in videos are associated with scenes (say, 80% of plot transitions are accompanied by a scene change), in which case the scenario splitting rule may be set as: segment according to the scene information in the video. By way of example and not limitation, if the video contains 4 scenes, in order along the time axis an indoor scene one, a seaside scene, a city night scene, and an indoor scene two, the video may be split into 4 scenario segments: an indoor scene one segment, a seaside scene segment, a city night scene segment, and an indoor scene two segment.
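The scene-based splitting rule described above can be sketched as a small grouping routine. This is an illustrative reconstruction, not part of the patent: the function name and the (timestamp, label) data layout are assumptions, and the scene labels would in practice come from an upstream scene-recognition model.

```python
# Hedged sketch: group a video's timeline into scenario segments wherever
# the recognized scene label changes. The (timestamp_seconds, label) pairs
# are hard-coded here for illustration.

def split_by_scene(scene_timeline):
    """Merge consecutive entries with the same scene label into one segment."""
    segments = []
    for ts, label in scene_timeline:
        if segments and segments[-1]["scene"] == label:
            segments[-1]["end"] = ts  # extend the current segment
        else:
            segments.append({"scene": label, "start": ts, "end": ts})
    return segments

timeline = [
    (0, "indoor_1"), (12, "indoor_1"),   # indoor scene one
    (30, "seaside"), (55, "seaside"),    # seaside scene
    (80, "city_night"),                  # city night scene
    (95, "indoor_2"),                    # indoor scene two
]
segments = split_by_scene(timeline)
print([s["scene"] for s in segments])
```

With the example timeline this yields the four scenario segments named in the text, in time-axis order.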
In another embodiment of the present embodiment, the video scenario may also be split based on the number of words of the text content of the scenario information set. Specifically, the method comprises the following steps:
acquiring the text content of the video scenario information and counting its number of words;
dividing the text into N segments by word count, wherein N is an integer greater than or equal to 2, and each segment corresponds to one scenario segment;
and recording the segment number corresponding to each scenario segment.
This method is particularly suitable when the plot is simple and the corresponding text is evenly structured. By way of example and not limitation, when a user sets the video scenario information in the form of a poem (or directly uses an existing classical poem as the video scenario), this splitting method applies.
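Since the word-count method suits evenly structured text such as poems, it can be sketched as follows. This is a minimal illustration under stated assumptions: the function name and the even-split policy are mine, tokens are whitespace-separated words (for Chinese text one would count characters instead), and the sample text is an English rendering of a classical poem.

```python
def split_by_word_count(text, n):
    """Split scenario text into n roughly equal segments by word count;
    each segment corresponds to one scenario piece, numbered in order."""
    words = text.split()
    size = -(-len(words) // n)  # ceiling division so every word is used
    return [
        (i + 1, " ".join(words[i * size:(i + 1) * size]))
        for i in range(n)
        if words[i * size:(i + 1) * size]  # skip empty tail segments
    ]

scenario = ("Moonlight before my bed , perhaps frost on the ground . "
            "Lift my head and watch the moon , lower it and think of home .")
for number, segment in split_by_word_count(scenario, 2):
    print(number, segment)
```

Recording the segment number alongside each segment, as the method requires, is what the `(number, segment)` pairs represent.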
In another embodiment of the present embodiment, the foregoing video scenario may also be split based on semantic analysis. The method specifically comprises the following steps:
acquiring text information of video scenario information;
carrying out semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to a plot, and the text block corresponding to each plot and the position in the text are recorded.
Preferably, dividing the plurality of text blocks based on the splitting method includes the steps of:
Divide the text into 2 text blocks according to semantic information, obtain the keyword vocabulary and word-frequency information of each text block, and construct a text feature vector. By way of example, the 2 text blocks are numbered text block A and text block B. When dividing the text, it may be divided at a period or paragraph symbol, or based on the sentence structure of the text; for example, the first 100 sentences form one text block and the remaining sentences form the other.
The degree of difference of the text feature vectors of the text block a and the text block B is compared.
If the difference degree reaches the threshold value, the primary splitting is successful, and the secondary splitting is carried out on 2 text blocks, wherein each text block is split into 2 secondary text blocks. By way of example and not limitation, if the threshold is set to 60%, and the difference between the text block a and the text block B is greater than 60%, it means that the first-level splitting is successful, and the second-level splitting is continued, where the text block a is split into the second-level text block A1 and the second-level text block A2, and where the text block B is split into the second-level text block B1 and the second-level text block B2.
And obtaining keyword vocabulary and word frequency information of each secondary text block, constructing secondary text feature vectors, and comparing the difference degrees of the text feature vectors of the adjacent 2 secondary text blocks. And respectively comparing text feature vectors of the secondary text block A1 and the secondary text block A2, and acquiring corresponding difference degrees by the text feature vectors of the secondary text block B1 and the secondary text block B2.
If the difference degree reaches a threshold value, representing that the secondary splitting is successful, and performing tertiary splitting on the secondary text blocks which are successfully split; and the same is done until the difference degree of the corresponding N-level text blocks is smaller than the threshold value, the current-level splitting is canceled, and the splitting is terminated. When the difference degree of the text feature vectors of the secondary text blocks A1 and A2 is larger than 60%, the splitting is successful, the secondary text blocks A1 and A2 are continuously split respectively, the secondary text block A1 is split into A11 and A12, the secondary text block A2 is split into A21 and A22, and then the text feature vectors are respectively constructed and compared.
When the difference degree of the text feature vectors of the secondary text blocks B1 and B2 is smaller than 60%, the secondary splitting is deemed to have failed and is cancelled; that is, the restored text block B (called a stable text block) is not split further.
And so on, until no obtained text block can be split successfully, that is, every block is a stable text block, at which point splitting is complete. The number of stable text blocks obtained is the number of split text blocks. For example, if the A11/A12 split fails, A1 is called a stable text block; if the A21/A22 split fails, A2 is called a stable text block; and the video scenario split result is then: text block A1, text block A2, and text block B in sequence, each text block corresponding to one scenario segment.
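The recursive splitting procedure above can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions, not the patent's implementation: feature vectors are raw word-frequency counts, the difference degree is one minus cosine similarity, and blocks are divided at their sentence midpoint rather than by real semantic analysis.

```python
from collections import Counter
import math

def feature_vector(block):
    """Keyword vocabulary with word frequencies (crude whitespace tokens)."""
    return Counter(block.lower().split())

def difference(a, b):
    """Difference degree = 1 - cosine similarity of the frequency vectors."""
    va, vb = feature_vector(a), feature_vector(b)
    dot = sum(count * vb[word] for word, count in va.items())
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    if norm_a == 0 or norm_b == 0:
        return 1.0
    return 1.0 - dot / (norm_a * norm_b)

def split_block(block):
    """Divide a block into two halves at its sentence midpoint."""
    sentences = block.split(". ")
    mid = len(sentences) // 2
    if mid == 0:
        return None  # too short to divide further
    return ". ".join(sentences[:mid]), ". ".join(sentences[mid:])

def recursive_split(block, threshold=0.6):
    """Split recursively while the halves differ enough; otherwise cancel
    the split and keep the block whole as a 'stable text block'."""
    halves = split_block(block)
    if halves is None:
        return [block]
    first, second = halves
    if difference(first, second) < threshold:
        return [block]  # split failed: block is stable
    return recursive_split(first, threshold) + recursive_split(second, threshold)

text = ("apples oranges fruit. apples fruit tasty. "
        "rockets engines space. rockets space launch.")
print(recursive_split(text, threshold=0.6))
```

On the sample text the fruit half and the rocket half share no vocabulary, so the first-level split succeeds; within each half the sentences overlap heavily, so the second-level splits are cancelled and two stable text blocks remain.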
S300, for each episode information, video segment production invitation information is sent out on the network platform.
For the scenario segments obtained by splitting, invitation information for video segment production is issued on the network platform; that is, the package is issued.
After step S300, the method further includes the steps of:
collecting information on the contractor users who accept the video segment production invitation information; obtaining the video segment information produced by the contractors; and combining the video segments according to their corresponding scenario segments to form a composite video.
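The combining step can be sketched as ordering contractor submissions by scenario segment. The data layout below (each clip tagged with the index of its scenario segment) is an assumption for illustration; a real system would hand the ordered list to a video concatenation tool.

```python
def assemble_composite(submissions):
    """Order contractor-produced clips by their scenario-segment index,
    yielding a playlist suitable for concatenation into the composite video."""
    ordered = sorted(submissions, key=lambda clip: clip["segment_index"])
    return [clip["file"] for clip in ordered]

# Hypothetical contractor submissions, arriving in arbitrary order.
submissions = [
    {"segment_index": 3, "file": "indoor_two.mp4", "contractor": "user_c"},
    {"segment_index": 1, "file": "indoor_one.mp4", "contractor": "user_a"},
    {"segment_index": 2, "file": "seaside.mp4", "contractor": "user_b"},
]
print(assemble_composite(submissions))
```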
In the following, an individual user, Zhang San, is taken as an example to describe how he issues a video package on a video social platform (network video/live broadcast) so as to have a video produced.
A client of the live broadcast platform is installed on Zhang San's intelligent terminal. The intelligent terminal may be a mobile phone, a tablet computer, a notebook computer, or a wearable smart device.
The client may include a user management module that may be used for management of user identity information such as user registration, login, and information maintenance. By way of example and not limitation, such as when a user registers, identity information, such as facial image data, may be uploaded by the user management module as standard identity information, and upon subsequent login, the user may log into the client through a facial recognition function.
After Zhang San enters the client, referring to fig. 2, he can browse, watch, comment on, and like the short videos published on the video social platform, and can also upload and produce videos.
With continued reference to fig. 2, the client of the video social platform further provides a video package making trigger option "package making", and after triggering the option, the client enters a video package making interface, as shown in fig. 3.
In the video package making interface, Zhang San is prompted to set the video scenario.
By way of example and not limitation, the setting of the video episodes may be based on a default template, or the user's own authoring concept, or local settings based on the user's current geographic location.
Taking a default template as an example, after the user triggers the "available templates" option, a default video scenario interface is entered. In this interface, the user can select various settings of the script, such as theme, role, scene, style, tone, prop, special effect, and dubbing. By way of example and not limitation, the theme may include martial arts, science fiction, metropolitan, youth, two-dimensional (anime), etc.; the role may include superman, alien, mutant, monster, etc.; the scene may include kitchen, courtyard, field, urban night scene, etc.; the style may include fresh, dynamic, vital, science fiction, etc.; the tone may include vivid, dusk, deep, etc.; the prop may include virtual pets, virtual equipment, etc.; the special effect may include photoelectric, cosmic, and cloud effects, etc.; and the dubbing may include girl, boy, young woman, young man, etc.
Or the user triggers "i am about to compose" to compose a video episode, specifically, for example, edit the text content of the video episode on the web platform through the online document editing window 220, see fig. 4. After editing is completed, the next step is triggered to enter the package sending stage.
In the package issuing stage, the video scenario information set by the user is first collected, and the video scenario is then split to obtain a plurality of scenario segment information items. Referring to fig. 5, by way of example and not limitation, the video scenario set by Zhang San is split into 3 scenario segments; the 3 segments are output as online documents, and a separate document editing window 230 is provided for each segment so that the user can edit the document, for example to view, modify, copy, and paste. Of course, the user can also add or remove scenario segments as needed, which can be implemented by outputting trigger buttons in the window for adding and removing segments.
After editing, the user can click the confirm-issue button to complete package creation. Preferably, before the package information is sent, the user may also be prompted to set permissions on the package so as to select which objects may accept it; as shown in fig. 6, for example, the user may choose to open contracting permission to any user (all users) of the network platform.
After the user finishes selecting, package issuing is complete, as shown in fig. 7. To facilitate withdrawal or tracking of the contracting information, withdrawal and tracking operation buttons may also be provided so that the user can choose as needed.
In another mode of this embodiment, the method further includes the step of,
analyzing the scenario information to determine whether the scenario contains content corresponding to a future time; content corresponding to a future time refers to content that has not yet occurred. By way of example and not limitation, when a user sets the video scenario information, the scenario may refer to an upcoming Olympic Games to be held in a certain city.
In the event that a determination is made to contain content corresponding to a future time, future video segment production invitation information is triggered.
The future video segment production invitation information is invitation information for video segments to be produced at a future time.
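A naive sketch of the future-time check follows. It is an assumption-laden illustration: real scenario analysis would require natural-language understanding of free-form (likely Chinese) text, whereas this toy function only recognizes explicit ISO-style dates, and its name is mine.

```python
import re
from datetime import date

def contains_future_content(scenario_text, today):
    """Return True if the scenario mentions an explicit YYYY-MM-DD date
    later than `today`, i.e. content that has not yet occurred."""
    for y, m, d in re.findall(r"(\d{4})-(\d{2})-(\d{2})", scenario_text):
        if date(int(y), int(m), int(d)) > today:
            return True
    return False

scenario = "The hero watches the Games opening ceremony on 2032-07-23."
print(contains_future_content(scenario, today=date(2020, 8, 31)))
```

When the check succeeds, the system would trigger the future video segment production invitation described above.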
In particular, when the content corresponding to the future time contains geographic position information, searching is performed based on the geographic position information, and future video segment production invitation information is sent to a target object meeting the geographic position information condition.
Taking the scenario of the Olympic Games about to be held in a certain city as an example, if the video needs material related to the Games, production invitation information can be sent to users located in that city, because such users are better positioned to capture Games-related video at the future time (on the day of the Games). Alternatively, a web search, for example of microblogs or friend circles, can identify users who may be located in that city on the opening day. By way of example and not limitation, if Li Lan posts on her microblog that she has booked tickets to the host city for the opening day, Li Lan can be taken as a target object.
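The geographic targeting just described can be sketched as a simple filter. The profile fields, the city-mention heuristic, and the sample data (including the host city) are all assumptions for illustration; a real platform would use its own location and social-post search facilities.

```python
def find_target_users(users, host_city):
    """Select invitation targets: users located in the host city, or whose
    recent posts mention it (like Li Lan's microblog in the example)."""
    targets = []
    for user in users:
        in_city = user.get("city") == host_city
        mentions_trip = host_city in user.get("recent_post", "")
        if in_city or mentions_trip:
            targets.append(user["name"])
    return targets

users = [
    {"name": "Zhang San", "city": "Shanghai", "recent_post": ""},
    {"name": "Li Lan", "city": "Beijing",
     "recent_post": "Booked my ticket to Brisbane for the opening day!"},
    {"name": "Wang Wu", "city": "Brisbane", "recent_post": ""},
]
print(find_target_users(users, "Brisbane"))
```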
Referring to fig. 8, in another embodiment of the present invention, a video social client is provided.
The client 300 includes:
an information acquisition module 310 for acquiring the set video scenario information;
the information processing module 320 splits the video scenario to obtain a plurality of scenario piece information;
the video publishing module 330 is configured to send out video segment creation invitation information on the network platform for each episode information.
The information collection module is configured to collect the video scenario information set in the client 300 by the package issuer. The client 300 may be any of various live broadcast clients, short video clients, or video playing clients.
The package issuer is the party that sends video segment production invitation information to contractors; issuers include both natural persons and organizations. Specifically, for example, the issuer may be an individual registered on the network platform, or a registered organization, or the network platform itself. By way of example and not limitation, a live video platform may, based on its own platform, send audiences a scenario to be produced into a network movie; in that case the live video platform is the issuer.
Preferably, the information processing module splits the video scenario based on semantic analysis. The method specifically comprises the following steps:
acquiring text information of video scenario information;
carrying out semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to a plot, and the text block corresponding to each plot and the position in the text are recorded.
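The record-keeping in the steps above (each text block plus its position in the text) might be sketched as follows. This is an assumption-laden illustration: the semantic segmenter itself is not specified in the disclosure, so a naive blank-line paragraph split stands in for it, and all names are hypothetical.

```python
def split_scenario_text(text: str) -> list[dict]:
    """Split scenario text into blocks and record each block's text
    and character position, one block per scenario clip."""
    blocks = []
    pos = 0
    for para in text.split("\n\n"):  # stand-in for semantic segmentation
        para = para.strip()
        if not para:
            continue
        start = text.index(para, pos)          # position in the full text
        blocks.append({"text": para,
                       "start": start,
                       "end": start + len(para)})
        pos = start + len(para)
    return blocks
```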
Preferably, dividing the text into the plurality of text blocks by the splitting method comprises the following steps:
dividing the text into 2 text blocks according to semantic information, acquiring the keyword vocabulary and word-frequency information of each text block, and constructing a text feature vector; then comparing the degree of difference between the text feature vectors of text block A and text block B.
If the degree of difference reaches the threshold, the primary split is deemed successful, and a secondary split is performed in which each of the 2 text blocks is split into 2 secondary text blocks. Keyword vocabulary and word-frequency information is then acquired for each secondary text block, secondary text feature vectors are constructed, and the degree of difference between the feature vectors of each pair of adjacent secondary text blocks is compared. If the degree of difference reaches the threshold, the secondary split is deemed successful, and a tertiary split is performed on the successfully split secondary text blocks; this proceeds in the same way until the degree of difference of the corresponding N-level text blocks is smaller than the threshold, at which point the current-level split is cancelled and the splitting terminates.
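A minimal sketch of this recursive threshold-based splitting follows. The disclosure does not fix the feature vector, difference measure, split point, or threshold, so this illustration assumes: a word-frequency `Counter` as the feature vector, one minus cosine similarity as the degree of difference, the word-count midpoint as a stand-in for the semantic split point, and a hypothetical threshold of 0.5.

```python
import math
from collections import Counter

def feature_vector(block: str) -> Counter:
    """Keyword/word-frequency feature vector (simple whitespace tokens)."""
    return Counter(block.lower().split())

def difference(a: Counter, b: Counter) -> float:
    """Degree of difference: 1 - cosine similarity.
    0.0 = identical vocabularies, 1.0 = disjoint vocabularies."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def recursive_split(text: str, threshold: float = 0.5) -> list[str]:
    """Split the block in two; keep splitting each half recursively
    while the two halves differ by at least the threshold, otherwise
    cancel the current-level split and keep the block whole."""
    words = text.split()
    if len(words) < 2:
        return [text]
    mid = len(words) // 2  # stand-in for a semantic split point
    left, right = " ".join(words[:mid]), " ".join(words[mid:])
    if difference(feature_vector(left), feature_vector(right)) < threshold:
        return [text]  # halves too similar: cancel this split, terminate
    return recursive_split(left, threshold) + recursive_split(right, threshold)
```

On a text whose two halves use disjoint vocabularies the first split succeeds, while further splits within each homogeneous half are cancelled, mirroring the termination condition described above.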
For the scenario segments obtained by the splitting, the video publishing module 330 sends out video segment production invitation information on the network platform, that is, issues the production package.
The client 300 may further include a video composition module, configured to obtain video segment information produced by the contractor, and combine the video segments according to their corresponding scenario segments to form a composite video.
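The combining step of the video composition module might be sketched as ordering the contributed segments by the index of their corresponding scenario clip; the actual media concatenation (e.g. handing the ordered file list to an editing tool) is omitted, and the field names are hypothetical.

```python
def compose_video(segments: list[dict]) -> list[str]:
    """Order contractor-produced segments by their scenario-clip index
    and return the playback/concatenation sequence of files."""
    ordered = sorted(segments, key=lambda seg: seg["scenario_index"])
    return [seg["file"] for seg in ordered]
```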
Other technical features are referred to the previous embodiments and will not be described here again.
Referring to fig. 9, in another embodiment of the present invention, a video social system 400 is provided, which includes a user client 410 and a server 420.
The user client 410 includes an information collection module for collecting the set video scenario information;
the server 420 includes the following structure:
the information processing module splits the video episodes to obtain a plurality of episode clip information;
the video publishing module is used for sending out video segment production invitation information on the network platform according to the information of each episode.
In this embodiment, the user client 410 is preferably a live client or a small video client.
The user client 410 and the server 420 are connected by a communication network, which is generally the Internet but may also be an intranet or a local area network.
The server 420 includes a hardware server, which may generally include the following structures: one or more processors for performing computational processing; storage, such as memory, external storage, and network storage, for storing the data required by the computation and the executable programs; and a network interface for connecting to the network. These hardware units are connected through a computer bus or signal lines.
Other technical features are referred to the previous embodiments and will not be described here again.
In the above description, although all components of aspects of the present disclosure may be interpreted as being assembled or operatively connected as one module, the present disclosure is not intended to limit itself to these aspects. Rather, the components may be selectively and operatively combined in any number within the scope of the present disclosure. Each of these components may also be implemented by itself as hardware, while the individual components may be partially or selectively combined together in general and implemented as a computer program having program modules for executing the functions of the hardware equivalents. The code or code segments to construct such a program may be readily derived by those skilled in the art. Such a computer program may be stored in a computer readable medium that can be run to implement aspects of the present disclosure. The computer readable medium may include magnetic recording media, optical recording media, and carrier wave media.
In addition, terms like "comprising," "including," and "having" should be construed by default as inclusive or open-ended, rather than exclusive or closed-ended, unless expressly defined to the contrary. All technical, scientific, or other terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms that are commonly found in dictionaries should not be interpreted in an excessively idealized or impractical manner in the context of the relevant technical document, unless the present disclosure expressly defines them as such.
Although the exemplary aspects of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that the foregoing description is merely illustrative of preferred embodiments of the invention and is not intended to limit the scope of the invention in any way, including additional implementations in which functions may be performed other than in the order shown or discussed. Any alterations and modifications of the present invention, which are made by those of ordinary skill in the art based on the above disclosure, are intended to be within the scope of the appended claims.

Claims (8)

1. A video content package production method, characterized by comprising the following steps:
collecting video scenario information set in a network platform by a packet sender; the package sending party is a party sending out video segment making invitation information;
splitting the video episodes to obtain a plurality of episode clip information;
for each episode information, sending out video segment production invitation information on a network platform;
wherein the method further comprises the steps of: analyzing the scenario clip information to determine whether the scenario contains content corresponding to a future time, the content corresponding to the future time being content that has not yet occurred; and, in the case that content corresponding to the future time is judged to be contained, triggering future video segment production invitation information, the future video segment production invitation information being for video segments to be produced at the future time;
and when the content corresponding to the future time contains geographic position information, searching based on the geographic position information, and sending out future video segment production invitation information to a target object conforming to the geographic position information condition.
2. The method according to claim 1, characterized in that the method further comprises the steps of:
collecting the contractor user information for receiving the video segment production invitation information;
acquiring video segment information produced by a contractor,
and combining the video segments according to the corresponding scenario segments to form a composite video.
3. The method according to claim 1, characterized in that: the step of splitting the video scenario is,
analyzing and learning from existing videos on the network platform or related network platforms by using a machine learning model;
acquiring a plot splitting rule through analysis and learning;
splitting the video scenario according to the scenario splitting rule.
4. The method according to claim 1, characterized in that: the steps of splitting the aforementioned video scenario are as follows,
acquiring the text content of the video scenario information, and counting the number of words in the text content;
dividing the text into N segments by word count, wherein N is an integer greater than or equal to 2, and each segment corresponds to a scenario clip;
and recording the segment number corresponding to each scenario clip.
5. The method according to claim 1, characterized in that: the steps of splitting the aforementioned video scenario are as follows,
acquiring text information of video scenario information;
carrying out semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to a plot, and the text block corresponding to each plot and the position in the text are recorded.
6. The method according to claim 5, wherein dividing the text into the plurality of text blocks based on the splitting method specifically comprises the following steps,
dividing a text into 2 text blocks according to semantic information, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector;
comparing the difference degree of the text feature vectors of the 2 text blocks;
if the difference degree reaches a threshold value, representing that the primary splitting is successful, performing secondary splitting on 2 text blocks, wherein each text block is split into 2 secondary text blocks;
obtaining keyword vocabulary and word frequency information of each secondary text block, constructing secondary text feature vectors, and comparing the difference degrees of the text feature vectors of the adjacent 2 secondary text blocks;
if the difference degree reaches a threshold value, representing that the secondary splitting is successful, and performing tertiary splitting on the secondary text blocks which are successfully split; and the same is done until the difference degree of the corresponding N-level text blocks is smaller than the threshold value, the current-level splitting is canceled, and the splitting is terminated.
7. A video social client according to the method of claim 1, comprising:
the information acquisition module is used for acquiring the set video scenario information;
the information processing module splits the video episodes to obtain a plurality of episode clip information;
the video publishing module is used for sending out video segment production invitation information on the network platform according to the information of each episode.
8. A video social system in accordance with the method of claim 1, wherein: comprises a user client side and a server side,
the user client comprises an information acquisition module which is used for acquiring set video scenario information;
the server side comprises the following structures:
the information processing module splits the video episodes to obtain a plurality of episode clip information;
the video publishing module is used for sending out video segment production invitation information on the network platform according to the information of each episode.
CN202010890993.XA 2020-08-29 2020-08-29 Video content package making method, client and system Active CN112218102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010890993.XA CN112218102B (en) 2020-08-29 2020-08-29 Video content package making method, client and system

Publications (2)

Publication Number Publication Date
CN112218102A CN112218102A (en) 2021-01-12
CN112218102B true CN112218102B (en) 2024-01-26

Family

ID=74059211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010890993.XA Active CN112218102B (en) 2020-08-29 2020-08-29 Video content package making method, client and system

Country Status (1)

Country Link
CN (1) CN112218102B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101390032A (en) * 2006-01-05 2009-03-18 眼点公司 System and methods for storing, editing, and sharing digital video
CN103384311A (en) * 2013-07-18 2013-11-06 博大龙 Method for generating interactive videos in batch mode automatically
CN103905742A (en) * 2014-04-10 2014-07-02 北京数码视讯科技股份有限公司 Video file segmentation method and device
CN105122789A (en) * 2012-12-12 2015-12-02 克劳德弗里克公司 Digital platform for user-generated video synchronized editing
CN105794213A (en) * 2013-11-26 2016-07-20 谷歌公司 Collaborative video editing in cloud environment
CN105868292A (en) * 2016-03-23 2016-08-17 中山大学 Video visualization processing method and system
CN106649713A (en) * 2016-12-21 2017-05-10 中山大学 Movie visualization processing method and system based on content
CN108933970A (en) * 2017-05-27 2018-12-04 北京搜狗科技发展有限公司 The generation method and device of video
CN109194887A (en) * 2018-10-26 2019-01-11 北京亿幕信息技术有限公司 A kind of cloud cuts video record and clipping method and plug-in unit
CN111277905A (en) * 2020-03-09 2020-06-12 新华智云科技有限公司 Online collaborative video editing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9106804B2 (en) * 2007-09-28 2015-08-11 Gracenote, Inc. Synthesizing a presentation of a multimedia event
US8341525B1 (en) * 2011-06-03 2012-12-25 Starsvu Corporation System and methods for collaborative online multimedia production

Also Published As

Publication number Publication date
CN112218102A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
US11477268B2 (en) Sharing digital media assets for presentation within an online social network
CN112188117B (en) Video synthesis method, client and system
EP3488618B1 (en) Live video streaming services with machine-learning based highlight replays
EP3475848B1 (en) Generating theme-based videos
CN103412951A (en) Individual-photo-based human network correlation analysis and management system and method
CN104394437B (en) A kind of online live method and system that start broadcasting
CN102945276A (en) Generation and update based on event playback experience
CN111279709B (en) Providing video recommendations
CN111368141B (en) Video tag expansion method, device, computer equipment and storage medium
CN106462810A (en) Connecting current user activities with related stored media collections
JP5418565B2 (en) Image display system, image display apparatus, server, image display method and program
WO2022078167A1 (en) Interactive video creation method and apparatus, device, and readable storage medium
US20190034536A1 (en) Cue data model implementation for adaptive presentation of collaborative recollections of memories
CN102455906B (en) Method and system for changing player skin
US20230156245A1 (en) Systems and methods for processing and presenting media data to allow virtual engagement in events
CN112218102B (en) Video content package making method, client and system
JP2010251841A (en) Image extraction program and image extraction device
CN112804273B (en) Multimedia content recommendation and interaction system and method under ubiquitous scene
US11330307B2 (en) Systems and methods for generating new content structures from content segments
CN112188116A (en) Video synthesis method, client and system based on object
CN109905766A (en) A kind of dynamic video poster generation method, system, device and storage medium
JP6830634B1 (en) Information processing method, information processing device and computer program
Lough Two days, twenty outfits: Coachella attendees’ visual presentation of self and experience on Instagram
US11317132B2 (en) Systems and methods for generating new content segments based on object name identification
Abdallah et al. The Effect of Mobile Journalism (MOJO) on Spreading News through Social Media Platforms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant