TW201939322A - Film and television works production method, apparatus, and device - Google Patents

Film and television works production method, apparatus, and device

Info

Publication number
TW201939322A
Authority
TW
Taiwan
Prior art keywords
television
film
script
actor
user
Prior art date
Application number
TW107147329A
Other languages
Chinese (zh)
Other versions
TWI713965B (en)
Inventor
邵帥
Original Assignee
香港商阿里巴巴集團服務有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港商阿里巴巴集團服務有限公司
Publication of TW201939322A
Application granted
Publication of TWI713965B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a method, apparatus, and device for producing film and television works. The method comprises: analyzing, by a pre-trained analysis model, film and television elements input by a user to determine the feature attributes corresponding to those elements; filtering, from pre-collected video materials, the materials that match the determined feature attributes; and arranging the filtered materials by a pre-trained production model together with the determined feature attributes, thereby obtaining a film and television work.

Description

Production method, apparatus, and device for film and television works

This specification relates to the field of computer technology, and in particular to a method, apparatus, and device for producing film and television works.

At present, the film and television industry is developing rapidly, and a large number of domestic and foreign productions are screened in cinemas and broadcast on television stations across the country, bringing viewers a rich and diverse viewing experience.
In practice, producing a film or television work is a complex process: from scriptwriting and casting to shooting and post-production, it usually takes a long time to complete. Moreover, the visual style, performance style, and costume characteristics of a work are largely determined by its director, screenwriter, and actors. Because different users favor different directors and actors, a given work typically appeals to only a subset of users.
Given the state of the art, a more effective way of producing film and television works is needed.

This specification provides a method for producing film and television works, intended to solve the prior-art problems of high cost, low efficiency, and inability to meet user needs in producing such works.
The method provided by this specification includes:
analyzing film and television elements input by a user through a pre-trained analysis model to determine the feature attributes corresponding to the elements;
filtering, from pre-collected video materials, the video materials that match the feature attributes; and
arranging the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
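The three claimed steps can be sketched in code. This is a minimal illustration of the data flow only, with the pre-trained "models" replaced by trivial stand-ins; all names (analyze_elements, filter_materials, arrange) and attribute strings are illustrative, not from the patent.

```python
def analyze_elements(elements):
    """Stand-in for the pre-trained analysis model: map each film and
    television element to a set of feature attributes."""
    known = {
        "director_a": {"style:black_humor"},
        "actor_b": {"expression:reserved", "build:tall"},
        "script_c": {"era:1930s", "genre:crime"},
    }
    attrs = set()
    for e in elements:
        attrs |= known.get(e, set())
    return attrs

def filter_materials(materials, attrs):
    """Keep pre-collected video materials sharing at least one feature
    attribute with the user's elements."""
    return [m for m in materials if m["attrs"] & attrs]

def arrange(materials, attrs):
    """Stand-in for the production model: order the filtered materials
    (here, simply by scene index) into a 'work'."""
    return [m["clip"] for m in sorted(materials, key=lambda m: m["scene"])]

materials = [
    {"clip": "alley.mp4", "scene": 2, "attrs": {"era:1930s"}},
    {"clip": "ballroom.mp4", "scene": 1, "attrs": {"genre:crime"}},
    {"clip": "beach.mp4", "scene": 3, "attrs": {"era:2020s"}},
]
attrs = analyze_elements(["director_a", "script_c"])
work = arrange(filter_materials(materials, attrs), attrs)
```

In a real system each stand-in would be a trained model; the sketch only fixes the interfaces between the three steps.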
This specification provides an apparatus for producing film and television works, intended to solve the same prior-art problems of high cost, low efficiency, and inability to meet user needs.
The apparatus provided by this specification includes:
an analysis module, which analyzes film and television elements input by a user through a pre-trained analysis model and determines the feature attributes corresponding to the elements;
a filtering module, which filters, from pre-collected video materials, the video materials that match the feature attributes; and
an arrangement module, which arranges the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
This specification provides a device for producing film and television works, intended to solve the same prior-art problems.
The device provided by this specification includes one or more memories and one or more processors; the memories store programs configured to be executed by the one or more processors to perform the following steps:
analyzing film and television elements input by a user through a pre-trained analysis model to determine the feature attributes corresponding to the elements;
filtering, from pre-collected video materials, the video materials that match the feature attributes; and
arranging the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
At least one of the technical solutions adopted in this specification can achieve the following beneficial effects:
In one or more embodiments of this specification, film and television elements input by a user can be analyzed through a pre-trained analysis model to determine the feature attributes corresponding to the elements; video materials matching those attributes are then filtered from pre-collected video materials; and the filtered materials are arranged through a pre-trained production model and the determined feature attributes to obtain a film and television work.
As can be seen from the above method, a user can complete the production of a film and television work simply through self-selected film and television elements, relying on the pre-trained analysis model and production model. This not only greatly reduces production cost and improves production efficiency; because the resulting work is produced from the elements selected by the user, it also caters well to the user's needs and thereby improves the user's viewing experience to some extent.

In this specification, a user can produce a desired film and television work based on the user's own preferences and needs, as shown in FIG. 1.
FIG. 1 is a schematic diagram, provided by this specification, of producing a film and television work by combining the determined feature attributes.
As shown in FIG. 1, a pre-trained analysis model can determine feature attributes such as a director's shooting style (e.g., black humor or stylized violence, as shown in FIG. 1), an actor's facial expressions and body movements, and a script's genre and historical setting. A pre-trained production model can then aggregate these determined feature attributes to produce a film and television work that embodies them, for the user to watch.
The entity executing the above production method may be a terminal such as a computer, or a server. A user may input the film and television elements needed to produce a work on a terminal such as a mobile phone or tablet, and the terminal sends them to the server. Using the elements sent by the terminal, the server can produce the work with the pre-trained analysis model and production model. For ease of description, the following takes the server as the executing entity when describing the production method provided by this specification.
To enable those skilled in the art to better understand the technical solutions in one or more embodiments of this specification, the technical solutions are described below clearly and completely with reference to the accompanying drawings of those embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this specification without creative effort shall fall within the protection scope of this specification.
FIG. 2 is a schematic diagram of the production process for a film and television work provided by this specification, which specifically includes the following steps:
S200: Analyze film and television elements input by a user through a pre-trained analysis model, and determine the feature attributes corresponding to the elements.
In this specification, a user can input film and television elements on a terminal the user holds, and the terminal can send these elements to the server so that the server produces, from them, a work the user will like. The terminal mentioned here may be a device such as a mobile phone, tablet, or desktop computer.
The terminal may treat film and television elements that the user fills in on the terminal interface as the user's input, or treat elements that the user selects on the interface as the user's input. Since, in practice, a script, a director, and actors are usually required to shoot a work, the film and television elements mentioned here may include a script selected by the user and film and television personnel selected by the user. The personnel may be a director selected by the user and at least one actor selected by the user; an actor here may be a real actor or a virtual actor chosen by the user (such as a character from an animation).
In practice, different users like different directors and different actors. For the same script, then, the server can use the different directors and actors selected by different users to produce works suited to each user's taste, thereby satisfying users' viewing needs to a considerable extent and effectively tailoring works to their demands.
In this specification, the server may contain two analysis models: one for analyzing scripts, which may be called the first analysis model, and one for analyzing the feature attributes of film and television personnel, which may be called the second analysis model.
On this basis, after determining the script selected by the user, the server can analyze the script's text content through the pre-trained analysis model to determine the script's feature attributes. The terminal may send the name of the script entered by the user to the server, and the server may retrieve the script's text from the network by that name; alternatively, the terminal may send a text file of the script uploaded by the user, and the server analyzes the text contained in that file through the first analysis model to determine the script's feature attributes.
The script's feature attributes mentioned here may include the script's historical setting, environment, genre, character relationships, character traits, role status (that is, which roles in the script are leading roles and which are supporting roles), plot atmosphere, and so on. The server can subsequently use these determined feature attributes to filter video materials for producing the work.
Similarly, in this specification, the server may analyze the director selected by the user through the second analysis model to determine the director's feature attributes. Specifically, after determining the director selected by the user, the server can obtain the director's works from the network and analyze them through the second analysis model to determine the director's feature attributes, which may include shooting style, narrative technique, and so on.
It should be noted that, in practice, the shooting styles and narrative techniques exhibited across a director's works may differ. The server can therefore analyze each of the director's works through the second analysis model to determine the narrative technique and shooting style of each, then count the number of works corresponding to each shooting style and the number shot with each narrative technique, and thereby determine the director's most frequently used narrative technique and most pronounced shooting style.
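The counting step above is a plain frequency statistic. A minimal sketch, assuming the second analysis model has already produced one style label and one narrative label per work (the labels below are made up for illustration):

```python
from collections import Counter

def dominant(labels):
    """Return the most frequent label among a director's works."""
    return Counter(labels).most_common(1)[0][0]

# Illustrative per-work labels for one director.
works = [
    {"style": "black_humor", "narrative": "nonlinear"},
    {"style": "black_humor", "narrative": "flashback"},
    {"style": "realism",     "narrative": "nonlinear"},
]
dominant_style = dominant(w["style"] for w in works)
dominant_narrative = dominant(w["narrative"] for w in works)
```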
In this specification, after determining the at least one actor selected by the user, the server can determine each actor's feature attributes through the second analysis model. Specifically, for each actor selected by the user, the server can obtain that actor's works from the network and input them into the second analysis model to determine the actor's feature attributes.
The actor's feature attributes mentioned here include body characteristics, role types the actor excels at, voiceprint features, facial expression parameters (used to characterize the actor's facial features under different expressions), and so on.
It should be noted that, in this specification, the server needs to train the analysis models (including the first and second analysis models) before using them. Specifically, the personnel training an analysis model may collect some film and television elements (including various scripts, directors, actors, and so on) in advance as sample elements and manually label the feature attributes corresponding to each sample. These sample elements can then be input into the to-be-trained analysis model in the server, and the model is trained against the labeled feature attributes.
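The training data described here is a set of manually labeled (element, attributes) pairs. As a toy stand-in only, the "trained model" below is a lookup table built from those pairs, which is enough to show the data flow; a real analysis model would generalize beyond the samples. All names are illustrative.

```python
def train_analysis_model(samples):
    """samples: list of (element, feature_attributes) pairs labeled by hand.
    Returns a toy 'model' that memorizes the labeled pairs."""
    return dict(samples)

def predict(model, element, default=frozenset()):
    """Look up an element's feature attributes; unknown elements get none."""
    return model.get(element, default)

model = train_analysis_model([
    ("script_x", {"era:1990s", "genre:comedy"}),
    ("actor_y", {"voice:baritone"}),
])
```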
S202: Filter, from the pre-collected video materials, the video materials that match the feature attributes.
In this specification, the server holds a large amount of pre-collected video material. After determining the feature attributes corresponding to the film and television elements selected by the user, the server can filter out, according to those attributes, the video materials that match them, and use the filtered materials to produce the work in the subsequent process.
For a film and television work, the backgrounds and environments it contains should be closely tied to the script itself. The server can therefore determine, based on the script's determined feature attributes (such as historical setting, environment, and plot atmosphere), which of the pre-collected video materials match them.
Of course, in this specification, the server can also filter, from the pre-collected materials, video materials matching the actors' determined feature attributes, so that in the subsequent process the actors' video images contained in these materials are used to produce the work.
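One plausible reading of the matching in S202 is attribute overlap: score each pre-collected clip by how many of the script's feature attributes it carries and keep clips above a threshold. The patent does not specify the matching rule, so this is a hedged sketch with illustrative field names.

```python
def match_score(clip_attrs, script_attrs):
    """Number of feature attributes a clip shares with the script."""
    return len(clip_attrs & script_attrs)

def select(clips, script_attrs, min_score=2):
    """Keep clips whose overlap with the script's attributes meets the threshold."""
    return [c["name"] for c in clips if match_score(c["attrs"], script_attrs) >= min_score]

script_attrs = {"era:1930s", "env:city", "mood:tense"}
clips = [
    {"name": "street.mp4", "attrs": {"era:1930s", "env:city"}},
    {"name": "farm.mp4", "attrs": {"env:rural"}},
]
```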
S204: Arrange the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
The server can arrange the filtered video materials through the pre-trained production model, based on the determined feature attributes of the various film and television elements, to produce the work. Specifically, for a given script, different directors differ in narrative technique and shooting style, so the finished works will differ in the order in which the plot is told and in the points of emphasis.
On this basis, in this specification, the script's content arrangement mode can be determined through the production model, the script's determined feature attributes, and the director's determined feature attributes, and the filtered video materials can then be arranged according to that mode to obtain a first film and television work.
In other words, through the production model, combining the script's feature attributes with the director's feature attributes, the server can determine in what form the script's plot development order and points of emphasis should be presented. That plot development order and those points of emphasis constitute the script's content arrangement mode mentioned here.
After obtaining the first work, the server can further add the video images of the determined at least one actor into the first work according to the script's content arrangement mode, obtaining a second film and television work. The video images of the at least one actor may have been filtered earlier by the server from the pre-collected materials according to each actor's determined feature attributes.
The roles in a work appear at different moments as its plot unfolds. Once a script is shot into a work by a director, the plot development order and points of emphasis change relative to the script's originals, and accordingly the moments at which each role appears in the work change as well.
The server can therefore determine, through the first analysis model and for each role in the script, the moments at which that role appears. It can then take those moments as feature attributes of the script and, combining the determined correspondence between the user-selected actors and the script's roles, determine through the production model the moments at which each selected actor should appear in the script under the content arrangement mode.
The correspondence between the user-selected actors and the script's roles mentioned here may be determined by the server from the user's choices. After analyzing the script through the first analysis model, the server can present to the user, via the terminal, the script's roles, the relationships among them, and each role's status (leading role, supporting role, and so on). Based on this, the user can enter on the terminal the actor corresponding to each role, allowing the server to determine the correspondence between the selected actors and the script's roles.
According to the determined moments at which each selected actor appears in the script under the content arrangement mode, the server can add each actor's video images into the first work to obtain the second work. That is, building on the first work, for each actor the server adds the actor's video image, following the script's plot development order, at each moment where that actor's role appears in the first work.
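The step above combines two mappings: the script's role-to-appearance-times attribute (from the first analysis model) and the user's role-to-actor casting. A minimal sketch of composing them into a per-actor insertion schedule, with illustrative data:

```python
def actor_schedule(role_times, casting):
    """role_times: {role: [appearance moments]}; casting: {role: actor}.
    Returns {actor: sorted appearance moments} for cast roles."""
    schedule = {}
    for role, times in role_times.items():
        actor = casting.get(role)
        if actor is not None:
            schedule.setdefault(actor, []).extend(times)
    return {a: sorted(t) for a, t in schedule.items()}

role_times = {"hero": [10, 42], "villain": [42]}
casting = {"hero": "actor_b", "villain": "actor_c"}
```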
Of course, in this specification, after determining the script's content arrangement mode, the server may also derive a new script under that mode and input it into the first analysis model to determine the moments at which each role appears in the new script, and thereby the moments at which each actor appears in the first work corresponding to the new script (since both the new script and the first work are obtained on the basis of the content arrangement mode, the first work corresponds to the new script).
For a given work, an actor's body movements, facial expressions, and so on vary with the work's plot atmosphere. Therefore, in this specification, for each actor the server can determine the actor's appearance state at each moment of the work, based on the role the actor plays in the script, the moments at which that role appears in the script under the content arrangement mode, and the appearance state of that role at each of those moments. The appearance state mentioned here characterizes what body movements, facial expressions, and so on an actor should exhibit at a given moment of the work. The server can analyze the script through the first analysis model and take the determined appearance state of each role at each moment as a feature attribute of the script.
For example, if it is determined that the plot atmosphere in a segment of the second work is tense, it can be determined that the appearance state the actor's role should exhibit in that segment is a tense, solemn expression.
After determining each actor's appearance state at each moment of the second work, the server can adjust each actor's video images according to the actor's determined feature attributes.
Beforehand, through the second analysis model, the server can determine each actor's facial expression parameters under different emotions, and then, through the production model, adjust each actor's face where the actor appears in the second work according to those parameters, to suit the plot atmosphere. Likewise, the actors' body movements under different plot atmospheres in the second work can be adjusted based on their determined body characteristics, as shown in FIG. 3.
FIG. 3 is a schematic diagram, provided by this specification, of the server adjusting an actor's video images.
Suppose the server determines that role A, played by actor A, appears at moment A of the second work; a frame of actor A's video image can then be added at moment A of the second work. Then, according to the determined plot atmosphere and plot development order of the script, the server can determine that at moment A actor A should be in an angry, striding-forward appearance state, and, based on actor A's determined facial expression parameters and body characteristics, adjust that frame to obtain actor A's adjusted video image.
After adjusting the appearance states of the actors at each moment of the second work, the server can apply processing such as adding a soundtrack and subtitles to the adjusted second work to obtain the final work, and return it to the user for viewing.
The server can translate the script's text into the language selected by the user through the first analysis model and determine the lines corresponding to each role. Then, according to the moments at which each role appears in the second work (or the first work), it can determine the moments at which each role's lines occur, convert the lines into subtitles, and add them at the corresponding moments of the second work (or the first work).
According to the determined plot atmospheres contained in the script, the server can determine, from pre-collected soundtrack materials, the soundtrack material matching each atmosphere, and add each soundtrack material into the second work at the moments where the corresponding atmosphere occurs.
As can be seen from the above method, a user can complete the production of a film and television work simply through self-selected film and television elements, relying on the pre-trained analysis model and production model. This not only greatly reduces production cost and improves production efficiency; because the resulting work is produced from the elements selected by the user, it also caters well to the user's needs and thereby improves the user's viewing experience to some extent.
It should be noted that the production model mentioned above may be trained by the personnel training it, using the labeled sample feature attributes corresponding to each sample film and television element together with collected standard works corresponding to those sample elements. For example, suppose the training personnel manually determine the feature attributes of director A and actors B, C, and D, and take works found on the network that were shot by director A and feature actors B, C, and D as standard works; the production model can then be trained on these determined feature attributes and the standard works.
Since a script may involve a large number of characters, such as an army or crowds in a busy street, the server can filter some character materials from pre-collected character materials according to the script's feature attributes, and add the video images corresponding to these materials at suitable moments of the second work (or the first work).
The above is the method for producing a film and television work provided by one or more embodiments of this specification. Based on the same idea, this specification further provides a corresponding production apparatus, as shown in FIG. 4.
FIG. 4 is a schematic diagram of an apparatus for producing film and television works provided by this specification, which specifically includes:
an analysis module 401, which analyzes film and television elements input by a user through a pre-trained analysis model and determines the feature attributes corresponding to the elements;
a filtering module 402, which filters, from pre-collected video materials, the video materials that match the feature attributes;
an arrangement module 403, which arranges the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
The apparatus further includes:
a first training module 404, which trains the analysis model using pre-collected sample film and television elements and the labeled feature attributes corresponding to those samples.
The film and television elements input by the user include a script selected by the user and film and television personnel selected by the user; the personnel include at least one of a director selected by the user and at least one actor selected by the user.
The analysis model includes a first analysis model and a second analysis model.
When the film and television element is the script selected by the user, the analysis module 401 analyzes the script through the first analysis model to determine the script's feature attributes; when the element is the director selected by the user, it determines the director's works and analyzes them through the second analysis model to determine the director's feature attributes; when the element is the at least one actor selected by the user, then for each selected actor it determines that actor's works and analyzes them through the second analysis model to determine the actor's feature attributes.
The arrangement module 403 determines the script's content arrangement mode through the production model, the script's feature attributes, and the director's feature attributes; arranges the filtered video materials according to that mode to obtain a first film and television work; adds the determined video images of the at least one actor into the first work according to the mode to obtain a second film and television work; and, for the video images of each actor in the second work, adjusts them according to the mode and the actor's determined feature attributes, taking the adjusted second work as the produced work.
The script's feature attributes include the moments at which each role in the script appears.
For each actor, the arrangement module 403 determines the role the actor plays in the script according to the determined correspondence between the at least one selected actor and the script's roles; determines, through the production model and according to the determined appearance moments of each role and the roles the at least one actor plays, the moments at which the at least one actor appears in the script under the content arrangement mode; and adds the at least one actor's video images into the first work according to those moments to obtain the second work.
The script's feature attributes further include the appearance state of each role at each moment.
For each actor, the arrangement module 403 determines the actor's appearance state at each moment of the second work according to the role the actor plays in the script, the moments at which each role appears in the script under the content arrangement mode, and the appearance state of each role at each moment, and performs appearance adjustment on the actor's video images according to the determined states and the actor's feature attributes.
The apparatus further includes:
a second training module 405, which trains the production model using the pre-labeled sample feature attributes corresponding to each sample film and television element and the collected standard works corresponding to those samples.
The first training module 404 and the second training module 405 may also be a single module for training the models to be trained (including the analysis model and the production model).
Based on the production method described above, this specification correspondingly provides a device for producing film and television works, as shown in FIG. 5. The device includes one or more memories and one or more processors; the memories store programs configured to be executed by the one or more processors to perform the following steps:
analyzing film and television elements input by a user through a pre-trained analysis model to determine the feature attributes corresponding to the elements;
filtering, from pre-collected video materials, the video materials that match the feature attributes; and
arranging the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.
In one or more embodiments of this specification, film and television elements input by a user can be analyzed through a pre-trained analysis model to determine the feature attributes corresponding to the elements; video materials matching those attributes are then filtered from pre-collected video materials; and the filtered materials are arranged through a pre-trained production model and the determined feature attributes to obtain a film and television work.
As can be seen from the above method, a user can complete the production of a film and television work simply through self-selected film and television elements, relying on the pre-trained analysis model and production model. This not only greatly reduces production cost and improves production efficiency; because the resulting work is produced from the elements selected by the user, it also caters well to the user's needs and thereby improves the user's viewing experience to some extent.
In the 1990s, it was clear whether a technical improvement was an improvement in hardware (for example, to circuit structures such as diodes, transistors, and switches) or in software (an improvement to a method flow). As technology has developed, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is now mostly done with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art will also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
A controller may be implemented in any suitable manner. For example, a controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the memory's control logic. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component, or even as both software modules implementing methods and structures within the hardware component.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or entity, or by a product having certain functions. A typical implementation device is a computer, which may be, for example, a personal computer, laptop, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email device, game console, tablet, wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described with its functions divided into various units. Of course, when implementing this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art will understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk memory, CD-ROM, and optical memory) containing computer-usable program code.
This specification is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to one or more embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms such as non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, commodity, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, commodity, or device that includes it.
This specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. One or more embodiments of this specification may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on what differs from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the description of the method embodiment.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The above descriptions are merely one or more embodiments of this specification and are not intended to limit it. For those skilled in the art, one or more embodiments of this specification may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall be included within the scope of the claims of this specification.
In this manual, users can make required film and television works based on their own preferences and needs, as shown in Figure 1.
FIG. 1 is a schematic diagram of the production of a film and television work by combining the determined various characteristic attributes provided in the present specification.
As can be seen from Figure 1, the director's shooting style (such as black humor, violent aesthetics, etc.), actor's facial expressions, body movements and other characteristic attributes can be determined through a pre-trained analysis model. And the script's feature type, era background, and other characteristic attributes. Then, the determined characteristic attributes can be aggregated through a pre-trained production model to produce a film and television work that can reflect these characteristic attributes for users to perform. Watch.
Wherein, the execution subject of the method for producing the above-mentioned film and television works may be a terminal such as a computer or a server, and a user may input the film and television elements required for the production of a film and television in a terminal such as a mobile phone, a tablet computer, and the like through the terminal. Send to server. The server can use the pre-trained analysis elements and production models to produce film and television works through the film and television elements sent by the terminal. In order to facilitate the subsequent description, the following will only use the server as the main body to explain the production method of the film and television works provided in this manual.
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the drawings in one or more embodiments of the present specification will be combined below with reference to the drawings in one or more embodiments of the present specification. The technical solution is clearly and completely described. Obviously, the described embodiments are only a part of the embodiments of the specification, but not all the embodiments. Based on the embodiments in this specification, all other embodiments obtained by a person of ordinary skill in the art without creative efforts should fall within the protection scope of this specification.
Figure 2 is a schematic diagram of the production process of the film and television works provided in this specification, which specifically includes the following steps:
S200: Analyze the movie and television elements input by the user through a pre-trained analysis model to determine the feature attributes corresponding to the movie and television elements.
In this manual, the user can enter the movie and television elements in the terminal he / she owns, and the terminal can send these movie and television elements to the server, so that the server can create the user's favorite through these movie and television elements input by the user. Film and television works. The terminal mentioned here may refer to devices such as a mobile phone, a tablet computer, and a desktop computer.
The terminal may determine the film and television elements filled in by the user in the terminal interface as the film and television elements entered by the user, and may also determine the film and television elements selected by the user in the terminal interface as the film and television elements entered by the user. Since in actual applications, a script, a director and an actor shooting a film and television work are usually required to shoot a film and television work, the film and television elements mentioned here may include a script selected by the user and a film and television candidate selected by the user. The film and television candidates mentioned here may be the director selected by the user and the user selected at least one actor. The actor mentioned here may be a real actor or a virtual actor selected by the user (such as an animation Characters).
In practical applications, because the user likes different directors and favorite actors, so for the same script, the server can produce different directors and actors selected by the user, which can be suitable for different users. , In accordance with the appetite of different users of various film and television works, to a certain extent, greatly meet the user's viewing needs, to achieve the user's needs can be customized film and television works.
In this specification, the server may include two analysis models. One analysis model can be used to analyze the script. The analysis model can be called the first analysis model, and it is used to analyze the feature attributes of the film and television candidate. The model can be called the second analysis model.
Based on this, after the server determines the script selected by the user, it can analyze the text content of the script through a pre-trained analysis model to determine the characteristic attributes of the script. The terminal may send the name of the script entered by the user to the server, and the server may obtain the text content of the script from the network through the name of the script. Of course, the terminal may also upload the script ’s content uploaded by the user. The text file is sent to the server, and the server can analyze the text content contained in the text file of the script through the first analysis model to determine the characteristic attributes of the script.
The characteristic attributes of the script mentioned here can include the era's background, environment, type of work, character relationship, character characteristics, and role status (the so-called role status refers to which characters are the protagonists and supporting actors in the script), The atmosphere of the plot and so on. The server can subsequently filter the video materials based on these determined attribute attributes to produce film and television works.
Similarly, in this specification, the server may analyze the director selected by the user through the second analysis model to determine the feature attribute corresponding to the director. Specifically, after the server determines the director selected by the user, the server can obtain the film and television works of the director from the network. Then, the server can analyze the director's film and television works through the second analysis model to determine the characteristic attributes corresponding to the director. Among them, the characteristic attributes corresponding to the director mentioned here may include: shooting style, narrative means, and the like.
It should be noted that in practical applications, the filming styles and narrative methods adopted by each film and television work shot by a director may be different. Therefore, the server can analyze each director's film and television works through the second analysis model to determine the narrative means and the shooting style reflected by the director in shooting each film and television work. Then, the server can count the number of film and television works corresponding to different shooting styles and the number of film and television works shot with different narrative methods, and then determine the most commonly used narrative method and the director's most obvious shooting style.
In this specification, after determining at least one actor selected by the user, the server may determine the feature attribute corresponding to each actor through the second analysis model. Specifically, the server determines each actor selected by the user, and can obtain, for each actor, each film and television work of the actor from the network, and then inputs these film and television works into the second analysis model to determine The feature attribute corresponding to this actor.
Among them, the feature attributes corresponding to the actor mentioned here include: physical characteristics, good at character types, voiceprint features, facial expression parameters (the facial expression parameters are used to characterize the actor's facial characteristics under different expressions), and so on.
It should be noted that, in this specification, the server needs to train the analysis model before using the analysis model (including the first analysis model and the second analysis model). Specifically, the staff training the analysis model may collect some film and television elements (including various scripts, directors, actors, etc.) as sample film and television elements in advance, and manually mark the feature attributes corresponding to the sample film and television elements. Then, these sample film and television elements can be input into the analysis model to be trained contained in the server, and the analysis model is trained by combining the feature attributes corresponding to the labeled sample film and television elements.
S202: Screen video materials that match the characteristic attributes from each of the video materials collected in advance.
In this specification, the server contains a large amount of video material collected in advance. Based on this, after the server determines the feature attributes corresponding to the movie and television elements selected by the user, it can use the determined feature attributes to collect from the pre-collected Among the video materials, the video materials that match the determined feature attributes are selected, so that in the subsequent process, the film and television works are produced through the filtered video materials.
Among them, for a film and television work, its various backgrounds and environments should be closely related to the script itself. Therefore, the server can determine, based on the determined feature attributes of the script, from among a large number of video materials collected in advance, various video materials that match the feature attributes of the script (such as era background, environment, and atmosphere of the plot).
Of course, in this specification, the server may also select video materials that match the corresponding feature attributes of the actor from a large number of video materials collected in advance according to the determined feature attributes of the actor, so that in the subsequent process, The video images of the actors included in the determined video materials are used to produce film and television works.
S204: Arranging the filtered video material through a pre-trained production model and the feature attributes to obtain a film and television work.
The server can use the pre-trained production model to arrange the selected video materials based on the determined feature attributes corresponding to each film and television element to produce a film and television work. Specifically, for a script, due to the different narrative methods and shooting styles of different directors, there will also be differences in the order and focus of the plot in the final film and television works in the narrative script.
Based on this, in this specification, the content layout model of the script can be determined by using the production model, the characteristic attributes corresponding to the determined script, and the characteristic attributes corresponding to the director, and then based on the determined content scheduling mode of the script. , Arrange the selected video material to get the first film and television work.
In other words, through the production model, the server combines the feature attributes of the script with the feature attributes of the director to determine the order in which the plot should develop and the content that should be emphasized. This plot development order and content emphasis constitute the content arrangement mode of the script.
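The notion of a content arrangement mode can be illustrated with a toy example. The "director styles" and segment fields below are invented assumptions; the specification's production model would learn such preferences rather than hard-code them.

```python
# Hypothetical sketch of a content arrangement mode: reorder the script's
# plot segments according to a director-style preference.
def arrange_plot(segments, director_style):
    if director_style == "chronological":
        ordered = sorted(segments, key=lambda s: s["script_order"])
    elif director_style == "flashback":
        # Open on the most intense segment, then tell the rest in order.
        climax = max(segments, key=lambda s: s["intensity"])
        rest = sorted((s for s in segments if s is not climax),
                      key=lambda s: s["script_order"])
        ordered = [climax] + rest
    else:
        raise ValueError(f"unknown style: {director_style}")
    return [s["name"] for s in ordered]

# Invented plot segments with a script order and an intensity score.
segments = [
    {"name": "setup", "script_order": 0, "intensity": 0.2},
    {"name": "confrontation", "script_order": 1, "intensity": 0.9},
    {"name": "resolution", "script_order": 2, "intensity": 0.5},
]
print(arrange_plot(segments, "flashback"))  # ['confrontation', 'setup', 'resolution']
```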
After determining the first film and television work, the server may further add the video image of the determined at least one actor to the first film and television work according to the content arrangement mode of the script, to obtain a second film and television work. The video image of each actor may be selected by the server from the pre-collected video materials according to the feature attributes determined for that actor.
The characters in a film and television work appear at different moments according to the plot sequence of the work. After a script is filmed by a director, the plot development order, content emphasis, and so on may differ from those of the original script; accordingly, the moments at which the script's characters appear will change as well.
Therefore, for each character in the script, the server can determine through the first analysis model the moments at which that character appears. The server may then treat these moments as feature attributes of the script and, through the production model, combine them with the determined correspondence between the actors selected by the user and the characters in the script, to determine the moments at which each actor selected by the user should appear in the script under the content arrangement mode.
The correspondence between the actors selected by the user and the characters in the script may be determined as follows. After the server analyzes the script through the first analysis model, it can display to the user, through the terminal, each character in the script, the relationships between the characters, and the status of each character (such as protagonist or supporting role). Based on this information, the user can input in the terminal the actor corresponding to each character, allowing the server to determine the correspondence between the actors selected by the user and the characters in the script.
The server may then add the video image of each actor to the first film and television work at the moments at which that actor appears in the script under the content arrangement mode, to obtain the second film and television work. That is, on the basis of the first film and television work, the server may, for each actor, follow the plot development order of the script and add the actor's video image at every moment at which the actor's corresponding character appears in the first film and television work.
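The insertion step just described can be sketched as a timeline overlay. The frame and appearance structures below are hypothetical simplifications: real video compositing would operate on image data, not frame dictionaries.

```python
# Hypothetical sketch: given the moments at which each actor's character
# appears under the content arrangement mode, overlay the actor's video
# image onto the first work's timeline to form the second work.
def add_actor_images(timeline, appearances):
    """timeline: list of frame dicts; appearances: {actor: [frame indices]}."""
    result = [dict(frame) for frame in timeline]  # don't mutate the first work
    for actor, moments in appearances.items():
        for t in moments:
            result[t].setdefault("actors", []).append(actor)
    return result

first_work = [{"t": t} for t in range(4)]
second_work = add_actor_images(first_work, {"actor_A": [0, 2], "actor_B": [2]})
print(second_work[2]["actors"])  # ['actor_A', 'actor_B']
```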
Of course, in this specification, after determining the content arrangement mode of the script, the server may also generate a new script under that content arrangement mode, input the new script into the first analysis model to determine the moments at which each character appears in the new script, and then determine the moments at which each actor appears in the first film and television work (since the new script and the first film and television work are both based on the content arrangement mode, the first film and television work corresponds to the new script).
For a film and television work, the actors' body movements, facial expressions, and so on will differ under different plot atmospheres. Therefore, in this specification, for each actor, the server may determine, based on the actor's corresponding character in the script under the content arrangement mode and the appearance state of that character at each moment, the appearance state of the actor at each moment in the film and television work. The appearance state here represents the body movements, facial expressions, and so on that the actor should exhibit at a given moment in the film and television work. The server may analyze the script through the first analysis model and treat the determined appearance state of each character in the script as a feature attribute of the script.
For example, if the atmosphere of the plot in a section of the second film and television work is determined to be tense, it can be determined that the appearance state of the character corresponding to the actor in this section should be a tense and solemn expression.
After the server determines the appearance state of each actor at each moment in the second film and television work, it may adjust the video image of each actor according to the feature attributes determined for that actor.
Specifically, the server can determine in advance, through the second analysis model, the facial expression parameters of each actor under different emotions, and then use the production model to adjust the face of each actor appearing in the second film and television work so that it matches the atmosphere of the plot. Similarly, the body movements of the actors under different plot atmospheres in the second film and television work can be adjusted based on the determined physical characteristics of the actors, as shown in FIG. 3.
FIG. 3 is a schematic diagram of adjusting a video image of each actor by a server provided in this specification.
Assume the server determines that character A, corresponding to actor A, appears at time A in the second film and television work; a frame of actor A's video image can then be added at time A. The server may further determine, according to the determined plot atmosphere and the plot development order of the script, that actor A should have an angry expression and be striding forward at time A, and then adjust actor A's video image at time A based on actor A's corresponding facial expression parameters, physical characteristics, and the like, to obtain the adjusted video image of actor A.
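The FIG. 3 example can be sketched as a lookup of per-emotion expression parameters followed by a frame adjustment. The parameter names (`brow_lower`, `mouth_open`), the emotion table, and the atmosphere-to-emotion mapping are all invented stand-ins for what the second analysis model would actually produce.

```python
# Hypothetical per-actor, per-emotion facial expression parameters, as the
# second analysis model might produce them.
EXPRESSION_PARAMS = {
    "angry": {"brow_lower": 0.8, "mouth_open": 0.3},
    "tense": {"brow_lower": 0.5, "mouth_open": 0.1},
    "calm":  {"brow_lower": 0.0, "mouth_open": 0.0},
}

def adjust_frame(frame, actor, plot_atmosphere):
    """Attach expression parameters matching the plot atmosphere to a frame."""
    emotion = {"tense": "tense", "battle": "angry"}.get(plot_atmosphere, "calm")
    adjusted = dict(frame)
    adjusted["expression"] = {actor: EXPRESSION_PARAMS[emotion]}
    return adjusted

frame = {"t": 7, "actors": ["actor_A"]}
out = adjust_frame(frame, "actor_A", "battle")
print(out["expression"]["actor_A"]["brow_lower"])  # 0.8
```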
After the server adjusts the appearance state of each actor at each moment in the second film and television work, it can apply processing such as adding a soundtrack and subtitles to the adjusted second film and television work, and then return the resulting final film and television work to the user for viewing.
The server may use the first analysis model to translate the text of the script into the language selected by the user and determine the lines corresponding to each character. Then, according to the moments at which each character appears in the second film and television work (or the first film and television work), the moments at which each character's lines should appear can be determined, and each line can be converted into subtitles and added at the corresponding moments.
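The subtitle step can be sketched as pairing each character's lines with that character's appearance moments. The data structures and example lines are invented; a production system would emit a real subtitle format with time ranges rather than single moments.

```python
# Hypothetical sketch: once each character's appearance moments are known,
# attach that character's translated lines as subtitles at those moments.
def build_subtitles(lines_by_character, appearance_moments):
    subtitles = []
    for character, lines in lines_by_character.items():
        for moment, text in zip(appearance_moments[character], lines):
            subtitles.append({"t": moment, "text": text})
    return sorted(subtitles, key=lambda s: s["t"])

lines = {"hero": ["Who goes there?", "Stand down."],
         "villain": ["It is I."]}
moments = {"hero": [3, 9], "villain": [5]}
subs = build_subtitles(lines, moments)
print([s["t"] for s in subs])  # [3, 5, 9]
```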
The server may determine, from the pre-collected soundtrack materials and according to the determined atmosphere of each plot section of the script, the soundtrack material that matches each atmosphere, and then add each soundtrack material to the second film and television work at the moments at which the corresponding atmosphere appears.
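The soundtrack step above can be sketched as atmosphere-keyed matching. The section/track fields and the "first candidate wins" rule are illustrative assumptions only.

```python
# Hypothetical sketch: match pre-collected soundtrack materials to each plot
# section by atmosphere and place them at the section's start moment.
def score_music(sections, music_library):
    cues = []
    for section in sections:
        candidates = [m for m in music_library
                      if m["atmosphere"] == section["atmosphere"]]
        if candidates:
            cues.append({"start": section["start"],
                         "track": candidates[0]["name"]})
    return cues

sections = [{"start": 0, "atmosphere": "calm"},
            {"start": 120, "atmosphere": "tense"}]
music = [{"name": "strings_slow", "atmosphere": "calm"},
         {"name": "drums_fast", "atmosphere": "tense"}]
print(score_music(sections, music))
```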
As can be seen from the above method, users can complete the production of film and television works from their self-selected film and television elements through pre-trained analysis and production models. This not only greatly reduces the production cost of film and television works but also improves production efficiency. Moreover, because the produced work is based on the film and television elements selected by the user, it can well meet the user's needs and thereby improve, to a certain extent, the user's viewing experience.
It should be noted that the production model may be trained by staff members who label in advance the sample feature attributes corresponding to each collected sample film and television element and the standard film and television work corresponding to each sample film and television element. For example, the staff may manually determine the feature attributes corresponding to director A and actors B, C, and D, obtain from the network film and television works in which director A and actors B, C, and D participated as standard film and television works, and then train the production model on the determined feature attributes and the standard film and television works.
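The labeling setup just described (sample attributes paired with standard works) can be illustrated with a toy retrieval "model". This is only a nearest-neighbour stand-in under invented attribute vectors; the specification's production model would be a trained generative/arrangement model, not a lookup.

```python
# Hypothetical sketch of the supervised setup: staff pair sample feature
# attributes with a standard work; a toy "model" retrieves the standard work
# whose sample attributes are closest to a new query.
def train(samples):
    # samples: list of (attribute vector, standard work id)
    return list(samples)  # a nearest-neighbour model just memorises the pairs

def predict(model, query):
    def distance(attrs):
        return sum((a - q) ** 2 for a, q in zip(attrs, query))
    return min(model, key=lambda pair: distance(pair[0]))[1]

model = train([((1.0, 0.0), "work_director_A"),
               ((0.0, 1.0), "work_actor_B")])
print(predict(model, (0.9, 0.2)))  # closest to director A's sample
```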
Since the script may involve a large number of characters (such as an army, or a crowd in a busy street), the server can filter out suitable character materials from the pre-collected character materials according to the feature attributes of the script, and add the corresponding video images of these characters at appropriate moments in the second film and television work (or the first film and television work).
The above is the method for producing a film and television work provided by one or more embodiments of this specification. Based on the same idea, this specification also provides a corresponding apparatus for producing a film and television work, as shown in FIG. 4.
FIG. 4 is a schematic diagram of a production device for a film and television work provided in this specification, which specifically includes:
The analysis module 401 analyzes the film and television elements input by a user through a pre-trained analysis model to determine the feature attributes corresponding to the film and television elements;
The screening module 402 filters, from the pre-collected video materials, the video materials that match the feature attributes;
The orchestration module 403 arranges the screened video material through a pre-trained production model and the feature attributes to obtain a film and television work.
The device further includes:
The first training module 404 trains the analysis model through pre-collected sample film and television elements and the labeled feature attributes corresponding to the sample film and television elements.
The film and television elements input by the user include: a script selected by the user and film and television candidates selected by the user; the film and television candidates include at least one of a director selected by the user and at least one actor selected by the user;
The analysis model includes a first analysis model and a second analysis model.
The analysis module 401: when the film and television element is a script selected by the user, analyzes the script through the first analysis model to determine the feature attributes corresponding to the script; when the film and television element is a director selected by the user, determines the director's film and television works and analyzes them through the second analysis model to determine the feature attributes corresponding to the director; and when the film and television element is at least one actor selected by the user, determines, for each actor selected by the user, that actor's film and television works and analyzes them through the second analysis model to determine the feature attributes corresponding to that actor.
The orchestration module 403 determines the content arrangement mode of the script through the production model, the feature attributes corresponding to the script, and the feature attributes corresponding to the director; arranges the screened video materials according to the content arrangement mode of the script to obtain a first film and television work; adds the determined video image of the at least one actor to the first film and television work according to the content arrangement mode of the script to obtain a second film and television work; and, for the video image of each actor in the second film and television work, adjusts the video image of the actor according to the content arrangement mode of the script and the feature attributes determined for the actor, taking the adjusted second film and television work as the produced film and television work.
The feature attributes corresponding to the script include: the moments at which each character in the script appears;
The orchestration module 403, for each actor: determines the actor's corresponding character in the script according to the determined correspondence between the at least one actor selected by the user and the characters in the script; determines, through the production model and according to the determined moments at which each character appears in the script and the characters corresponding to the at least one actor in the script, the moments at which the at least one actor appears in the script under the content arrangement mode; and adds the video image of the at least one actor to the first film and television work according to those moments to obtain the second film and television work.
The feature attributes corresponding to the script further include: the appearance state of each character in the script at each moment;
The orchestration module 403, for each actor: determines the appearance state of the actor at each moment in the second film and television work according to the actor's determined character in the script, the moments at which each character appears in the script under the content arrangement mode, and the appearance state of each character at each moment; and adjusts the appearance of the actor's video image based on the determined appearance state of the actor at each moment in the second film and television work and the feature attributes corresponding to the actor.
The device further includes:
The second training module 405 trains the production model through pre-collected sample feature attributes corresponding to the sample film and television elements and the standard film and television works corresponding to the sample film and television elements.
The first training module 404 and the second training module 405 may also be implemented as modules for training the models to be trained (including the analysis model and the production model).
Based on the above method for producing film and television works, this specification further provides a device for producing film and television works, as shown in FIG. 5. The device includes one or more memories and one or more processors; the memories store programs configured to be executed by the one or more processors to perform the following steps:
Analyze the movie and television elements input by the user through a pre-trained analysis model to determine the characteristic attributes corresponding to the movie and television elements;
Filtering video materials that match the characteristic attributes from each of the previously collected video materials;
The screened video material is arranged through a pre-trained production model and the feature attributes to obtain a film and television work.
In one or more embodiments of this specification, the film and television elements input by a user may be analyzed through a pre-trained analysis model to determine the feature attributes corresponding to the film and television elements; video materials matching these feature attributes are then filtered from the pre-collected video materials; and the filtered video materials are arranged through the pre-trained production model and the determined feature attributes to obtain a film and television work.
As can be seen from the above method, users can complete the production of film and television works from their self-selected film and television elements through pre-trained analysis and production models. This not only greatly reduces the production cost of film and television works but also improves production efficiency. Moreover, because the produced work is based on the film and television elements selected by the user, it can well meet the user's needs and thereby improve, to a certain extent, the user's viewing experience.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method or process). With the development of technology, however, improvements to many method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers can program to "integrate" a digital system onto a PLD themselves, without asking a chip manufacturer to design and produce a dedicated integrated circuit chip. Moreover, today, instead of making integrated circuit chips manually, this programming is mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original source code must likewise be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language). At present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can be easily obtained simply by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable code (such as software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable code, it is entirely possible to achieve the same functions by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for implementing various functions can be regarded as structures within the hardware component, or even as both software modules for implementing the method and structures within the hardware component.
The system, device, module, or unit described in the foregoing embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above device is described by dividing its functions into various units. Of course, when implementing this specification, the functions of the units may be implemented in the same piece or multiple pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, and optical memory) containing computer-usable code.
This specification is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to one or more embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
Memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprising," "including," or any other variation thereof are intended to cover non-exclusive inclusion, so that a process, method, product, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product, or device. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, product, or device that includes it.
This specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. One or more embodiments of this specification may also be practiced in a distributed computing environment, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.
Each embodiment in this specification is described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other. Each embodiment focuses on the differences from other embodiments. In particular, for the system embodiment, since it is basically similar to the method embodiment, the description is relatively simple. For the relevant part, refer to the description of the method embodiment.
The specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve the desired result. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The above descriptions are only one or more embodiments of this specification and are not intended to limit it. Various modifications and variations of one or more embodiments of this specification will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of one or more embodiments of this specification shall be included in the scope of the claims of this specification.

401‧‧‧analysis module

402‧‧‧screening module

403‧‧‧orchestration module

404‧‧‧first training module

405‧‧‧second training module

The drawings described here are provided for a further understanding of this specification and constitute a part of it. The schematic embodiments of this specification and their descriptions are used to explain the specification and do not constitute an improper limitation on it. In the drawings:

FIG. 1 is a schematic diagram of producing a film and television work by combining the determined feature attributes, provided in this specification;

FIG. 2 is a schematic diagram of the production process of a film and television work provided in this specification;

FIG. 3 is a schematic diagram of a server adjusting the video image of each actor, provided in this specification;

FIG. 4 is a schematic diagram of an apparatus for producing a film and television work provided in this specification;

FIG. 5 is a schematic diagram of a device for producing a film and television work provided in this specification.

Claims (17)

1. A method for producing a film and television work, comprising: analyzing, through a pre-trained analysis model, film and television elements input by a user to determine feature attributes corresponding to the film and television elements; filtering, from pre-collected video materials, video materials that match the feature attributes; and arranging the filtered video materials through a pre-trained production model and the feature attributes to obtain a film and television work.

2. The method according to claim 1, wherein training the analysis model comprises: training the analysis model through pre-collected sample film and television elements and labeled feature attributes corresponding to the sample film and television elements.

3. The method according to claim 1, wherein the film and television elements input by the user comprise: a script selected by the user and film and television candidates selected by the user; the film and television candidates comprise at least one of a director selected by the user and at least one actor selected by the user; and the analysis model comprises a first analysis model and a second analysis model.
4. The method according to claim 3, wherein analyzing the film and television elements input by the user through the pre-trained analysis model to determine the feature attributes corresponding to the film and television elements comprises: when the film and television element is the script selected by the user, analyzing the script through the first analysis model to determine the feature attributes corresponding to the script; when the film and television element is the director selected by the user, determining the director's film and television works and analyzing them through the second analysis model to determine the feature attributes corresponding to the director; and when the film and television element is the at least one actor selected by the user, determining, for each actor selected by the user, that actor's film and television works and analyzing them through the second analysis model to determine the feature attributes corresponding to that actor.
5. The method according to claim 4, wherein arranging the screened video materials by the pre-trained production model and the feature attributes to obtain a film and television work comprises: determining a content arrangement mode of the script by the production model, the feature attributes corresponding to the script, and the feature attributes corresponding to the director; arranging the screened video materials according to the content arrangement mode of the script to obtain a first film and television work; adding determined video images of the at least one actor to the first film and television work according to the content arrangement mode of the script to obtain a second film and television work; and, for a video image of each actor in the second film and television work, adjusting the video image of that actor according to the content arrangement mode of the script and the determined feature attributes corresponding to that actor, and taking the adjusted second film and television work as the produced film and television work.
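The staged arrangement above (content arrangement mode, then a first work from the screened materials, then a second work with actor images overlaid) can be illustrated with a toy sketch; the "reverse" mode and the overlay naming are invented for illustration only:

```python
# Sketch of the arrangement stages; a real production model would learn
# the arrangement mode rather than select it by name.
def arrange(materials: list[str], mode: str) -> list[str]:
    """First work: screened materials ordered under the arrangement mode."""
    return list(reversed(materials)) if mode == "reverse" else list(materials)

def overlay_actors(first_work: list[str], actor_images: list[str]) -> list[str]:
    """Second work: actor video images added onto the arranged materials."""
    return first_work + [f"overlay:{img}" for img in actor_images]
```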
6. The method according to claim 5, wherein the feature attributes corresponding to the script comprise moments at which each character in the script appears; and adding the determined video images of the at least one actor to the first film and television work according to the content arrangement mode of the script to obtain the second film and television work comprises: for each actor, determining the character corresponding to that actor in the script according to a determined correspondence between the at least one actor selected by the user and the characters in the script; determining, by the production model, moments at which the at least one actor appears in the script under the content arrangement mode, according to the determined moments at which each character in the script appears and the characters corresponding to the at least one actor in the script; and adding the video images of the at least one actor to the first film and television work according to the determined moments at which the at least one actor appears in the script under the content arrangement mode, to obtain the second film and television work.
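The derivation above, from per-character appearance moments plus an actor-to-character correspondence to per-actor appearance moments, can be sketched as follows. Modelling the content arrangement mode as a simple time offset is an assumption made for illustration:

```python
# Sketch: compute when each actor appears, given when each character
# appears and which actor plays which character.
def actor_moments(character_moments: dict[str, list[float]],
                  actor_to_character: dict[str, str],
                  arrangement_offset: float = 0.0) -> dict[str, list[float]]:
    """Map each actor to the moments of their character, shifted by the
    (assumed) time offset introduced by the content arrangement mode."""
    result: dict[str, list[float]] = {}
    for actor, character in actor_to_character.items():
        moments = character_moments.get(character, [])
        result[actor] = [t + arrangement_offset for t in moments]
    return result
```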
7. The method according to claim 6, wherein the feature attributes corresponding to the script further comprise an appearance state of each character in the script at each moment; and, for the video image of each actor in the second film and television work, adjusting the video image of that actor according to the content arrangement mode of the script and the determined feature attributes corresponding to that actor comprises: for each actor, determining the appearance state of that actor at each moment in the second film and television work according to the determined character corresponding to that actor in the script, the moments at which each character appears in the script under the content arrangement mode, and the appearance state of each character in the script at each moment; and adjusting the appearance of that actor's video image according to the determined appearance state of that actor at each moment in the second film and television work and the feature attributes corresponding to that actor. 8. The method according to claim 1, wherein training the production model comprises: training the production model with pre-labeled sample feature attributes corresponding to sample film and television elements and collected standard film and television works corresponding to the sample film and television elements.
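The appearance adjustment described above can be illustrated as moving each frame of an actor's video image toward the appearance state the script prescribes at that moment. `Frame`, the single `brightness` property, and the halfway adjustment rule are all illustrative stand-ins, not the patent's method:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    moment: float
    brightness: float  # stand-in for any appearance property

def adjust_frames(frames: list[Frame],
                  target_state: dict[float, float]) -> list[Frame]:
    """Move each frame's appearance halfway toward the scripted target
    state at that moment; frames with no target are left unchanged."""
    adjusted = []
    for f in frames:
        target = target_state.get(f.moment, f.brightness)
        adjusted.append(Frame(f.moment, (f.brightness + target) / 2))
    return adjusted
```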
9. An apparatus for producing a film and television work, comprising: an analysis module configured to analyze, by a pre-trained analysis model, a film and television element input by a user to determine feature attributes corresponding to the film and television element; a screening module configured to screen, from pre-collected video materials, video materials that match the determined feature attributes; and an arrangement module configured to arrange the screened video materials by a pre-trained production model and the determined feature attributes, to obtain a film and television work. 10. The apparatus according to claim 9, further comprising: a first training module configured to train the analysis model with pre-collected sample film and television elements and labeled feature attributes corresponding to the sample film and television elements. 11. The apparatus according to claim 9, wherein the film and television element input by the user comprises: a script selected by the user and a film and television candidate selected by the user, the film and television candidate comprising at least one of a director selected by the user and at least one actor selected by the user; and the analysis model comprises a first analysis model and a second analysis model.
12. The apparatus according to claim 11, wherein the analysis module is configured to: when the film and television element is the script selected by the user, analyze the script by the first analysis model to determine feature attributes corresponding to the script; when the film and television element is the director selected by the user, determine film and television works of the director, and analyze the film and television works of the director by the second analysis model to determine feature attributes corresponding to the director; and when the film and television element is the at least one actor selected by the user, for each actor selected by the user, determine film and television works of that actor, and analyze the film and television works of that actor by the second analysis model to determine feature attributes corresponding to that actor.
13. The apparatus according to claim 12, wherein the arrangement module is configured to: determine a content arrangement mode of the script by the production model, the feature attributes corresponding to the script, and the feature attributes corresponding to the director; arrange the screened video materials according to the content arrangement mode of the script to obtain a first film and television work; add determined video images of the at least one actor to the first film and television work according to the content arrangement mode of the script to obtain a second film and television work; and, for a video image of each actor in the second film and television work, adjust the video image of that actor according to the content arrangement mode of the script and the determined feature attributes corresponding to that actor, and take the adjusted second film and television work as the produced film and television work.
14. The apparatus according to claim 13, wherein the feature attributes corresponding to the script comprise moments at which each character in the script appears; and the arrangement module is configured to: for each actor, determine the character corresponding to that actor in the script according to a determined correspondence between the at least one actor selected by the user and the characters in the script; determine, by the production model, moments at which the at least one actor appears in the script under the content arrangement mode, according to the determined moments at which each character in the script appears and the characters corresponding to the at least one actor in the script; and add the video images of the at least one actor to the first film and television work according to the determined moments at which the at least one actor appears in the script under the content arrangement mode, to obtain the second film and television work.
15. The apparatus according to claim 14, wherein the feature attributes corresponding to the script further comprise an appearance state of each character in the script at each moment; and the arrangement module is configured to: for each actor, determine the appearance state of that actor at each moment in the second film and television work according to the determined character corresponding to that actor in the script, the moments at which each character appears in the script under the content arrangement mode, and the appearance state of each character in the script at each moment; and adjust the appearance of that actor's video image according to the determined appearance state of that actor at each moment in the second film and television work and the feature attributes corresponding to that actor. 16. The apparatus according to claim 9, further comprising: a second training module configured to train the production model with pre-labeled sample feature attributes corresponding to sample film and television elements and collected standard film and television works corresponding to the sample film and television elements.
17. A device for producing a film and television work, the device comprising one or more memories and one or more processors, wherein the one or more memories store a program configured to be executed by the one or more processors to perform the following steps: analyzing, by a pre-trained analysis model, a film and television element input by a user to determine feature attributes corresponding to the film and television element; screening, from pre-collected video materials, video materials that match the determined feature attributes; and arranging the screened video materials by a pre-trained production model together with the determined feature attributes, to obtain a film and television work.
TW107147329A 2018-03-09 2018-12-27 Method, device and equipment for making film and television works TWI713965B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810192606.8 2018-03-09
CN201810192606.8A CN108549655A (en) 2018-03-09 2018-03-09 A kind of production method of films and television programs, device and equipment

Publications (2)

Publication Number Publication Date
TW201939322A true TW201939322A (en) 2019-10-01
TWI713965B TWI713965B (en) 2020-12-21

Family

ID=63515984

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107147329A TWI713965B (en) 2018-03-09 2018-12-27 Method, device and equipment for making film and television works

Country Status (3)

Country Link
CN (1) CN108549655A (en)
TW (1) TWI713965B (en)
WO (1) WO2019169979A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110121107A (en) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 Video material collection method and device
CN108549655A (en) * 2018-03-09 2018-09-18 阿里巴巴集团控股有限公司 A kind of production method of films and television programs, device and equipment
CN111193957A (en) * 2018-11-14 2020-05-22 技嘉科技股份有限公司 Method for analyzing performer film and method for increasing performance effect
CN109886418A (en) * 2019-03-12 2019-06-14 深圳微品致远信息科技有限公司 A kind of method, system and storage medium intelligently generating Design Works based on machine learning
CN112866798B (en) * 2020-12-31 2023-05-05 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
CN112801861A (en) * 2021-01-29 2021-05-14 恒安嘉新(北京)科技股份公司 Method, device and equipment for manufacturing film and television works and storage medium
CN113727039B (en) * 2021-07-29 2022-12-27 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN114885212B (en) * 2022-05-16 2024-02-23 北京三快在线科技有限公司 Video generation method and device, storage medium and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200719352A (en) * 2005-11-11 2007-05-16 Best Wise Internat Computing Co Ltd Multimedia scenario template generating system and method thereof
EP2656627A1 (en) * 2010-12-22 2013-10-30 Thomson Licensing Method and system for providing media recommendations
CN102750366B (en) * 2012-06-18 2015-05-27 海信集团有限公司 Video search system and method based on natural interactive import and video search server
CN103488848A (en) * 2013-10-07 2014-01-01 仇瑞华 Synthesis production method of parameterized film template and finished film
US9858090B2 (en) * 2015-06-02 2018-01-02 International Business Machines Corporation Generating customized on-demand videos from automated test scripts
CN105868176A (en) * 2016-03-02 2016-08-17 北京同尘世纪科技有限公司 Text based video synthesis method and system
CN106021485B (en) * 2016-05-19 2019-05-14 中国传媒大学 A kind of polynary attribute cinematic data visualization system
US10642893B2 (en) * 2016-09-05 2020-05-05 Google Llc Generating theme-based videos
CN107067450A (en) * 2017-04-21 2017-08-18 福建中金在线信息科技有限公司 The preparation method and device of a kind of video
CN107679103B (en) * 2017-09-08 2020-08-04 口碑(上海)信息技术有限公司 Attribute analysis method and system for entity
CN107566907B (en) * 2017-09-20 2019-08-30 Oppo广东移动通信有限公司 Video clipping method, device, storage medium and terminal
CN108549655A (en) * 2018-03-09 2018-09-18 阿里巴巴集团控股有限公司 A kind of production method of films and television programs, device and equipment

Also Published As

Publication number Publication date
TWI713965B (en) 2020-12-21
CN108549655A (en) 2018-09-18
WO2019169979A1 (en) 2019-09-12

Similar Documents

Publication Publication Date Title
TWI713965B (en) Method, device and equipment for making film and television works
JP7252362B2 (en) Method for automatically editing video and portable terminal
US9870798B2 (en) Interactive real-time video editor and recorder
Davenport et al. Cinematic primitives for multimedia
US9798464B2 (en) Computing device
Zhao et al. The interplay of (semiotic) technologies and genre: the case of the selfie
US10541000B1 (en) User input-based video summarization
JP5432617B2 (en) Animation production method and apparatus
Yu et al. A deep ranking model for spatio-temporal highlight detection from a 360◦ video
US20150365600A1 (en) Composing real-time processed video content with a mobile device
Christiansen Adobe after effects CC visual effects and compositing studio techniques
KR20210082232A (en) Real-time video special effects systems and methods
WO2018050021A1 (en) Virtual reality scene adjustment method and apparatus, and storage medium
WO2021063096A1 (en) Video synthesis method, apparatus, electronic device, and storage medium
KR20210041057A (en) Technology to capture and edit dynamic depth images
US20200104030A1 (en) User interface elements for content selection in 360 video narrative presentations
CN110677707A (en) Interactive video generation method, generation device, equipment and readable medium
CN113704513B (en) Model training method, information display method and device
Kumar et al. Zooming on all actors: Automatic focus+ context split screen video generation
Cohen Database Documentary: From Authorship to Authoring in Remediated/Remixed Documentary
Du et al. Research on special effects of film and television movies based on computer virtual production VR technology
CN113286181A (en) Data display method and device
WO2022193931A1 (en) Virtual reality device and media resource playback method
CN111367598B (en) Method and device for processing action instruction, electronic equipment and computer readable storage medium
US20230326161A1 (en) Data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product