TW202123178A - Method for realizing lens splitting effect, device and related products thereof - Google Patents

Method for realizing lens splitting effect, device and related products thereof

Info

Publication number
TW202123178A
Authority
TW
Taiwan
Prior art keywords
image
real
dimensional virtual
virtual
model
Prior art date
Application number
TW109116665A
Other languages
Chinese (zh)
Other versions
TWI752502B (en)
Inventor
劉文韜
鄭佳宇
黃展鵬
李佳樺
Original Assignee
中國商深圳市商湯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中國商深圳市商湯科技有限公司
Publication of TW202123178A publication Critical patent/TW202123178A/en
Application granted granted Critical
Publication of TWI752502B publication Critical patent/TWI752502B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/205 3D [Three Dimensional] animation driven by audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Abstract

The embodiments of the present application disclose a method for realizing a lens splitting (split-mirror) effect, a device, and related products. The method includes: acquiring a three-dimensional virtual model; and rendering the three-dimensional virtual model from at least two different lens angles to obtain virtual images corresponding to the at least two different lens angles respectively.

Description

Method, device and related products for realizing a split-mirror effect

This application is filed on the basis of, and claims priority to, Chinese patent application No. 201911225211.4 filed on December 3, 2019, the entire content of which is incorporated herein by reference. This application relates to the field of virtual technology, and in particular to a method and device for realizing a split-mirror effect, and related products.

In recent years, "virtual characters" have appeared frequently in daily life, for example the well-known virtual idols "Hatsune Miku" and "Luo Tianyi" in the music field, or virtual hosts in live news broadcasts. Because a virtual character can act in place of a real person in the online world, and users can configure the character's appearance and styling as they wish, virtual characters have gradually become a medium of communication between people.

At present, virtual characters on the Internet are generally generated with motion-capture technology: the captured images of a real person are analyzed by image recognition, and the person's movements and facial expressions are retargeted onto the virtual character, so that the virtual character can reproduce the real person's movements and expressions.

The embodiments of the present application disclose a method and device for realizing a split-mirror effect, and related products.

In a first aspect, an embodiment of the present application provides a method for realizing a split-mirror effect, including: acquiring a three-dimensional virtual model; and rendering the three-dimensional virtual model from at least two different lens angles to obtain virtual images corresponding to the at least two different lens angles respectively.

By acquiring a three-dimensional virtual model and rendering it from at least two different lens angles, the above method obtains virtual images corresponding to the different lens angles, so that the user can view the virtual model from multiple perspectives and enjoy a richer visual experience.
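As a rough illustration of rendering the same model from two lens angles, the sketch below projects one vertex of a toy model through two pinhole cameras at different positions and yaw angles. The camera poses, the focal length, and the `look_at_projection` helper are all invented for this example and are not part of the application.

```python
import math

def look_at_projection(point, cam_pos, yaw_deg, focal=1.0):
    """Project a 3D point into a camera located at cam_pos and rotated
    yaw_deg about the vertical axis (simple pinhole model, no clipping)."""
    yaw = math.radians(yaw_deg)
    # translate the point into camera coordinates
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # rotate about the Y (up) axis
    xc = math.cos(yaw) * x - math.sin(yaw) * z
    zc = math.sin(yaw) * x + math.cos(yaw) * z
    # perspective divide onto the image plane
    return (focal * xc / zc, focal * y / zc)

# A toy "three-dimensional virtual model": a single vertex of the character.
vertex = (0.5, 1.0, 5.0)

front_view = look_at_projection(vertex, cam_pos=(0.0, 0.0, 0.0), yaw_deg=0.0)
side_view = look_at_projection(vertex, cam_pos=(2.0, 0.0, 0.0), yaw_deg=20.0)

print(front_view)  # image coordinates under the first lens angle
print(side_view)   # image coordinates under the second lens angle
```

Two different lens angles thus yield two different virtual images of the same model, which is the basic effect the method relies on.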

In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model situated in a three-dimensional virtual scene model. Before the three-dimensional virtual model is acquired, the above method further includes: acquiring a real image, where the real image includes a real person image; performing feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person; and generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model corresponds to the action information of the real person.

It can be seen that by extracting features from the captured real person images and generating a three-dimensional virtual model from them, the three-dimensional virtual character model can reproduce the facial expressions and body movements of the real person. Viewers can learn the real person's expressions and movements simply by watching the virtual images corresponding to the model, enabling more flexible interaction between the audience and the live-streaming host.

In some optional embodiments of the present application, acquiring the real image includes: acquiring a video stream, and obtaining at least two frames of real images from at least two frames of the video stream. Correspondingly, performing feature extraction on the real person image to obtain feature information includes: performing feature extraction on each frame of the real person image to obtain corresponding feature information.

It can be seen that the three-dimensional virtual model can change in real time according to the multiple frames of real images collected, so that the user can watch the dynamic changes of the model from different lens angles.

In some optional embodiments of the present application, the real image further includes a real scene image, and the three-dimensional virtual model further includes a three-dimensional virtual scene model. Before the three-dimensional virtual model is acquired, the above method further includes: constructing the three-dimensional virtual scene model according to the real scene image.

It can be seen that the above method can also use real scene images to construct the three-dimensional virtual scene model, which offers more choices of virtual scenes than being limited to a fixed set of preset ones.

In some optional embodiments of the present application, acquiring the at least two different lens angles includes: obtaining the at least two different lens angles according to at least two frames of real images.

It can be seen that each frame of real image corresponds to one lens angle, and multiple frames correspond to multiple lens angles. Therefore at least two different lens angles can be obtained from at least two frames of real images and used to render the three-dimensional virtual model, providing the user with a rich visual experience.

In some optional embodiments of the present application, acquiring the at least two different lens angles includes: obtaining the at least two different lens angles according to the action information corresponding to each of the at least two frames of real images.

It can be seen that determining the lens angle from the action information of the real person in the real image allows the corresponding action of the three-dimensional virtual character model to be shown enlarged in the image, so the user can recognize the real person's action by watching the virtual image, improving interactivity and fun.
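A minimal, purely illustrative way to derive a lens angle from per-frame action information is a lookup like the one below. The action labels and shot parameters are hypothetical, since the application does not specify how this mapping is implemented.

```python
def choose_lens_angle(action_info):
    """Map a frame's action information to a lens angle (hypothetical rule:
    upper-body actions get a close-up, full-body actions get a wide shot)."""
    upper_body = {"hands_to_chest", "wave", "nod"}
    full_body = {"lift_left_foot", "jump", "spin"}
    if action_info in upper_body:
        return {"shot": "close_up", "distance": 1.5}
    if action_info in full_body:
        return {"shot": "wide", "distance": 4.0}
    return {"shot": "medium", "distance": 2.5}

# One lens angle per frame of action information, as in the embodiment:
actions = ["hands_to_chest", "lift_left_foot"]
angles = [choose_lens_angle(a) for a in actions]
print(angles)
```

Two frames with different action information thus produce two different lens angles, matching the embodiment above.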

In some optional embodiments of the present application, acquiring the at least two different lens angles includes: acquiring background music; determining a time set corresponding to the background music, where the time set includes at least two time periods; and acquiring the lens angle corresponding to each time period in the time set.

It can be seen that by analyzing the background music and determining its corresponding time set, multiple different lens angles can be obtained. This increases the diversity of lens angles and gives the user a richer visual experience.
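The time-set idea can be sketched as a schedule that assigns one lens angle to each time period of the background music. The segment boundaries and angle names below are made up for illustration; the application does not prescribe how the periods are chosen.

```python
def lens_angle_schedule(time_set, angle_table):
    """Given the background music's time set (a list of (start, end) pairs
    in seconds) and a table of available lens angles, assign one angle
    to each time period, cycling through the table."""
    return [
        {"start": s, "end": e, "angle": angle_table[i % len(angle_table)]}
        for i, (s, e) in enumerate(time_set)
    ]

def angle_at(schedule, t):
    """Look up the lens angle to use at playback time t."""
    for seg in schedule:
        if seg["start"] <= t < seg["end"]:
            return seg["angle"]
    return None

# Time set for a 30-second track: verse, chorus, outro (made-up boundaries).
time_set = [(0.0, 12.0), (12.0, 24.0), (24.0, 30.0)]
schedule = lens_angle_schedule(time_set, ["front", "side", "overhead"])
print(angle_at(schedule, 5.0))   # lens angle during the first period
print(angle_at(schedule, 15.0))  # lens angle during the second period
```

At render time, each frame of the virtual image would be produced with the angle returned by `angle_at` for the current playback position.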

In some optional embodiments of the present application, the at least two different lens angles include a first lens angle and a second lens angle. Rendering the three-dimensional virtual model from the at least two different lens angles to obtain the corresponding virtual images includes: rendering the three-dimensional virtual model from the first lens angle to obtain a first virtual image; rendering the three-dimensional virtual model from the second lens angle to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.

It can be seen that rendering the three-dimensional virtual model from the first lens angle and the second lens angle respectively lets the user view the model from both angles, providing a rich visual experience.

In some optional embodiments of the present application, rendering the three-dimensional virtual model from the second lens angle to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.

It can be seen that by translating or rotating the three-dimensional virtual model under the first lens angle, the three-dimensional virtual model under the second lens angle, and hence the second virtual image, can be obtained quickly and accurately.
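A sketch of this variant: instead of moving the camera, the model's vertices are rotated or translated, and viewing the transformed model from the fixed first camera is equivalent to viewing the original model from a second lens angle. The vertex data, angles, and helper names are assumptions for illustration.

```python
import math

def rotate_y(vertices, deg):
    """Rotate model vertices about the vertical (Y) axis by deg degrees."""
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in vertices]

def translate(vertices, offset):
    """Shift all model vertices by a constant offset."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

# Toy model under the first lens angle: two vertices.
model = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Rotate 90 degrees and push the model 2 units away from the camera to
# obtain the model as it appears under the second lens angle.
second_view_model = translate(rotate_y(model, 90.0), (0.0, 0.0, 2.0))
print(second_view_model)
```

Rendering `second_view_model` with the unchanged first camera then yields the second virtual image.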

In some optional embodiments of the present application, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image transitions smoothly to the second virtual image, where a is a positive integer.

It can be seen that inserting a frames of virtual images between the first virtual image and the second virtual image lets the viewer see the entire transition from the first virtual image to the second, rather than just the two images themselves, so the viewer can adapt to the visual change caused by switching from the first virtual image to the second.
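Assuming the two lens angles differ only in camera yaw, inserting the a intermediate frames might look like the sketch below. Linear interpolation is just one possible choice here; the application only requires that a frames be inserted so the switch is smooth.

```python
def insert_transition_frames(angle1, angle2, a):
    """Insert `a` evenly spaced intermediate lens angles between two camera
    yaw angles so the first virtual image switches smoothly to the second."""
    assert a >= 1
    step = (angle2 - angle1) / (a + 1)
    return [angle1 + step * k for k in range(1, a + 1)]

# First angle 0 degrees, second angle 90 degrees, a = 4 inserted frames.
sequence = [0.0] + insert_transition_frames(0.0, 90.0, 4) + [90.0]
print(sequence)  # [0.0, 18.0, 36.0, 54.0, 72.0, 90.0]
```

Rendering the model once per angle in `sequence` yields the displayed image sequence, with the first and second virtual images as its endpoints.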

In some optional embodiments of the present application, the method further includes: performing beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each of the multiple beats corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat set to the three-dimensional virtual model.

It can be seen that adding corresponding stage special effects to the virtual scene in which the virtual character model is located, according to the beat information of the music, presents different stage effects to the audience and enhances the viewing experience.
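A toy version of beat detection on a per-frame energy envelope, with one stage special effect assigned per detected beat. The thresholding rule, energy values, and effect names are all invented for illustration; a real system would use a proper onset-detection algorithm.

```python
def detect_beats(energies, threshold=1.5):
    """Very small beat detector: a frame is a beat when its energy exceeds
    `threshold` times the mean energy and is a local maximum."""
    mean = sum(energies) / len(energies)
    beats = []
    for i in range(1, len(energies) - 1):
        if (energies[i] > threshold * mean
                and energies[i] >= energies[i - 1]
                and energies[i] > energies[i + 1]):
            beats.append(i)
    return beats

EFFECTS = ["spotlight", "confetti", "strobe"]

# Toy per-frame energy envelope of the background music (made-up numbers).
energies = [0.2, 0.3, 2.0, 0.4, 0.2, 2.5, 0.3, 0.2, 1.9, 0.3]
beat_set = detect_beats(energies)

# One stage special effect per beat in the beat set.
stage_effects = {b: EFFECTS[k % len(EFFECTS)] for k, b in enumerate(beat_set)}
print(beat_set)       # frame indices of detected beats
print(stage_effects)  # stage special effect triggered at each beat
```

Each entry of `stage_effects` would then be applied to the three-dimensional virtual model at the corresponding beat time.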

In a second aspect, an embodiment of the present application further provides a device for realizing a split-mirror effect, including: an acquisition unit configured to acquire a three-dimensional virtual model; and a split-mirror unit configured to render the three-dimensional virtual model from at least two different lens angles to obtain virtual images corresponding to the at least two different lens angles respectively.

In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model situated in a three-dimensional virtual scene model, and the device further includes a feature extraction unit and a three-dimensional virtual model generation unit. The acquisition unit is further configured to acquire a real image before the three-dimensional virtual model is acquired, where the real image includes a real person image. The feature extraction unit is configured to perform feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person. The three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model corresponds to the action information of the real person.

In some optional embodiments of the present application, the acquisition unit is configured to acquire a video stream and obtain at least two frames of real images from at least two frames of the video stream, and the feature extraction unit is configured to perform feature extraction on each frame of the real person image to obtain corresponding feature information.

In some optional embodiments of the present application, the real image further includes a real scene image, and the three-dimensional virtual model further includes a three-dimensional virtual scene model. The device further includes a three-dimensional virtual scene construction unit configured to construct the three-dimensional virtual scene model according to the real scene image before the acquisition unit acquires the three-dimensional virtual model.

In some optional embodiments of the present application, the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to at least two frames of real images.

In some optional embodiments of the present application, the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to the action information corresponding to each of the at least two frames of real images.

In some optional embodiments of the present application, the device further includes a lens angle acquisition unit configured to acquire background music, determine a time set corresponding to the background music, where the time set includes at least two time periods, and acquire the lens angle corresponding to each time period in the time set.

In some optional embodiments of the present application, the at least two different lens angles include a first lens angle and a second lens angle, and the split-mirror unit is configured to render the three-dimensional virtual model from the first lens angle to obtain a first virtual image, render the three-dimensional virtual model from the second lens angle to obtain a second virtual image, and display an image sequence formed from the first virtual image and the second virtual image.

In some optional embodiments of the present application, the split-mirror unit is configured to translate or rotate the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.

In some optional embodiments of the present application, the split-mirror unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image transitions smoothly to the second virtual image, where a is a positive integer.

In some optional embodiments of the present application, the device further includes a beat detection unit and a stage special effect generation unit. The beat detection unit is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each of the multiple beats corresponds to a stage special effect. The stage special effect generation unit is configured to add the target stage special effects corresponding to the beat set to the three-dimensional virtual model.

In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface and a memory. The memory is used to store instructions, the processor is used to execute the instructions, and the communication interface is used to communicate with other devices under the control of the processor. When the processor executes the instructions, the electronic device implements any one of the methods of the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program is executed by hardware to implement any one of the methods of the first aspect.

In a fifth aspect, an embodiment of the present application provides a computer program product. When the computer program product is read and executed by a computer, any one of the methods of the first aspect is executed.

The terms used in the embodiments of the present application are intended only to explain the specific embodiments, not to limit the application.

The method, device and related products for realizing a split-mirror effect provided by the embodiments of the present application can be applied in many fields such as social networking, entertainment and education, for example for virtual live streaming, social interaction in virtual communities, holding virtual concerts, or classroom teaching. To facilitate understanding of the embodiments, their specific application scenarios are described in detail below, taking virtual live streaming as an example.

Virtual live streaming is a way of broadcasting on a live-streaming platform with a virtual character in place of a real host. Because virtual characters are highly expressive and well suited to the communication environment of social networks, the virtual live-streaming industry has developed rapidly. During a virtual live stream, computer technologies such as facial expression capture, motion capture and sound processing are typically used to map the real host's facial expressions and movements onto a virtual character model, enabling the audience to interact with the virtual host on video or social networking websites.

To save live-streaming and post-production costs, users usually broadcast directly with terminal devices such as mobile phones and tablets. Referring to Figure 1, a schematic diagram of a specific application scenario provided by an embodiment of the present application: in the live-streaming process shown in Figure 1, a photographing device 110 shoots the real host and transmits the captured real person images over the network to a server 120 for processing, and the server 120 then sends the generated virtual images to user terminals 130, so that different viewers watch the entire live stream through their corresponding user terminals 130.

It can be seen that although virtual live streaming in this way is inexpensive, only a single photographing device 110 shoots the real host, so the posture of the generated virtual host depends on the relative position between the photographing device 110 and the real host. In other words, the audience can only see the virtual character from one specific angle of view, determined by that relative position, which makes the resulting live-stream effect unsatisfactory. For example, virtual live streams often suffer from stiff movements of the virtual host, unsmooth shot transitions, or monotonous shots, causing visual fatigue and preventing the audience from feeling immersed.

Similar problems arise in other application scenarios. In live online teaching, for instance, the teacher conveys knowledge to students through online lessons, but such lessons are often tedious: the teacher in the video cannot immediately gauge the students' grasp of the material, and the students see only the teacher or the lecture slides from a single angle of view, which easily tires them; compared with on-site teaching, the effect of video teaching is greatly reduced. As another example, when a concert cannot be held as scheduled because of weather or venue restrictions, the singer can hold a virtual concert in a recording studio to simulate a real concert. Reproducing a real concert usually requires setting up multiple cameras to shoot the singer, which is complicated to operate and costly; moreover, shooting with multiple cameras yields footage from multiple lenses, and the switching between those shots may not be smooth, so users cannot adapt to the visual difference caused by switching between different shots.

To solve the problems of a single camera angle and unsmooth shot switching that frequently occur in the above scenarios, an embodiment of the present application provides a method for realizing a split-mirror effect. The method generates a three-dimensional virtual model from captured real images, derives multiple different lens angles from the background music or from the real person's actions, and then renders the three-dimensional virtual model from these lens angles to obtain the corresponding virtual images, thereby simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the audience's viewing experience. In addition, the method analyzes the beats of the background music and, according to the beat information, adds corresponding stage special effects to the three-dimensional virtual model, presenting different stage effects to the audience and further enhancing the viewing experience.

The specific process of generating a three-dimensional virtual model from a real image in an embodiment of the present application is explained first.

In the embodiment of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model situated in a three-dimensional virtual scene. Taking Figure 2 as an example, which shows a schematic diagram of one possible three-dimensional virtual model, the three-dimensional virtual character model can be seen raising both hands to its chest. To highlight the comparison, the upper left corner of Figure 2 also shows the real image captured by the split-mirror effect realization device, in which the real person likewise raises both hands to the chest. In other words, the three-dimensional virtual character model's action is consistent with that of the real person. It should be understood that Figure 2 is merely an example. In practical applications, the real image captured by the split-mirror effect realization device may be three-dimensional or two-dimensional; the number of people in the real image may be one or several; and the real person's action may be raising both hands to the chest, lifting the left foot, or any other action. Correspondingly, the number of three-dimensional virtual character models in the generated three-dimensional virtual model may be one or several, and their actions may likewise be raising both hands to the chest, lifting the left foot, and so on; no specific limitation is made here.

In the embodiment of the present application, the split-mirror effect realization device shoots the real person to obtain multiple frames of real images I1, I2, …, In, and performs feature extraction on I1, I2, …, In in chronological order, thereby obtaining multiple corresponding three-dimensional virtual models M1, M2, …, Mn, where n is a positive integer and there is a one-to-one correspondence between the real images I1, I2, …, In and the three-dimensional virtual models M1, M2, …, Mn; that is, one frame of real image is used to generate one three-dimensional virtual model. Taking the generation of the three-dimensional virtual model Mi from the real image Ii as an example, a three-dimensional virtual model can be obtained as follows:

Step 1: The split-mirror effect realization device acquires the real image Ii.

The real image Ii includes a real person image, where i is a positive integer and 1 ≤ i ≤ n.

Step 2: The split-mirror effect realization device performs feature extraction on the real person image in the real image Ii to obtain feature information, where the feature information includes the action information of the real person.

Acquiring the real image includes: acquiring a video stream and obtaining at least two frames of real images from at least two frames of the video stream. Correspondingly, performing feature extraction on the real person image to obtain feature information includes: performing feature extraction on each frame of the real person image separately to obtain the corresponding feature information.

It should be understood that the feature information is used to control the posture of the three-dimensional virtual character model. The action information in the feature information includes facial expression features and body movement features. Facial expression features describe the person's emotional states, such as happiness, sadness, surprise, fear, anger, or disgust; body movement features describe the real person's movement states, such as raising the left hand, lifting the right foot, or jumping. In addition, the feature information may also include person information, which comprises multiple human-body key points of the real person and their corresponding position information. The human-body key points include facial key points and skeletal key points, and the position features include the position coordinates of the real person's human-body key points.
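As a rough illustration, the feature information described above can be organized as a simple container holding expression features, body-movement features, and labeled key points with coordinates. The structure and field names below are assumptions for exposition, not a format specified by the application:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical container for the feature information described above.
# Key points are stored by serial-number label, matching the labeling
# scheme used later to drive the virtual character model.
@dataclass
class FeatureInfo:
    expression_features: List[str] = field(default_factory=list)  # e.g. "happy"
    movement_features: List[str] = field(default_factory=list)    # e.g. "hands_to_chest"
    # label -> (x, y, z) position coordinates of a human-body key point
    keypoints: Dict[int, Tuple[float, float, float]] = field(default_factory=dict)

info = FeatureInfo(
    expression_features=["happy"],
    movement_features=["hands_to_chest"],
    keypoints={1: (0.31, 1.20, 0.05), 2: (0.28, 1.05, 0.04)},  # 1: left wrist, 2: left arm
)
```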

Optionally, the split-mirror effect realization device performs image segmentation on the real image Ii to extract the real person image from Ii; it then performs key-point detection on the extracted real person image to obtain the above-mentioned human-body key points and their position information. The human-body key points include facial key points and skeletal key points, and may be located in the head, neck, shoulder, spine, waist, hip, wrist, arm, knee, leg, ankle, and sole regions of the human body. By analyzing the facial key points and their position information, the facial expression features of the real person in the real image Ii are obtained; by analyzing the skeletal key points and their position information, the skeletal features of the real person in Ii are obtained, from which the body movement features of the real person are derived.

Optionally, the split-mirror effect realization device inputs the real image Ii into a neural network for feature extraction; after computation through multiple convolutional layers, the above-mentioned human-body key-point information is extracted. The neural network is obtained through extensive training and may be a Convolutional Neural Network (CNN), a Back Propagation Neural Network (BPNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), and so on; no specific limitation is imposed here. It should be noted that the extraction of the above human-body features may be performed within a single neural network or across different neural networks. For example, the split-mirror effect realization device may use a CNN to extract facial key points to obtain facial expression features, and may use a BPNN to extract skeletal key points to obtain skeletal features and body movement features; this is not specifically limited here. In addition, the above examples of feature information used to drive the three-dimensional virtual character model are merely illustrative; other feature information may also be included in practical applications, without specific limitation here.

Step 3: The split-mirror effect realization device generates the three-dimensional virtual character model in the three-dimensional virtual model Mi according to the feature information, so that the three-dimensional virtual character model in Mi corresponds to the action information of the real person in the real image Ii.

Optionally, the split-mirror effect realization device uses the above feature information to establish a mapping from the human-body key points of the real person to the human-body key points of the virtual character model, and then controls the expression and posture of the virtual character model according to this mapping, so that the facial expressions and body movements of the virtual character model are consistent with those of the real person.

Optionally, the split-mirror effect realization device labels each human-body key point of the real person with a serial number to obtain labeling information for the real person's key points, where key points and labels correspond one to one; the human-body key points of the virtual character model are then labeled according to the real person's labeling information. For example, if the real person's left wrist is labeled No. 1, the three-dimensional virtual character model's left wrist is also labeled No. 1; if the real person's left arm is labeled No. 2, the model's left arm is also labeled No. 2, and so on. The device then matches the real person's key-point labels with those of the three-dimensional virtual character model and maps the position information of the real person's key points onto the corresponding key points of the model, so that the three-dimensional virtual character model can reproduce the facial expressions and body movements of the real person.
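The label-matching scheme above can be sketched as follows. The dictionary layout and coordinate values are illustrative assumptions, not the application's actual data format:

```python
from typing import Dict, Tuple

Point = Tuple[float, float, float]

def retarget_keypoints(real_kps: Dict[int, Point],
                       model_kps: Dict[int, Point]) -> Dict[int, Point]:
    """Map real-person keypoint positions onto the virtual model's keypoints.

    Both dictionaries use the same serial-number labels (No. 1 = left wrist,
    No. 2 = left arm, ...), so matching is a simple label lookup.
    """
    driven = dict(model_kps)
    for label, pos in real_kps.items():
        if label in driven:  # only labels present on the model are driven
            driven[label] = pos
    return driven

real = {1: (0.3, 1.2, 0.0), 2: (0.3, 1.0, 0.0)}   # detected on the real person
model = {1: (0.0, 0.0, 0.0), 2: (0.0, 0.0, 0.0)}  # rest pose of the virtual model
posed = retarget_keypoints(real, model)
```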

In the embodiments of the present application, the real image Ii further includes a real scene image, and the three-dimensional virtual model Mi further includes a three-dimensional virtual scene model. The above method of generating the three-dimensional virtual model Mi from the real image Ii further includes: constructing the three-dimensional virtual scene in Mi according to the real scene image in Ii.

Optionally, the split-mirror effect realization device first performs image segmentation on the real image Ii to obtain the real scene image in Ii; it then extracts scene features from the real scene image, for example the position, shape, and size features of objects in the real scene; finally, it constructs the three-dimensional virtual scene model in Mi according to these scene features, so that the three-dimensional virtual scene model in Mi can closely reproduce the real scene image in Ii.

For brevity, the above only describes the process of generating the three-dimensional virtual model Mi from the real image Ii. In fact, the generation process of any other three-dimensional virtual model is similar to that of Mi and is not repeated here.

It should be noted that the three-dimensional virtual scene model in the three-dimensional virtual model may be constructed from the real scene image in the real image, or may be a user-defined three-dimensional virtual scene model; likewise, the facial appearance of the three-dimensional virtual character model may be constructed from the facial features of the real person image in the real image, or may be user-defined. No specific limitation is imposed here.

Next, the process of rendering each of the three-dimensional virtual models M1, M2, …, Mn from multiple different lens angles, so that viewers can see virtual images of the same three-dimensional virtual model under different lens angles, is described in detail. Taking the three-dimensional virtual model Mi generated from the real image Ii as an example, k different lenses are used to render Mi, yielding virtual images of Mi under k different lens angles, where k is an integer greater than 1, thereby achieving the split-mirror switching effect. The specific process can be described as follows:

As shown in Figure 3, which is a schematic flowchart of a method for realizing the split-mirror effect provided by an embodiment of the present application, the method of this embodiment includes, but is not limited to, the following steps:

S101: The split-mirror effect realization device acquires a three-dimensional virtual model.

In the embodiments of the present application, the three-dimensional virtual model is used to simulate a real person and a real scene; it includes a three-dimensional virtual character model situated in a three-dimensional virtual scene model, and it is generated from a real image. The three-dimensional virtual character model is generated from the real person image included in the real image; it simulates the real person in the real image, and its actions correspond to those of the real person. The three-dimensional virtual scene model may be constructed from the real scene image included in the real image, or it may be a preset three-dimensional virtual scene model. When the three-dimensional virtual scene model is constructed from the real scene image, it can be used to simulate the real scene in the real image.

S102: The split-mirror effect realization device acquires at least two different lens angles.

In the embodiments of the present application, a lens angle indicates the position of the camera relative to the photographed object. For example, a camera directly above an object captures a top view of the object. Suppose the lens angle of a camera directly above the object is denoted V; then the image captured by that camera shows the object under lens angle V, that is, a top view of the object.

In some optional embodiments, acquiring at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images. The real images may be captured by real cameras, and a real camera may occupy any of multiple positions relative to the real person; multiple real images captured by multiple real cameras at different positions show the real person under multiple different lens angles.

In other optional embodiments, acquiring at least two different lens angles includes: obtaining at least two different lens angles according to the action information respectively corresponding to at least two frames of real images. The action information includes the body movements and facial expressions of the real person in the real images. There are many kinds of body movements, for example one or more of raising the right hand, lifting the left foot, or jumping; likewise there are many kinds of facial expressions, for example one or more of smiling, crying, or anger. The examples of body movements and facial expressions in this embodiment are not limited to the above description.

In the embodiments of the present application, one action or a combination of actions corresponds to one lens angle. For example, when the real person smiles and jumps, the corresponding lens angle is V1; when the real person only jumps, the corresponding lens angle may be V1 or V2, and so on; similarly, when the real person only smiles, the corresponding lens angle may be V1, V2, V3, and so on.

In still other optional embodiments, acquiring at least two different lens angles includes: acquiring background music; determining a time collection corresponding to the background music, where the time collection includes at least two time periods; and acquiring the lens angle corresponding to each time period in the time collection. The real image may be one or more frames of a video stream, and the video stream includes image information and background music information, where one frame of image corresponds to one frame of music. The background music information includes the background music and its corresponding time collection; the time collection includes at least two time periods, and each time period corresponds to one lens angle.

S103: The split-mirror effect realization device renders the three-dimensional virtual model from at least two different lens angles to obtain the virtual images respectively corresponding to the at least two different lens angles.

In the embodiments of the present application, the above at least two different lens angles include a first lens angle and a second lens angle, and rendering the three-dimensional virtual model from at least two different lens angles to obtain the corresponding virtual images includes: S1031, rendering the three-dimensional virtual model from the first lens angle to obtain a first virtual image; S1032, rendering the three-dimensional virtual model from the second lens angle to obtain a second virtual image.

In the embodiments of the present application, rendering the three-dimensional virtual model from the second lens angle to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle, and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.
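Obtaining the second lens angle by rotating the model can be sketched with a plain rotation about the vertical axis, applied to every vertex of the model; this is a generic geometric illustration under assumed conventions, not the application's rendering pipeline:

```python
import math
from typing import Tuple

def rotate_about_y(p: Tuple[float, float, float],
                   degrees: float) -> Tuple[float, float, float]:
    """Rotate one model vertex about the vertical (y) axis.

    Rotating every vertex of the model under the first lens angle yields the
    model as it would appear under the second lens angle.
    """
    t = math.radians(degrees)
    x, y, z = p
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))
```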

It should be understood that the first lens angle may be obtained from the real image, from the action information corresponding to the real image, or from the time collection corresponding to the background music; likewise, the second lens angle may be obtained from the real image, from the action information corresponding to the real image, or from the time collection corresponding to the background music. This is not specifically limited in the embodiments of the present application.

S1033: Display the image sequence formed from the first virtual image and the second virtual image.

In the embodiments of the present application, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.

Optionally, a frames of virtual images are inserted between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image. The insertion time points of the a virtual images are t1, t2, …, ta; the slope of the curve formed by these time points first monotonically decreases and then monotonically increases, and a is a positive integer.

For example, Figure 4 shows a schematic diagram of an interpolation curve. As shown in Figure 4, the split-mirror effect realization device obtains the first virtual image at minute 1 and the second virtual image at minute 2; the first virtual image presents the front view of the three-dimensional virtual model and the second virtual image presents its left view. To present viewers with a smooth shot-switching picture, the device inserts multiple time points between minute 1 and minute 2 and inserts one frame of virtual image at each time point: for example, virtual image P1 at minute 1.4, virtual image P2 at minute 1.65, virtual image P3 at minute 1.8, and virtual image P4 at minute 1.85. Virtual image P1 presents the effect of rotating the three-dimensional virtual model 30 degrees to the left, P2 presents a rotation of 50 degrees to the left, and P3 and P4 both present a rotation of 90 degrees to the left. Viewers can thus watch the whole process of the three-dimensional virtual model gradually turning from the front view to the left view, rather than just two single images (the front view and the left view of the model), and can adapt to the parallax change when switching from the front view to the left view.
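The insertion-time curve can be sketched with an assumed timing function whose slope first monotonically decreases and then monotonically increases, as stated above. The particular function g below (g(0)=0, g(1)=1, g'(u) = 1.5·(2u−1)² + 0.5) is one of many curves with that property and is an illustrative choice, not taken from the application:

```python
from typing import List

def insertion_times(t1: float, t2: float, a: int) -> List[float]:
    """Times of the a inserted frames between t1 and t2, endpoints included.

    The endpoints are the times of the first and second virtual images; the
    a interior entries are the insertion time points.  The timing curve's
    slope first monotonically decreases, then monotonically increases.
    """
    def g(u: float) -> float:
        # Antiderivative of g'(u) = 1.5*(2u-1)**2 + 0.5, with g(0)=0, g(1)=1.
        return ((2 * u - 1) ** 3 + 1) / 4 + u / 2
    return [t1 + (t2 - t1) * g(j / (a + 1)) for j in range(a + 2)]
```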

In some optional embodiments of the present application, the use of stage special effects mentioned in the embodiments to render the three-dimensional virtual model, thereby presenting different stage effects to viewers, is described in detail and specifically includes the following steps:

Step 1: The split-mirror effect realization device performs beat detection on the background music to obtain the beat collection of the background music.

The beat collection includes multiple beats, and each beat corresponds to one stage special effect. Optionally, the split-mirror effect realization device may use shaders and particle effects to render the three-dimensional virtual model. For example, shaders can be used to realize the rotating-spotlight effect behind the virtual stage and the sound-wave effect of the virtual stage itself, while particle effects are used to add visual effects such as sparks, falling leaves, and meteors to the three-dimensional virtual model.
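Selecting one stage special effect per detected beat can be sketched as a table lookup; the beat representation (an integer index) and the effect names are assumptions for illustration:

```python
from typing import Dict, List

# Illustrative beat-to-effect table: each beat selects one stage effect,
# e.g. a shader-driven spotlight sweep or a particle effect such as sparks.
BEAT_EFFECTS: Dict[int, str] = {0: "spotlight_rotate", 1: "sound_wave", 2: "sparks"}

def effects_for_beats(beats: List[int]) -> List[str]:
    """Return the target stage effect for every beat in the beat collection."""
    return [BEAT_EFFECTS[b % len(BEAT_EFFECTS)] for b in beats]
```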

Step 2: The split-mirror effect realization device adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.

The above method generates a three-dimensional virtual model from the captured real images and performs the corresponding lens-angle switching according to the captured real images, the background music, and the actions of the real person, thereby simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewing experience. In addition, the method analyzes the beats of the background music and adds the corresponding stage special effects to the virtual images according to the beat information, presenting different stage effects to viewers and further enhancing the viewing experience.

To facilitate understanding of the method for realizing the split-mirror effect involved in the above embodiments, the method of the embodiments of the present application is described in detail below through an example.

Please refer to Figure 5, which shows a schematic flowchart of a specific embodiment.

S201: The split-mirror effect realization device acquires a real image and background music, and obtains the first lens angle according to the real image. When the background music plays, the real person moves to the background music, and a real camera shoots the real person to obtain the real image.

S202: The split-mirror effect realization device generates a three-dimensional virtual model according to the real image, where the three-dimensional virtual model is obtained by the device at a first moment.

S203: The split-mirror effect realization device performs beat detection on the background music to obtain the beat collection of the background music, and adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.

S204: The split-mirror effect realization device renders the three-dimensional virtual model from the first lens angle to obtain the first virtual image corresponding to the first lens angle.

S205: The split-mirror effect realization device determines the time collection corresponding to the background music.

The time collection includes multiple time periods, and each time period corresponds to one lens angle.

S206: The split-mirror effect realization device judges whether the action information is contained in an action information library; it executes S207–S209 if the library does not contain the action information, and executes S210–S212 if the library does contain it. The action information is the action information of the real person in the real image; the action information library includes multiple pieces of action information, each of which corresponds to one lens angle.

S207: The split-mirror effect realization device determines, according to the time collection, the second lens angle corresponding to the time period in which the first moment falls.

S208: The split-mirror effect realization device renders the three-dimensional virtual model from the second lens angle to obtain the second virtual image corresponding to the second lens angle.

S209: The split-mirror effect realization device displays the image sequence formed from the first virtual image and the second virtual image.

S210: The split-mirror effect realization device determines, according to the action information, the third lens angle corresponding to the action information.

S211: The split-mirror effect realization device renders the three-dimensional virtual model from the third lens angle to obtain the third virtual image corresponding to the third lens angle.

S212: The split-mirror effect realization device displays the image sequence formed from the first virtual image and the third virtual image.
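The branch at S206 — prefer the action lens angle when the action information library contains the detected action, otherwise fall back to the lens angle of the time period containing the first moment — can be sketched as follows. The data layout (a dict for the library, (start, end, view) tuples for the time collection) is an assumption for illustration:

```python
from typing import Dict, List, Optional, Tuple

def choose_view(action: str,
                moment: float,
                action_views: Dict[str, str],
                time_views: List[Tuple[float, float, str]]) -> Optional[str]:
    """Pick the next lens angle as in steps S206-S211.

    action_views: action -> lens angle (the action information library).
    time_views:   (start, end, lens angle) periods of the music's time collection.
    """
    if action in action_views:            # S210: the action lens angle wins
        return action_views[action]
    for start, end, view in time_views:   # S207: fall back to the time period
        if start <= moment < end:         # containing the first moment
            return view
    return None

library = {"jump": "V3"}
schedule = [(0.0, 1.5, "V1"), (1.5, 3.0, "V2")]
```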

According to the method described in Figure 5, an embodiment of the present application provides a schematic diagram of a split-mirror rule as shown in Figure 6. By performing split-mirror processing and stage special-effect processing on the virtual images according to the rule shown in Figure 6, the effect diagrams of the four virtual images shown in Figures 7A–7D can be obtained.

如第7A圖所示，在第1分鐘時，分鏡效果實現裝置在鏡頭視角V1下對真實人物進行拍攝，得到真實圖像[Figure 02_image031]（如第7A圖左上角所示），然後根據真實圖像[Figure 02_image031]得到三維虛擬模型[Figure 02_image033]。分鏡效果實現裝置對背景音樂進行節拍檢測，確定第1分鐘對應的節拍為B1，並根據節拍B1得到第1分鐘時的舞臺特效[Figure 02_image035]，然後將舞臺特效[Figure 02_image035]添加到三維虛擬模型[Figure 02_image033]中；分鏡效果實現裝置根據預設的鏡頭腳本確定第1分鐘對應的鏡頭視角（簡稱為時間鏡頭視角）為V1；分鏡效果實現裝置檢測到真實人物在第1分鐘的動作是雙手舉到胸前，並且雙手舉到胸前這個動作不在動作資訊庫中，即不存在動作對應的鏡頭視角（簡稱為動作鏡頭視角），則此時分鏡效果實現裝置上顯示如第7A圖所示的虛擬圖像，其中，第7A圖所示虛擬圖像和真實圖像[Figure 02_image031]的鏡頭視角相同。As shown in Fig. 7A, at the 1st minute, the split-mirror effect realization device shoots the real person under the camera angle of view V1 to obtain a real image [Figure 02_image031] (shown in the upper-left corner of Fig. 7A), and then obtains a three-dimensional virtual model [Figure 02_image033] from the real image [Figure 02_image031]. The device performs beat detection on the background music, determines that the beat corresponding to the 1st minute is B1, obtains the stage special effect [Figure 02_image035] for the 1st minute according to the beat B1, and adds the stage special effect [Figure 02_image035] to the three-dimensional virtual model [Figure 02_image033]. The device determines, according to the preset shot script, that the camera angle of view corresponding to the 1st minute (the time camera view for short) is V1. The device detects that the real person's action in the 1st minute is raising both hands to the chest, and this action is not in the action information library, i.e. there is no camera angle of view corresponding to the action (the action camera view for short); the device therefore displays the virtual image shown in Fig. 7A, whose camera angle of view is the same as that of the real image [Figure 02_image031].

如第7B圖所示，在第2分鐘時，分鏡效果實現裝置在鏡頭視角V1下對真實人物進行拍攝，得到真實圖像[Figure 02_image037]（如第7B圖左上角所示），然後根據真實圖像[Figure 02_image037]得到三維虛擬模型[Figure 02_image039]。分鏡效果實現裝置對背景音樂進行節拍檢測，確定第2分鐘對應的節拍B2，並根據節拍B2得到第2分鐘時的舞臺特效[Figure 02_image041]，然後在三維虛擬模型[Figure 02_image039]中添加舞臺特效[Figure 02_image041]；分鏡效果實現裝置根據預設的鏡頭腳本確定第2分鐘對應的鏡頭視角（簡稱為時間鏡頭視角）為V2；分鏡效果實現裝置檢測到真實人物在第2分鐘的動作是向上抬起雙手，並且向上抬起雙手這個動作不在動作資訊庫中，即不存在動作對應的鏡頭視角（簡稱為動作鏡頭視角），則此時分鏡效果實現裝置將三維虛擬模型[Figure 02_image039]向左上方旋轉得到鏡頭視角為V2對應的虛擬圖像。可以看出，當在三維虛擬模型[Figure 02_image039]中添加舞臺特效[Figure 02_image041]時，第7B圖示出的虛擬圖像比第7A圖示出的虛擬圖像中增添了燈光效果。As shown in Fig. 7B, at the 2nd minute, the split-mirror effect realization device shoots the real person under the camera angle of view V1 to obtain a real image [Figure 02_image037] (shown in the upper-left corner of Fig. 7B), and then obtains a three-dimensional virtual model [Figure 02_image039] from the real image [Figure 02_image037]. The device performs beat detection on the background music, determines the beat B2 corresponding to the 2nd minute, obtains the stage special effect [Figure 02_image041] for the 2nd minute according to the beat B2, and adds the stage special effect [Figure 02_image041] to the three-dimensional virtual model [Figure 02_image039]. The device determines, according to the preset shot script, that the camera angle of view corresponding to the 2nd minute (the time camera view for short) is V2. The device detects that the real person's action in the 2nd minute is raising both hands upward, and this action is not in the action information library, i.e. there is no camera angle of view corresponding to the action (the action camera view for short); the device therefore rotates the three-dimensional virtual model [Figure 02_image039] toward the upper left to obtain the virtual image corresponding to the camera angle of view V2. It can be seen that, with the stage special effect [Figure 02_image041] added to the three-dimensional virtual model [Figure 02_image039], the virtual image shown in Fig. 7B has a lighting effect that the virtual image shown in Fig. 7A does not.

如第7C圖所示，在第3分鐘時，分鏡效果實現裝置在鏡頭視角V1下對真實人物進行拍攝，得到真實圖像[Figure 02_image043]（如第7C圖左上角所示），然後根據真實圖像[Figure 02_image045]得到三維虛擬模型[Figure 02_image046]。分鏡效果實現裝置對背景音樂進行節拍檢測，確定第3分鐘對應的節拍B3，並根據節拍B3得到第3分鐘時的舞臺特效[Figure 02_image048]，然後在三維虛擬模型[Figure 02_image050]中添加舞臺特效[Figure 02_image051]；分鏡效果實現裝置根據預設的鏡頭腳本確定第3分鐘對應的鏡頭視角（簡稱為時間鏡頭視角）為V2；分鏡效果實現裝置檢測到真實人物在第3分鐘的動作是向上抬起左腳，並且抬起左腳這個動作對應的鏡頭視角（簡稱為動作鏡頭視角）為V3，則此時分鏡效果實現裝置將三維虛擬模型[Figure 02_image052]向左旋轉得到鏡頭視角為V3對應的虛擬圖像。可以看出，當在三維虛擬模型[Figure 02_image050]中添加舞臺特效[Figure 02_image051]時，第7C圖示出的虛擬圖像與第7B圖示出的虛擬圖像中的燈光效果不同，並且第7C圖示出的虛擬圖像中呈現有音效波浪效果。As shown in Fig. 7C, at the 3rd minute, the split-mirror effect realization device shoots the real person under the camera angle of view V1 to obtain a real image [Figure 02_image043] (shown in the upper-left corner of Fig. 7C), and then obtains a three-dimensional virtual model [Figure 02_image046] from the real image [Figure 02_image045]. The device performs beat detection on the background music, determines the beat B3 corresponding to the 3rd minute, obtains the stage special effect [Figure 02_image048] for the 3rd minute according to the beat B3, and adds the stage special effect [Figure 02_image051] to the three-dimensional virtual model [Figure 02_image050]. The device determines, according to the preset shot script, that the camera angle of view corresponding to the 3rd minute (the time camera view for short) is V2. The device detects that the real person's action in the 3rd minute is raising the left foot, and the camera angle of view corresponding to this action (the action camera view for short) is V3; the device therefore rotates the three-dimensional virtual model [Figure 02_image052] to the left to obtain the virtual image corresponding to the camera angle of view V3. It can be seen that, with the stage special effect [Figure 02_image051] added to the three-dimensional virtual model [Figure 02_image050], the lighting effect of the virtual image shown in Fig. 7C differs from that of Fig. 7B, and the virtual image shown in Fig. 7C also presents a sound-wave effect.

如第7D圖所示，在第4分鐘時，分鏡效果實現裝置在鏡頭視角V1下對真實人物進行拍攝，得到真實圖像[Figure 02_image053]（如第7D圖左上角所示），然後根據真實圖像[Figure 02_image053]得到三維虛擬模型[Figure 02_image055]。分鏡效果實現裝置對背景音樂進行節拍檢測，確定第4分鐘對應的節拍B4，並根據節拍B4得到第4分鐘時的舞臺特效[Figure 02_image048]，然後在三維虛擬模型[Figure 02_image055]中添加舞臺特效[Figure 02_image057]；分鏡效果實現裝置根據預設的鏡頭腳本確定第4分鐘對應的鏡頭視角（簡稱為時間鏡頭視角）為V4；分鏡效果實現裝置檢測到真實人物在第4分鐘的動作是站立，並且站立這個動作對應的鏡頭視角（簡稱為動作鏡頭視角）為V4，則此時分鏡效果實現裝置將三維虛擬模型[Figure 02_image055]向右旋轉得到鏡頭視角為V4對應的虛擬圖像。可以看出，當在三維虛擬模型[Figure 02_image055]中添加舞臺特效[Figure 02_image057]時，第7D圖示出的虛擬圖像與第7C圖示出的虛擬圖像中舞臺效果不相同。As shown in Fig. 7D, at the 4th minute, the split-mirror effect realization device shoots the real person under the camera angle of view V1 to obtain a real image [Figure 02_image053] (shown in the upper-left corner of Fig. 7D), and then obtains a three-dimensional virtual model [Figure 02_image055] from the real image [Figure 02_image053]. The device performs beat detection on the background music, determines the beat B4 corresponding to the 4th minute, obtains the stage special effect [Figure 02_image048] for the 4th minute according to the beat B4, and adds the stage special effect [Figure 02_image057] to the three-dimensional virtual model [Figure 02_image055]. The device determines, according to the preset shot script, that the camera angle of view corresponding to the 4th minute (the time camera view for short) is V4. The device detects that the real person's action in the 4th minute is standing, and the camera angle of view corresponding to this action (the action camera view for short) is V4; the device therefore rotates the three-dimensional virtual model [Figure 02_image055] to the right to obtain the virtual image corresponding to the camera angle of view V4. It can be seen that, with the stage special effect [Figure 02_image057] added to the three-dimensional virtual model [Figure 02_image055], the stage effect of the virtual image shown in Fig. 7D differs from that of the virtual image shown in Fig. 7C.

本申請實施例提供的分鏡效果實現裝置可以是軟體裝置也可以是硬體裝置，當分鏡效果實現裝置為軟體裝置時，分鏡效果實現裝置可以單獨部署在雲環境下的一個計算設備上，也可以單獨部署在一個終端設備上，當分鏡效果實現裝置是硬體設備時，分鏡效果實現裝置內部的單元模組也可以有多種劃分，各個模組可以是軟體模組也可以是硬體模組，也可以部分是軟體模組部分是硬體模組，本申請不對其進行限制。第8圖為一種示例性的劃分方式，如第8圖所示，第8圖是本申請實施例提供的一種分鏡效果的實現裝置800，包括：獲取單元810，配置為獲取三維虛擬模型；分鏡單元820，配置為以至少兩個不同的鏡頭視角對三維虛擬模型進行渲染，得到至少兩個不同的鏡頭視角分別對應的虛擬圖像。The split-mirror effect realization device provided by the embodiments of the present application can be a software device or a hardware device. When it is a software device, it can be separately deployed on a computing device in a cloud environment, or separately deployed on a terminal device. When it is a hardware device, its internal unit modules can also be divided in various ways; each module can be a software module or a hardware module, or partly software and partly hardware, which this application does not limit. Fig. 8 shows an exemplary division. As shown in Fig. 8, an apparatus 800 for implementing a split-mirror effect provided by an embodiment of the present application includes: an obtaining unit 810 configured to obtain a three-dimensional virtual model; and a mirror-splitting unit 820 configured to render the three-dimensional virtual model with at least two different camera angles of view to obtain virtual images respectively corresponding to the at least two different camera angles of view.

在本申請一些可選實施例中,三維虛擬模型包括處於三維虛擬場景模型中的三維虛擬人物模型,上述裝置還包括:特徵提取單元830和三維虛擬模型生成單元840;其中,In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the above-mentioned apparatus further includes: a feature extraction unit 830 and a three-dimensional virtual model generation unit 840; wherein,

獲取單元810,還配置為在獲取三維虛擬模型之前,獲取真實圖像,其中,真實圖像包括真實人物圖像;特徵提取單元830,配置為對真實人物圖像進行特徵提取得到特徵資訊,其中,特徵資訊包括真實人物的動作資訊;三維虛擬模型生成單元840,配置為根據特徵資訊生成三維虛擬模型,以使得三維虛擬模型中的三維虛擬人物模型的動作資訊與真實人物的動作資訊對應。The acquiring unit 810 is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real person image; the feature extraction unit 830 is configured to perform feature extraction on the real person image to obtain feature information, wherein The feature information includes the action information of the real person; the 3D virtual model generating unit 840 is configured to generate a 3D virtual model according to the feature information, so that the action information of the 3D virtual person model in the 3D virtual model corresponds to the action information of the real person.
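As a rough illustration of the feature-extraction-to-model path described above, the extracted action information can be thought of as a set of joint parameters copied onto the virtual character. The joint names and the dataclass interface below are purely illustrative assumptions; the embodiment does not specify this data layout.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualCharacterModel:
    # joint name -> rotation angle in degrees; starts in a neutral pose
    joints: dict = field(default_factory=dict)

    def apply_action_info(self, action_info):
        # copy each extracted joint angle onto the matching model joint, so the
        # virtual character's action corresponds to the real person's action
        for joint, angle in action_info.items():
            self.joints[joint] = float(angle)

# action information extracted from one frame of the real person image (illustrative)
action_info = {"left_elbow": 45.0, "right_knee": 10.0}
model = VirtualCharacterModel()
model.apply_action_info(action_info)
```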

在本申請一些可選實施例中，獲取單元，配置為獲取影片流，根據影片流中的至少兩幀圖像得到至少兩幀真實圖像；特徵提取單元830，配置為分別對每一幀真實人物圖像進行特徵提取得到對應的特徵資訊。In some optional embodiments of the present application, the obtaining unit is configured to obtain a video stream and obtain at least two frames of real images according to at least two frames of images in the video stream; the feature extraction unit 830 is configured to perform feature extraction on each frame of the real person image respectively to obtain the corresponding feature information.

在本申請一些可選實施例中,真實圖像還包括真實場景圖像,三維虛擬模型還包括三維虛擬場景模型;上述裝置還包括:三維虛擬場景圖像構建單元850,配置為在獲取單元獲取三維虛擬模型之前,根據真實場景圖像,構建三維虛擬場景圖像。In some optional embodiments of the present application, the real image further includes a real scene image, and the three-dimensional virtual model also includes a three-dimensional virtual scene model; the above-mentioned apparatus further includes: a three-dimensional virtual scene image construction unit 850 configured to acquire at the acquisition unit Before the three-dimensional virtual model, a three-dimensional virtual scene image is constructed according to the real scene image.

在本申請一些可選實施例中,上述裝置還包括鏡頭視角獲取單元860,配置為獲取至少兩個不同的鏡頭視角。具體的,在一些可選實施方式中,鏡頭視角獲取單元860,配置為根據至少兩幀真實圖像,得到至少兩個不同的鏡頭視角。In some optional embodiments of the present application, the above-mentioned device further includes a lens angle acquisition unit 860 configured to obtain at least two different lens angles. Specifically, in some optional embodiments, the lens angle of view acquisition unit 860 is configured to obtain at least two different lens angles according to at least two frames of real images.

在本申請一些可選實施例中,鏡頭視角獲取單元860,配置為根據至少兩幀真實圖像分別對應的動作資訊,得到至少兩個不同的鏡頭視角。In some optional embodiments of the present application, the lens angle acquisition unit 860 is configured to obtain at least two different lens angles according to the action information corresponding to the at least two frames of real images.

在本申請一些可選實施例中，鏡頭視角獲取單元860，配置為獲取背景音樂；確定背景音樂對應的時間合集，其中時間合集包括至少兩個時間段；獲取時間合集中每一個時間段對應的鏡頭視角。In some optional embodiments of the present application, the camera-angle obtaining unit 860 is configured to obtain background music, determine the time collection corresponding to the background music, where the time collection includes at least two time periods, and obtain the camera angle of view corresponding to each time period in the time collection.
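The time collection described above can be sketched as fixed-length segments of the track, each bound to a camera view. The one-minute segment length and the view names are illustrative assumptions chosen to match the example of Figs. 7A-7D, not values fixed by the embodiment.

```python
def build_time_collection(duration, segment_len, views):
    """Split [0, duration) into segments and bind each to a camera angle of view."""
    schedule, start, i = [], 0, 0
    while start < duration:
        end = min(start + segment_len, duration)
        schedule.append((start, end, views[i % len(views)]))  # (period start, end, view)
        start, i = end, i + 1
    return schedule

# a 4-minute track split into one-minute periods, as in the shot script of Fig. 6
time_collection = build_time_collection(240, 60, ["V1", "V2", "V2", "V4"])
```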

在本申請一些可選實施例中，至少兩個不同的鏡頭視角包括第一鏡頭視角和第二鏡頭視角，分鏡單元820，配置為以第一鏡頭視角對三維虛擬模型進行渲染，得到第一虛擬圖像；以第二鏡頭視角對三維虛擬模型進行渲染，得到第二虛擬圖像；展示根據第一虛擬圖像和第二虛擬圖像形成的圖像序列。In some optional embodiments of the present application, the at least two different camera angles of view include a first camera angle of view and a second camera angle of view, and the mirror-splitting unit 820 is configured to render the three-dimensional virtual model with the first camera angle of view to obtain a first virtual image, render the three-dimensional virtual model with the second camera angle of view to obtain a second virtual image, and display the image sequence formed according to the first virtual image and the second virtual image.

在本申請一些可選實施例中，分鏡單元820，配置為將第一鏡頭視角下的三維虛擬模型進行平移或者旋轉，得到第二鏡頭視角下的三維虛擬模型；獲取第二鏡頭視角下的三維虛擬模型對應的第二虛擬圖像。In some optional embodiments of the present application, the mirror-splitting unit 820 is configured to translate or rotate the three-dimensional virtual model under the first camera angle of view to obtain the three-dimensional virtual model under the second camera angle of view, and obtain the second virtual image corresponding to the three-dimensional virtual model under the second camera angle of view.
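Rotating the model to reach the second camera view (equivalently, orbiting a virtual camera around it) can be sketched with a plain rotation about the vertical axis. A real renderer would apply this as part of the model-view matrix; the 90-degree angle below is an illustrative assumption.

```python
import math

def rotate_y(point, degrees):
    """Rotate a 3D point (x, y, z) about the vertical (y) axis."""
    x, y, z = point
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

# rotating every vertex of the model by the same angle yields the model as it
# would appear from the second camera angle of view
vertex = (1.0, 2.0, 0.0)
rotated = rotate_y(vertex, 90.0)   # the +x direction maps onto -z
```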

在本申請一些可選實施例中，分鏡單元820，配置為在第一虛擬圖像和第二虛擬圖像之間插入a幀虛擬圖像，使得第一虛擬圖像平緩切換至第二虛擬圖像，其中，a是正整數。In some optional embodiments of the present application, the mirror-splitting unit 820 is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
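The inserted frames can be sketched as interpolated camera angles between the two views; here the camera yaw is interpolated linearly, though the interpolation curve of Fig. 4 could replace the linear weight with any easing function. Representing a view by a single yaw angle is an illustrative simplification.

```python
def insert_frames(yaw_a, yaw_b, a):
    """Return a intermediate camera yaws strictly between the two views."""
    step = (yaw_b - yaw_a) / (a + 1)
    return [yaw_a + step * k for k in range(1, a + 1)]

# two in-between frames turn a hard 0-to-90-degree cut into 0, 30, 60, 90 degrees
intermediate = insert_frames(0.0, 90.0, 2)
```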

在本申請一些可選實施例中，上述裝置還包括：節拍檢測單元870，配置為對背景音樂進行節拍檢測，得到背景音樂的節拍合集，其中，節拍合集包括多個節拍，多個節拍中的每一個節拍對應一個舞臺特效；舞臺特效生成單元880，配置為將節拍合集對應的目標舞臺特效添加到三維虛擬模型中。In some optional embodiments of the present application, the above-mentioned apparatus further includes: a beat detection unit 870 configured to perform beat detection on the background music to obtain a beat collection of the background music, where the beat collection includes multiple beats and each of the multiple beats corresponds to a stage special effect; and a stage special-effect generation unit 880 configured to add the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
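The beat-to-effect mapping described above can be sketched as a lookup from the detected beat collection to stage special effects added to the model. Real beat detection would come from an audio-analysis library; the beat labels and effect names below are illustrative assumptions.

```python
def effects_for_beats(beat_collection, effect_table):
    """Map each detected beat to its stage special effect, skipping unknown beats."""
    return [effect_table[beat] for beat in beat_collection if beat in effect_table]

# illustrative table matching the beats B1-B4 of Figs. 7A-7D
effect_table = {"B1": "spotlight", "B2": "laser", "B3": "sound_wave", "B4": "confetti"}
stage_effects = effects_for_beats(["B1", "B3", "B4"], effect_table)
```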

上述分鏡效果實現裝置透過根據採集得到的真實圖像生成三維虛擬模型，並根據採集得到的真實圖像、背景音樂以及真實人物的動作得到多個鏡頭視角，並利用多個鏡頭視角對三維虛擬模型進行相應的鏡頭視角切換，從而類比出在虛擬場景中有多個虛擬相機對三維虛擬模型進行拍攝的效果，使得用戶可以看到多個不同鏡頭視角下的三維虛擬模型，提高了觀眾的觀看體驗感。另外，該裝置還透過對背景音樂的節拍進行解析，並根據節拍資訊在三維虛擬模型中添加對應的舞臺特效，為觀眾呈現出不同的舞臺效果，進一步增強了觀眾的直播觀看體驗感。The above-mentioned split-mirror effect realization device generates a three-dimensional virtual model from the captured real images, obtains multiple camera angles of view from the captured real images, the background music, and the real person's actions, and switches the camera angle of view of the three-dimensional virtual model accordingly, thereby simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene, so that users can see the three-dimensional virtual model under multiple different camera angles of view, which improves the audience's viewing experience. In addition, the device also analyzes the beats of the background music and adds corresponding stage special effects to the three-dimensional virtual model according to the beat information, presenting different stage effects to the audience and further enhancing the audience's live-streaming viewing experience.

參見第9圖,本申請實施例提供了電子設備900的結構示意圖,前述中的分鏡效果實現裝置應用於電子設備900中。電子設備900包括:處理器910、通訊介面920以及記憶體930,其中,處理器910、通訊介面920以及記憶體930可透過匯流排940進行耦合。其中,Referring to FIG. 9, an embodiment of the present application provides a schematic structural diagram of an electronic device 900, and the foregoing device for implementing the split-mirror effect is applied to the electronic device 900. The electronic device 900 includes a processor 910, a communication interface 920, and a memory 930. The processor 910, the communication interface 920, and the memory 930 can be coupled through a bus 940. among them,

處理器910可以是中央處理器(Central Processing Unit，CPU)、通用處理器、數位訊號處理器(Digital Signal Processor，DSP)、專用積體電路(Application-Specific Integrated Circuit，ASIC)、現場可程式設計閘陣列(Field Programmable Gate Array，FPGA)或者其他可程式設計邏輯器件(Programmable Logic Device，PLD)、電晶體邏輯器件、硬體部件或者其任意組合。處理器910可以實現或執行結合本申請揭露內容所描述的各種示例性的方法。具體的，處理器910讀取記憶體930中儲存的程式碼，並與通訊介面920配合執行本申請上述實施例中由分鏡效果實現裝置執行的方法的部分或者全部步驟。The processor 910 can be a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another Programmable Logic Device (PLD), a transistor logic device, a hardware component, or any combination thereof. The processor 910 may implement or execute the various exemplary methods described in conjunction with the disclosure of this application. Specifically, the processor 910 reads the program code stored in the memory 930 and cooperates with the communication interface 920 to execute part or all of the steps of the method executed by the split-mirror effect realization device in the above-mentioned embodiments of the present application.

通訊介面920可以為有線介面或無線介面,用於與其他模組或設備進行通訊,有線介面可以是乙太介面、控制器區域網路介面、區域互聯網路(Local Interconnect Network,LIN)以及FlexRay介面,無線介面可以是蜂窩網路介面或使用無線區域網介面等。具體的,上述通訊介面920可以與輸入輸出設備950相連接,輸入輸出設備950可以包括滑鼠、鍵盤、麥克風等其他終端設備。The communication interface 920 can be a wired interface or a wireless interface for communicating with other modules or devices. The wired interface can be an Ethernet interface, a controller area network interface, a local interconnect network (Local Interconnect Network, LIN), and a FlexRay interface , The wireless interface can be a cellular network interface or a wireless LAN interface. Specifically, the aforementioned communication interface 920 may be connected to an input/output device 950, and the input/output device 950 may include other terminal devices such as a mouse, a keyboard, and a microphone.

記憶體930可以包括易失性記憶體,例如隨機存取記憶體(Random Access Memory,RAM);記憶體930也可以包括非易失性記憶體(Non-Volatile Memory),例如唯讀記憶體(Read-Only Memory,ROM)、快閃記憶體、硬碟(Hard Disk Drive,HDD)或固態硬碟(Solid-State Drive,SSD),記憶體930還可以包括上述種類的記憶體的組合。記憶體930可以儲存有程式碼以及程式資料。其中,程式碼由上述分鏡效果實現裝置800中的部分或者全部單元的代碼組成,例如,獲取單元810的代碼、分鏡單元820的代碼、特徵提取單元830的代碼、三維虛擬模型生成單元840的代碼、三維虛擬場景圖像構建單元850的代碼、鏡頭視角獲取單元860的代碼、節拍檢測單元870的代碼以及舞臺特效生成單元880的代碼等等。程式資料由分鏡效果實現裝置800在運行過程中產生的資料,例如,真實圖像資料、三維虛擬模型資料、鏡頭視角資料、背景音樂資料以及虛擬圖像資料等等。The memory 930 may include volatile memory, such as random access memory (Random Access Memory, RAM); the memory 930 may also include non-volatile memory (Non-Volatile Memory), such as read-only memory ( Read-Only Memory, ROM), flash memory, hard disk (HDD) or solid-state drive (Solid-State Drive, SSD), the memory 930 may also include a combination of the foregoing types of memory. The memory 930 can store program codes and program data. Wherein, the program code is composed of the codes of some or all of the units in the above-mentioned mirror effect realization device 800, for example, the code of the acquisition unit 810, the code of the mirror unit 820, the code of the feature extraction unit 830, and the three-dimensional virtual model generation unit 840. The code of the 3D virtual scene image construction unit 850, the code of the lens angle acquisition unit 860, the code of the beat detection unit 870, the code of the stage special effect generation unit 880, and so on. The program data is data generated during the operation of the split-mirror effect realization device 800, such as real image data, three-dimensional virtual model data, lens angle data, background music data, virtual image data, and so on.

匯流排940可以是控制器區域網路(Controller Area Network,CAN)或其他實現車內各個系統或設備之間互連的內部匯流排。匯流排940可以分為位址匯流排、資料匯流排、控制匯流排等。為了便於表示,圖中僅用一條粗線表示,但並不表示僅有一根匯流排或一種類型的匯流排。The bus 940 may be a Controller Area Network (CAN) or other internal bus that implements interconnection between various systems or devices in the vehicle. The bus 940 can be divided into address bus, data bus, control bus and so on. For ease of representation, the figure is only represented by a thick line, but it does not mean that there is only one busbar or one type of busbar.

應當理解,電子設備900可能包含相比於第9圖展示的更多或者更少的組件,或者有不同的元件配置方式。It should be understood that the electronic device 900 may include more or fewer components than those shown in FIG. 9, or may have different component configurations.

本申請實施例還提供了一種電腦可讀儲存介質,上述電腦可讀儲存介質儲存有電腦程式,上述電腦程式被硬體(例如處理器等)執行,以實現上述分鏡效果實現方法中部分或全部步驟。An embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program is executed by hardware (such as a processor, etc.) to realize part or All steps.

本申請實施例還提供了一種電腦程式產品,當上述電腦程式產品在上述分鏡效果實現裝置或者電子設備上運行時,執行上述分鏡效果實現方法的部分或全部步驟。The embodiment of the present application also provides a computer program product, which executes part or all of the steps of the above-mentioned method for realizing the split-mirror effect when the computer program product runs on the above-mentioned device or electronic device for realizing the split-mirror effect.

在上述實施例中,可以全部或部分地透過軟體、硬體、韌體或者其任意組合來實現。當使用軟體實現時,可以全部或部分地以電腦程式產品的形式實現。所述電腦程式產品包括一個或多個電腦指令。在電腦上載入和執行所述電腦程式指令時,全部或部分地產生按照本申請實施例所述的流程或功能。所述電腦可以是通用電腦、專用電腦、電腦網路、或者其他可程式設計裝置。所述電腦指令可以儲存在電腦可讀儲存介質中,或者從一個電腦可讀儲存介質向另一個電腦可讀儲存介質傳輸,例如,所述電腦指令可以從一個網站站點、電腦、伺服器或資料中心透過有線(例如同軸電纜、光纖、數位用戶線路)或無線(例如紅外、無線、微波等)方式向另一個網站站點、電腦、伺服器或資料中心進行傳輸。所述電腦可讀儲存介質可以是電腦能夠存取的任何可用介質或者是包含一個或多個可用介質集成的伺服器、資料中心等資料存放裝置。所述可用介質可以是磁性介質,(例如,軟碟、儲存盤、磁帶)、光介質(例如,DVD)、或者半導體介質(例如SSD)等。在所述實施例中,對各個實施例的描述都各有側重,某個實施例中沒有詳述的部分,可以參見其他實施例的相關描述。In the above-mentioned embodiments, it may be implemented in whole or in part through software, hardware, firmware, or any combination thereof. When implemented by software, it can be implemented in the form of a computer program product in whole or in part. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be from a website, computer, server, or The data center transmits data to another website, computer, server or data center through wired (such as coaxial cable, optical fiber, digital subscriber line) or wireless (such as infrared, wireless, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like integrated with one or more available media. 
The usable medium may be a magnetic medium, (for example, a floppy disk, a storage disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD). In the embodiments, the description of each embodiment has its own focus. For parts that are not described in detail in an embodiment, reference may be made to related descriptions of other embodiments.

在本申請所提供的幾個實施例中,應該理解到,所揭露的裝置,也可以透過其它的方式實現。例如以上所描述的裝置實施例僅是示意性的,例如所述單元的劃分,僅僅為一種邏輯功能劃分,實際實現時可以有另外的劃分方式,例如多個單元或元件可結合或者可以集成到另一個系統,或一些特徵可以忽略或不執行。另一點,所顯示或討論的相互之間的間接耦合或者直接耦合或通訊連接可以是透過一些介面,裝置或單元的間接耦合或通訊連接,可以是電性或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed device can also be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or elements can be combined or integrated into Another system, or some features can be ignored or not implemented. In addition, the shown or discussed indirect coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or units, and may be in electrical or other forms.

所述作為分離部件說明的單元可以是或者也可以不是物理上分開的,作為單元顯示的部件可以是或者也可以不是物理單元,即可以位於一個地方,或者,也可以分佈到多個網路單元上。可以根據實際的需要選擇其中的部分或者全部單元來實現本申請實施例的方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. on. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions in the embodiments of the present application.

另外,在本申請各實施例中的各功能單元可集成在一個處理單元中,也可以是各單元單獨物理存在,也可以是兩個或兩個以上單元集成在一個單元中。所述集成的單元既可以採用硬體的形式實現,也可以採用軟體功能單元的形式實現。In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or software functional unit.

所述集成的單元如果以軟體功能單元的形式實現並作為獨立的產品銷售或使用時,可以儲存在一個電腦可讀取儲存介質中。基於這樣的理解,本申請技術方案本質上或者說對現有技術做出貢獻的部分或者該技術方案的全部或部分可以以軟體產品的形式體現出來,該電腦軟體產品儲存在一個儲存介質中,包括若干指令用以使得一台電腦設備(可為個人電腦、伺服器或者網路設備等)執行本申請各個實施例所述方法的全部或部分步驟。而前述的儲存介質例如可包括:U盤、移動硬碟、唯讀記憶體、隨機存取記憶體、磁碟或光碟等各種可儲存程式碼的介質。If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium. Based on this understanding, the technical solution of this application essentially or the part that contributes to the existing technology or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium may include, for example, various media capable of storing program codes, such as a U disk, a removable hard disk, a read-only memory, a random access memory, a floppy disk, or an optical disk.

以上所述,僅為本申請實施例的可選實施方式,但本申請實施例的保護範圍並不局限於此,任何熟悉本技術領域的技術人員在本申請揭露的技術範圍內,可輕易想到各種等效的修改或替換,這些修改或替換都應涵蓋在本申請的保護範圍之內。因此,本申請實施例的保護範圍應以發明申請專利範圍的保護範圍為準。The above are only optional implementation manners of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any person skilled in the art can easily think of within the technical scope disclosed in the present application. Various equivalent modifications or replacements shall be covered within the protection scope of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the invention application patent.

110:攝影設備 120:伺服器 130:用戶終端 S101,S102,S103:步驟 S201,S202,S203,S204,S205,S206,S207,S208,S209,S210,S211,S212:步驟 800:分鏡效果實現裝置 810:獲取單元 820:分鏡單元 830:特徵提取單元 840:三維虛擬模型生成單元 850:三維虛擬場景圖像構建單元 860:鏡頭視角獲取單元 870:節拍檢測單元 880:舞臺特效生成單元 900:電子設備 910:處理器 920:通訊介面 930:記憶體 940:匯流排 950:輸入輸出設備110: Photography equipment 120: server 130: user terminal S101, S102, S103: steps S201, S202, S203, S204, S205, S206, S207, S208, S209, S210, S211, S212: steps 800: Split-lens effect realization device 810: get unit 820: Splitter unit 830: Feature Extraction Unit 840: 3D virtual model generation unit 850: 3D virtual scene image construction unit 860: Lens angle acquisition unit 870: Beat detection unit 880: stage special effects generation unit 900: electronic equipment 910: processor 920: Communication interface 930: Memory 940: Bus 950: input and output devices

為了更清楚地說明本申請實施例或背景技術中的技術方案，下面將對本申請實施例描述中所需要使用的附圖作簡單地介紹，顯而易見地，下面描述中的附圖是本申請的一些實施例，對於本領域普通技術人員來講，在不付出進步性勞動的前提下，還可以根據這些附圖獲得其他的附圖。
第1圖是本申請實施例提供的一種具體應用場景的示意圖；
第2圖是本申請實施例提供的一種可能的三維虛擬模型的示意圖；
第3圖是本申請實施例提供的一種分鏡效果實現方法的流程示意圖；
第4圖是本申請實施例提供的一種插值曲線的示意圖；
第5圖是本申請實施例提供的一種具體實施例的流程示意圖；
第6圖是本申請實施例提供的一種分鏡規則示意圖；
第7A圖是本申請實施例提供的一種可能的虛擬圖像的效果圖；
第7B圖是本申請實施例提供的一種可能的虛擬圖像的效果圖；
第7C圖是本申請實施例提供的一種可能的虛擬圖像的效果圖；
第7D圖是本申請實施例提供的一種可能的虛擬圖像的效果圖；
第8圖是本申請實施例提供的一種分鏡效果的實現裝置的結構示意圖；
第9圖是本申請實施例提供的一種電子設備的結構示意圖。
In order to more clearly describe the technical solutions in the embodiments of this application or in the background art, the following briefly introduces the drawings needed in the description of the embodiments. Obviously, the drawings in the following description are some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative labor.
Fig. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of a possible three-dimensional virtual model provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of an interpolation curve provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of a specific embodiment provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a mirror-splitting rule provided by an embodiment of the present application;
Fig. 7A is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7B is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7C is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7D is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an apparatus for implementing a split-mirror effect provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.

S101,S102,S103:步驟S101, S102, S103: steps

Claims (13)

一種分鏡效果的實現方法，包括：獲取三維虛擬模型；以至少兩個不同的鏡頭視角對所述三維虛擬模型進行渲染，得到至少兩個不同的鏡頭視角分別對應的虛擬圖像。A method for realizing a split-mirror effect, including: obtaining a three-dimensional virtual model; and rendering the three-dimensional virtual model with at least two different camera angles of view to obtain virtual images respectively corresponding to the at least two different camera angles of view.

根據請求項1所述的方法，其中，所述三維虛擬模型包括處於三維虛擬場景模型中的三維虛擬人物模型，在所述獲取三維虛擬模型之前，所述方法還包括：獲取真實圖像，其中，所述真實圖像包括真實人物圖像；對所述真實人物圖像進行特徵提取得到特徵資訊，其中，所述特徵資訊包括所述真實人物的動作資訊；根據所述特徵資訊生成所述三維虛擬模型，以使得所述三維虛擬模型中的所述三維虛擬人物模型的動作資訊與所述真實人物的動作資訊對應。The method according to claim 1, wherein the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and before the obtaining of the three-dimensional virtual model, the method further includes: obtaining a real image, where the real image includes a real person image; performing feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person; and generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.

根據請求項2所述的方法，其中，所述獲取真實圖像包括：獲取影片流，根據所述影片流中的至少兩幀圖像得到至少兩幀所述真實圖像；所述對所述真實人物圖像進行特徵提取得到特徵資訊，包括：分別對每一幀所述真實人物圖像進行特徵提取得到對應的特徵資訊。The method according to claim 2, wherein the obtaining of the real image includes: obtaining a video stream, and obtaining at least two frames of the real image according to at least two frames of images in the video stream; and the performing of feature extraction on the real person image to obtain the feature information includes: performing feature extraction on each frame of the real person image respectively to obtain the corresponding feature information.
4. The method according to claim 3, wherein the real image further comprises a real scene image, and the three-dimensional virtual model further comprises the three-dimensional virtual scene model; before the acquiring of the three-dimensional virtual model, the method further comprises: constructing the three-dimensional virtual scene model according to the real scene image.

5. The method according to claim 3 or 4, wherein acquiring the at least two different camera angles comprises: obtaining the at least two different camera angles according to the at least two frames of the real image.

6. The method according to claim 3 or 4, wherein acquiring the at least two different camera angles comprises: obtaining the at least two different camera angles according to the motion information respectively corresponding to the at least two frames of the real image.

7. The method according to claim 3 or 4, wherein acquiring the at least two different camera angles comprises: acquiring background music; determining a time set corresponding to the background music, wherein the time set comprises at least two time periods; and acquiring a camera angle corresponding to each time period in the time set.
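Claim 7 drives shot selection from the background music: the music's duration is divided into a time set of at least two periods, and each period carries its own camera angle. As a hedged sketch of that lookup (the segment boundaries, angle labels, and helper names below are illustrative assumptions; the patent does not specify how the time set is produced):

```python
import bisect

def build_shot_schedule(segment_ends, angles):
    """Pair each time period (ending at segment_ends[i], seconds) with a camera angle.

    `segment_ends` must be sorted ascending; one angle label per period.
    """
    assert len(segment_ends) == len(angles)
    return list(segment_ends), list(angles)

def angle_at(schedule, t):
    """Return the camera angle active at playback time t."""
    ends, angles = schedule
    i = bisect.bisect_right(ends, t)       # first period whose end lies past t
    return angles[min(i, len(angles) - 1)]

# Hypothetical 12 s track split into three periods, one lens angle each.
schedule = build_shot_schedule([4.0, 8.0, 12.0], ["front", "side", "overhead"])
print(angle_at(schedule, 1.5))   # front
print(angle_at(schedule, 5.0))   # side
print(angle_at(schedule, 11.9))  # overhead
```

Times past the last boundary clamp to the final angle, a design choice made here only so playback never indexes out of range.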
8. The method according to claim 1, wherein the at least two different camera angles comprise a first camera angle and a second camera angle, and the rendering of the three-dimensional virtual model from at least two different camera angles to obtain virtual images respectively corresponding to the at least two different camera angles comprises: rendering the three-dimensional virtual model from the first camera angle to obtain a first virtual image; rendering the three-dimensional virtual model from the second camera angle to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.

9. The method according to claim 8, wherein the rendering of the three-dimensional virtual model from the second camera angle to obtain the second virtual image comprises: translating or rotating the three-dimensional virtual model under the first camera angle to obtain the three-dimensional virtual model under the second camera angle; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second camera angle.

10. The method according to claim 9, wherein the displaying of the image sequence formed from the first virtual image and the second virtual image comprises: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
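Claim 10's smooth switch — inserting `a` in-between frames along an interpolation curve such as the one Figure 4 illustrates — can be sketched by easing a camera parameter between the two shots. The smoothstep curve below is an assumed stand-in for the patent's unspecified interpolation curve, and interpolating a single azimuth angle is a simplification of a full camera pose:

```python
def ease_in_out(t):
    """Smoothstep interpolation curve: slow at both ends, fast in the middle."""
    return t * t * (3.0 - 2.0 * t)

def intermediate_angles(start_deg, end_deg, a):
    """Camera angles for the `a` frames inserted between two shots (a >= 1)."""
    return [start_deg + (end_deg - start_deg) * ease_in_out(k / (a + 1))
            for k in range(1, a + 1)]

# Three in-between frames while the camera swings from 0 to 90 degrees.
frames = intermediate_angles(0.0, 90.0, 3)
print([round(f, 2) for f in frames])  # [14.06, 45.0, 75.94]
```

Because the curve's slope vanishes at both endpoints, the inserted frames start and end the swing gently rather than snapping between the two camera angles.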
11. The method according to claim 7, further comprising: performing beat detection on the background music to obtain a beat set of the background music, wherein the beat set comprises multiple beats, and each of the multiple beats corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat set to the three-dimensional virtual model.

12. An electronic device, comprising a processor, a communication interface, and a memory, wherein the memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to communicate with other devices under the control of the processor; when executing the instructions, the processor implements the method according to any one of claims 1 to 11.

13. A computer-readable storage medium storing a computer program, wherein the computer program is executed by hardware to implement the method according to any one of claims 1 to 11.
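Claim 11 maps each detected beat of the background music to a stage special effect before adding the effects to the three-dimensional virtual model. A minimal sketch of that mapping step, assuming the beat set has already been produced by a beat detector; the beat times, effect names, and cyclic assignment policy are all illustrative assumptions:

```python
from itertools import cycle

def assign_stage_effects(beat_times, effects):
    """Pair each detected beat (seconds) with a stage special effect, cycling
    through the available effects so every beat gets one."""
    pool = cycle(effects)
    return [(t, next(pool)) for t in beat_times]

# Hypothetical beat set and three illustrative stage effects.
beats = [0.5, 1.0, 1.5, 2.0]
timeline = assign_stage_effects(beats, ["spotlight", "confetti", "strobe"])
print(timeline)
# [(0.5, 'spotlight'), (1.0, 'confetti'), (1.5, 'strobe'), (2.0, 'spotlight')]
```

The resulting (time, effect) timeline is what a renderer would consume to trigger each target stage effect on its beat.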
TW109116665A 2019-12-03 2020-05-20 Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof TWI752502B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911225211.4 2019-12-03
CN201911225211.4A CN111080759B (en) 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product

Publications (2)

Publication Number Publication Date
TW202123178A true TW202123178A (en) 2021-06-16
TWI752502B TWI752502B (en) 2022-01-11

Family

ID=70312713

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109116665A TWI752502B (en) 2019-12-03 2020-05-20 Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof

Country Status (5)

Country Link
JP (1) JP7457806B2 (en)
KR (1) KR20220093342A (en)
CN (1) CN111080759B (en)
TW (1) TWI752502B (en)
WO (1) WO2021109376A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI762375B (en) * 2021-07-09 2022-04-21 國立臺灣大學 Semantic segmentation failure detection system

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN113630646A (en) * 2021-07-29 2021-11-09 北京沃东天骏信息技术有限公司 Data processing method and device, equipment and storage medium
CN114157879A (en) * 2021-11-25 2022-03-08 广州林电智能科技有限公司 Full scene virtual live broadcast processing equipment
CN114630173A (en) * 2022-03-03 2022-06-14 北京字跳网络技术有限公司 Virtual object driving method and device, electronic equipment and readable storage medium
CN114745598B (en) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN114900743A (en) * 2022-04-28 2022-08-12 中德(珠海)人工智能研究院有限公司 Scene rendering transition method and system based on video plug flow
CN117014651A (en) * 2022-04-29 2023-11-07 北京字跳网络技术有限公司 Video generation method and device
CN115442542B (en) * 2022-11-09 2023-04-07 北京天图万境科技有限公司 Method and device for splitting mirror
CN115883814A (en) * 2023-02-23 2023-03-31 阿里巴巴(中国)有限公司 Method, device and equipment for playing real-time video stream

Family Cites Families (21)

Publication number Priority date Publication date Assignee Title
TW201333882A (en) * 2012-02-14 2013-08-16 Univ Nat Taiwan Augmented reality apparatus and method thereof
US20150049078A1 (en) * 2013-08-15 2015-02-19 Mep Tech, Inc. Multiple perspective interactive image projection
CN106157359B (en) * 2015-04-23 2020-03-10 中国科学院宁波材料技术与工程研究所 Design method of virtual scene experience system
US10068376B2 (en) * 2016-01-11 2018-09-04 Microsoft Technology Licensing, Llc Updating mixed reality thumbnails
US10019131B2 (en) * 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality
CN106295955A (en) * 2016-07-27 2017-01-04 邓耀华 A kind of client based on augmented reality is to the footwear custom-built system of factory and implementation method
CN106385576B (en) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment
CN107103645B (en) * 2017-04-27 2018-07-20 腾讯科技(深圳)有限公司 virtual reality media file generation method and device
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
US10278001B2 (en) * 2017-05-12 2019-04-30 Microsoft Technology Licensing, Llc Multiple listener cloud render with enhanced instant replay
JP6469279B1 (en) 2018-04-12 2019-02-13 株式会社バーチャルキャスト Content distribution server, content distribution system, content distribution method and program
CN108538095A (en) * 2018-04-25 2018-09-14 惠州卫生职业技术学院 Medical teaching system and method based on virtual reality technology
JP6595043B1 (en) 2018-05-29 2019-10-23 株式会社コロプラ GAME PROGRAM, METHOD, AND INFORMATION PROCESSING DEVICE
CN108830894B (en) * 2018-06-19 2020-01-17 亮风台(上海)信息科技有限公司 Remote guidance method, device, terminal and storage medium based on augmented reality
CN108833740B (en) * 2018-06-21 2021-03-30 珠海金山网络游戏科技有限公司 Real-time prompter method and device based on three-dimensional animation live broadcast
CN108961376A (en) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 The method and system of real-time rendering three-dimensional scenic in virtual idol live streaming
CN108877838B (en) * 2018-07-17 2021-04-02 黑盒子科技(北京)有限公司 Music special effect matching method and device
JP6538942B1 (en) 2018-07-26 2019-07-03 株式会社Cygames INFORMATION PROCESSING PROGRAM, SERVER, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING APPARATUS
CN110139115B (en) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Method and device for controlling virtual image posture based on key points and electronic equipment
CN110335334A (en) * 2019-07-04 2019-10-15 北京字节跳动网络技术有限公司 Avatars drive display methods, device, electronic equipment and storage medium
CN110427110B (en) * 2019-08-01 2023-04-18 广州方硅信息技术有限公司 Live broadcast method and device and live broadcast server


Also Published As

Publication number Publication date
CN111080759B (en) 2022-12-27
WO2021109376A1 (en) 2021-06-10
JP7457806B2 (en) 2024-03-28
TWI752502B (en) 2022-01-11
JP2023501832A (en) 2023-01-19
CN111080759A (en) 2020-04-28
KR20220093342A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
TWI752502B (en) Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
US9654734B1 (en) Virtual conference room
CN113240782B (en) Streaming media generation method and device based on virtual roles
TWI255141B (en) Method and system for real-time interactive video
JP2022166078A (en) Composing and realizing viewer's interaction with digital media
CN114097248B (en) Video stream processing method, device, equipment and medium
JP6683864B1 (en) Content control system, content control method, and content control program
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
WO2023045637A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
CN112927349A (en) Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
JP7202935B2 (en) Attention level calculation device, attention level calculation method, and attention level calculation program
KR102200239B1 (en) Real-time computer graphics video broadcasting service system
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
CN116095353A (en) Live broadcast method and device based on volume video, electronic equipment and storage medium
WO2024027063A1 (en) Livestream method and apparatus, storage medium, electronic device and product
WO2024031882A1 (en) Video processing method and apparatus, and computer readable storage medium
KR100445846B1 (en) A Public Speaking Simulator for treating anthropophobia
JP2001051579A (en) Method and device for displaying video and recording medium recording video display program
JP2021009351A (en) Content control system, content control method, and content control program
US20240048780A1 (en) Live broadcast method, device, storage medium, electronic equipment and product
KR102622709B1 (en) Method and Apparatus for generating 360 degree image including 3-dimensional virtual object based on 2-dimensional image
WO2022160867A1 (en) Remote reproduction method, system, and apparatus, device, medium, and program product
JP7344084B2 (en) Content distribution system, content distribution method, and content distribution program
US20230319225A1 (en) Automatic Environment Removal For Human Telepresence