WO2021109376A1 - Method and device for producing multiple camera-angle effect, and related product - Google Patents

Method and device for producing multiple camera-angle effect, and related product

Info

Publication number
WO2021109376A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional virtual
real
model
virtual
Prior art date
Application number
PCT/CN2020/082545
Other languages
French (fr)
Chinese (zh)
Inventor
刘文韬
郑佳宇
黄展鹏
李佳桦
Original Assignee
Shenzhen SenseTime Technology Co., Ltd. (深圳市商汤科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen SenseTime Technology Co., Ltd. (深圳市商汤科技有限公司)
Priority to KR1020227018465A (published as KR20220093342A)
Priority to JP2022528715A (published as JP7457806B2)
Publication of WO2021109376A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/205: 3D [Three Dimensional] animation driven by audio data
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features

Definitions

  • This application relates to the field of virtual technology, and in particular to a method, device and related products for realizing the split-mirror (multiple camera-angle) effect.
  • Virtual characters on the network are generally generated using motion capture technology: real-person images obtained by image recognition are analyzed so that the actions and expressions of the real person are transferred to the virtual character, allowing the virtual character to reproduce the real person's movements and expressions.
  • the embodiments of the present application disclose a method, device and related products for realizing the split-mirror effect.
  • The embodiment of the present application provides a method for implementing the split-mirror effect, including: obtaining a three-dimensional virtual model; and rendering the three-dimensional virtual model from at least two different lens angles of view to obtain the virtual images respectively corresponding to the at least two different lens angles.
  • The above method obtains a three-dimensional virtual model and renders it from at least two different lens angles of view, obtaining the virtual images corresponding to those lens angles; the user can thus see the virtual images under different lens angles, which provides a rich visual experience.
  • the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model.
  • The above method further includes: obtaining a real image, where the real image includes a real-person image; performing feature extraction on the real-person image to obtain feature information, where the feature information includes the action information of the real person; and generating a three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  • A 3D virtual model is generated so that the 3D virtual character model in it can reproduce the facial expressions and body movements of the real person; by watching the virtual image corresponding to the 3D virtual model, the audience can learn the facial expressions and body movements of the real person and interact more flexibly with the live anchor.
  • Acquiring the real image includes: acquiring a video stream, and obtaining at least two frames of real images according to at least two frames of images in the video stream; performing feature extraction on the real-person images to obtain feature information includes: performing feature extraction on each frame of real-person image to obtain the corresponding feature information.
  • the three-dimensional virtual model can be changed in real time according to the multiple frames of real images collected, so that the user can see the dynamic change process of the three-dimensional virtual model under different lens perspectives.
  • the real image further includes a real scene image
  • The three-dimensional virtual model also includes a three-dimensional virtual scene model; before obtaining the three-dimensional virtual model, the above method further includes: constructing a three-dimensional virtual scene model based on the real scene image.
  • The above method can also use real scene images to construct the three-dimensional virtual scene in the three-dimensional virtual model, which offers more choices for the three-dimensional virtual scene than selecting only from preset three-dimensional virtual scene images.
  • acquiring at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
  • each frame of real image corresponds to a lens angle
  • Multiple frames of real images correspond to multiple lens angles. Therefore, at least two different lens angles can be obtained from at least two frames of real images and used to render the 3D virtual model from those lens angles, providing users with a rich visual experience.
  • acquiring at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • Determining the lens angle of view based on the action information of the real person in the real image can magnify the action of the corresponding three-dimensional virtual character model in the image, so that the user can learn the real person's action by watching the virtual image, improving interactivity and fun.
  • Acquiring at least two different lens angles includes: acquiring background music; determining a time set corresponding to the background music, where the time set includes at least two time periods; and acquiring the lens angle of view corresponding to each time period in the time set.
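  • As an illustrative sketch of this step (the function and parameter names here are assumptions, not from the patent), each time period in the time set can be paired with a lens angle, and the active angle looked up at playback time:

```python
def build_angle_schedule(periods, angles):
    """Pair each (start, end) time period of the background music with a
    lens angle in degrees, cycling through the available angles."""
    if len(periods) < 2:
        raise ValueError("the time set must include at least two time periods")
    return [(start, end, angles[i % len(angles)])
            for i, (start, end) in enumerate(periods)]


def angle_at(schedule, t):
    """Look up the lens angle active at time t (seconds)."""
    for start, end, angle in schedule:
        if start <= t < end:
            return angle
    return schedule[-1][2]  # hold the last angle once the music ends
```

A renderer could then query `angle_at` once per frame to decide which lens angle to render the three-dimensional virtual model from.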
  • The at least two different lens angles of view include a first lens angle of view and a second lens angle of view; rendering the three-dimensional virtual model from the at least two different lens angles to obtain the virtual images respectively corresponding to them includes: rendering the three-dimensional virtual model from the first lens angle of view to obtain a first virtual image; rendering the three-dimensional virtual model from the second lens angle of view to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.
  • Rendering the three-dimensional virtual model from the first lens angle of view and the second lens angle of view respectively allows the user to view the three-dimensional virtual model under both lens angles, thereby providing users with a rich visual experience.
  • Rendering the three-dimensional virtual model from the second lens angle of view to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
  • The image of the three-dimensional virtual model under the second lens angle of view is the second virtual image.
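  • A minimal sketch of the translate-or-rotate step: rotating the model's vertices about the vertical axis while keeping the camera fixed is equivalent to moving the camera around the model. The function name and choice of axis are illustrative assumptions, not from the patent.

```python
import math

def rotate_y(points, degrees):
    """Rotate model vertices about the vertical (y) axis. Rendering the
    rotated model from the fixed first lens angle is equivalent to viewing
    the original model from a second lens angle."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]
```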
  • Displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • The a frames of virtual images are inserted between the first virtual image and the second virtual image, so that the viewer sees the entire change process from the first virtual image to the second virtual image rather than only the two images themselves (the first virtual image and the second virtual image), allowing the audience to adapt to the visual difference caused by switching from the first virtual image to the second virtual image.
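  • One plausible way to generate the a inserted frames is to interpolate the camera angle along a smooth easing curve; the patent's Fig. 4 shows an interpolation curve but does not fix its form, so the cosine easing below is an assumed choice and the names are illustrative.

```python
import math

def ease_in_out(t):
    """Smooth interpolation curve: maps 0 -> 0 and 1 -> 1 with gentle ends."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def inserted_frame_angles(first_angle, second_angle, a):
    """Camera angles for the a virtual images inserted between the first
    and second virtual images, so the lens switch looks gradual."""
    return [first_angle + (second_angle - first_angle) * ease_in_out(k / (a + 1))
            for k in range(1, a + 1)]
```

Rendering the three-dimensional virtual model once per returned angle yields the a intermediate virtual images.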
  • The method further includes: performing beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each of the multiple beats corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat set to the 3D virtual model.
  • stage effects are added to the virtual scene where the virtual character model is located, thereby presenting different stage effects to the audience and enhancing the audience's viewing experience.
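  • The beat-detection and effect-assignment steps can be sketched as follows. This is a minimal energy-peak heuristic under assumed names; real systems would use proper onset/beat tracking rather than this simplification.

```python
def detect_beats(energies, window=8, factor=1.3):
    """Mark frame indices whose short-time energy clearly exceeds the
    recent local average -- a crude stand-in for beat detection."""
    beats = []
    for i, e in enumerate(energies):
        local = energies[max(0, i - window):i] or [e]
        if e > factor * (sum(local) / len(local)):
            beats.append(i)
    return beats

def assign_stage_effects(beats, effects):
    """Give each detected beat a stage special effect, cycling through
    the available effects."""
    return {b: effects[i % len(effects)] for i, b in enumerate(beats)}
```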
  • An embodiment of the present application also provides a device for realizing the split-mirror effect, including: an acquiring unit configured to acquire a three-dimensional virtual model; and a split-mirror unit configured to render the three-dimensional virtual model from at least two different lens angles of view to obtain at least two virtual images respectively corresponding to the different lens angles.
  • the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model
  • The device further includes a feature extraction unit and a three-dimensional virtual model generation unit; the acquisition unit is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real-person image; the feature extraction unit is configured to perform feature extraction on the real-person image to obtain feature information, where the feature information includes the action information of the real person; and the three-dimensional virtual model generation unit is configured to generate a three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  • The obtaining unit is configured to obtain a video stream and to obtain at least two frames of real images according to at least two frames of images in the video stream; the feature extraction unit is configured to perform feature extraction on each frame of real-person image to obtain the corresponding feature information.
  • the real image further includes a real scene image
  • the three-dimensional virtual model also includes a three-dimensional virtual scene model
  • The device further includes a three-dimensional virtual scene image construction unit configured to construct a three-dimensional virtual scene image according to the real scene image before the acquisition unit acquires the three-dimensional virtual model.
  • the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to at least two frames of real images.
  • the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to the action information corresponding to the at least two frames of real images, respectively.
  • The device further includes a lens angle acquisition unit configured to: acquire background music; determine a time set corresponding to the background music, where the time set includes at least two time periods; and acquire the lens angle of view corresponding to each time period in the time set.
  • At least two different lens angles include a first lens angle of view and a second lens angle of view
  • The split-mirror unit is configured to render the three-dimensional virtual model from the first lens angle of view to obtain the first virtual image; render the three-dimensional virtual model from the second lens angle of view to obtain the second virtual image; and display the image sequence formed from the first virtual image and the second virtual image.
  • The split-mirror unit is configured to translate or rotate the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
  • The split-mirror unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • The device further includes a beat detection unit and a stage special effect generation unit; the beat detection unit is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; the stage special effect generation unit is configured to add the target stage special effects corresponding to the beat set to the three-dimensional virtual model.
  • An embodiment of the present application provides an electronic device, including a processor, a communication interface, and a memory; the memory is used to store instructions, the processor is used to execute the instructions, and the communication interface is used to communicate with other devices under the control of the processor, wherein the processor executes the instructions to enable the electronic device to implement any one of the methods in the first aspect described above.
  • an embodiment of the present application provides a computer-readable storage medium that stores a computer program, and the computer program is executed by hardware to implement any one of the methods in the first aspect.
  • The embodiments of the present application provide a computer program product; when the computer program product is read and executed by a computer, any one of the methods in the above-mentioned first aspect is executed.
  • Fig. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a possible three-dimensional virtual model provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an interpolation curve provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a specific embodiment provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a splitting rule provided by an embodiment of the present application.
  • FIG. 7A is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7B is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7C is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7D is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device for implementing a split-mirror effect provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The method, device and related products for realizing the split-mirror effect provided by the embodiments of the present application can be applied in many fields such as social interaction, entertainment, and education. For example, they can be used for virtual live broadcast, for social interaction in virtual communities, for holding virtual concerts, or in classroom teaching, and so on.
  • the following takes virtual live broadcast as an example to describe the specific application scenarios of the embodiments of the present application in detail.
  • Virtual live broadcast is a way to use virtual characters instead of live anchors to conduct live broadcasts on a live broadcast platform. Because virtual characters have rich expressive power and are more in line with the communication environment of social networks, the virtual live broadcast industry is developing rapidly.
  • Computer technologies such as facial expression capture, motion capture, and sound processing are usually used to apply the facial expressions and actions of the live anchor to the virtual character model, so as to realize interaction between the audience and the virtual anchor on video websites or social networking websites.
  • FIG. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
  • The server 120 transmits the generated virtual image to the user terminal 130 through the network, so that different viewers can watch the entire live broadcast process through their corresponding user terminals 130.
  • the posture of the generated virtual anchor is related to the relative position between the camera device 110 and the live anchor. That is to say, the audience can only see the virtual character under a specific angle of view, and this specific angle of view depends on the relative position between the camera device 110 and the live broadcaster, so that the live broadcast effect presented is unsatisfactory.
  • In addition, there are problems such as stiff movements of the virtual anchor, unsmooth shot-switching, or monotonous and boring shots, which cause visual fatigue and prevent the audience from having an immersive experience.
  • In classroom teaching, the teacher teaches students through online video, but this teaching method is usually boring: the teacher in the video cannot know the students' state in real time, and the students can only see the teacher or the teaching handouts from a single perspective, which easily causes fatigue. The teaching effect of video teaching is therefore greatly reduced.
  • A singer can hold a virtual concert in a recording studio to simulate the scene of a real concert, but to achieve the effect of a real concert it is usually necessary to set up multiple cameras to shoot the singer; such a virtual concert is complicated to operate and costly. Moreover, although shooting with multiple cameras yields pictures under multiple lenses, it can introduce the problem of unsmooth lens switching, leaving users unable to adapt to the visual difference caused by switching between different lenses.
  • An embodiment of the present application provides a method for realizing the split-mirror effect.
  • The method generates a three-dimensional virtual model based on the collected real images, obtains multiple different lens angles of view according to the background music or the actions of the real person, and then renders the three-dimensional virtual model from those lens angles to obtain the corresponding virtual images, thereby simulating multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewer's viewing experience.
  • the method also analyzes the beats of the background music and adds corresponding stage effects to the three-dimensional virtual model according to the beat information to present different stage effects to the audience, which further enhances the audience's viewing experience.
  • the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene.
  • Figure 2 shows a schematic diagram of a possible three-dimensional virtual model.
  • the hands of the three-dimensional virtual character model are raised to the chest.
  • The upper left corner of Figure 2 also shows the real image collected by the split-mirror effect realization device, in which the real person is likewise raising his hands to his chest. The action of the three-dimensional virtual character model is thus consistent with that of the real person. It can be understood that Figure 2 is only an example.
  • the real image collected by the device for implementing the split-mirror effect can be a three-dimensional image or a two-dimensional image.
  • The number of characters in the real image can be one or more.
  • the action of the real character can be raising both hands to the chest, raising the left foot or other actions, etc.
  • The number of 3D virtual character models in the 3D virtual model generated from the real-person image can be one or more.
  • the action of the three-dimensional virtual character model can be raising both hands to the chest, raising the left foot or other actions, etc., which are not specifically limited here.
  • In the embodiment of the present application, the split-mirror effect realization device shoots the real person to obtain multiple frames of real images I_1, I_2, ..., I_n, and performs feature extraction on the real images I_1, I_2, ..., I_n in chronological order to obtain the corresponding three-dimensional virtual models M_1, M_2, ..., M_n, where n is a positive integer. The real images I_1, I_2, ..., I_n correspond one-to-one with the three-dimensional virtual models M_1, M_2, ..., M_n; that is, one frame of real image is used to generate one three-dimensional virtual model.
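  • The one-to-one frame-to-model correspondence above can be sketched as a simple pipeline; the two callables are placeholders for the feature-extraction and model-generation steps described in the following paragraphs, and all names are illustrative.

```python
def build_models(real_images, extract_features, generate_model):
    """One three-dimensional virtual model M_i per real image I_i,
    processed in chronological order."""
    return [generate_model(extract_features(image)) for image in real_images]
```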
  • a three-dimensional virtual model can be obtained as follows:
  • Step 1: The device for achieving the split-mirror effect obtains the real image I_i.
  • The real image I_i includes a real-person image, and i is a positive integer with 1 ≤ i ≤ n.
  • Step 2: The device for implementing the split-mirror effect performs feature extraction on the real-person image in the real image I_i to obtain feature information.
  • the feature information includes action information of real characters.
  • Obtaining a real image includes: obtaining a video stream and obtaining at least two frames of real images according to at least two frames of images in the video stream; correspondingly, performing feature extraction on the real-person image to obtain feature information includes: performing feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
  • the feature information is used to control the posture of the three-dimensional virtual character model.
  • The action information in the feature information includes facial expression features and body action features. Facial expression features describe the emotional state of the character, such as happiness, sadness, surprise, fear, anger or disgust; body action features describe the movement state of the real person, for example raising the left hand, raising the right foot, or jumping.
  • the feature information can also include character information, where the character information includes multiple key points of the human body of the real person and their corresponding position information.
  • The key points of the human body include facial key points and human-skeleton key points, and the position information includes the position coordinates of the real person's human-body key points.
  • The split-mirror effect realization device extracts the real-person image from the real image I_i by performing image segmentation on I_i, and then performs key-point detection on the extracted real-person image to obtain the aforementioned multiple human-body key points and their position information, where the human-body key points include facial key points and human-skeleton key points.
  • the key points of the human body may be located in the head area, neck area, shoulder area, spine area, and waist of the human body.
  • The device for realizing the split-mirror effect inputs the real image I_i into a neural network for feature extraction; after computation through multiple convolutional layers, the multiple human-body key-point information is extracted.
  • the neural network is obtained through a large amount of training.
  • The neural network can be a Convolutional Neural Network (CNN), a Back-Propagation Neural Network (BPNN), a Generative Adversarial Network (GAN), or a Recurrent Neural Network (RNN), etc., which is not specifically limited here.
  • the device for implementing the split-mirror effect can use CNN to extract key points of a human face to obtain facial expression features; it can also use BPNN to extract key points of human bones to obtain human bone features and limb movement features, which are not specifically limited here.
  • the above example of the feature information used to drive the three-dimensional virtual character model is only used as an example, and other feature information may also be included in practical applications, which is not specifically limited here.
  • Step 3: The split-mirror effect realization device generates the three-dimensional virtual character model in the three-dimensional virtual model M_i according to the feature information, so that the three-dimensional virtual character model in M_i corresponds to the action information of the real person in the real image I_i.
  • The split-mirror effect realization device establishes a mapping relationship between the human-body key points of the real person and those of the virtual character model through the above-mentioned feature information, and then controls the expression and posture of the virtual character model according to the mapping relationship, thereby making the facial expressions and body movements of the virtual character model consistent with those of the real person.
  • the split-mirror effect realization device respectively performs serial number labeling on the key points of the human body of the real person to obtain the label information of the key points of the human body of the real person.
  • the key points of the human body correspond to the label information one by one;
  • The annotation information of the key points is used to mark the corresponding key points of the virtual character model. For example, if the label of the real person's left wrist is No. 1, the label of the three-dimensional virtual character model's left wrist is also No. 1; if the label of the real person's left arm is No. 2, the label of the three-dimensional virtual character model's left arm is also No. 2.
  • The key-point annotation information of the real person's human body is matched with that of the three-dimensional virtual character model, and the position information of the real person's key points is mapped onto the corresponding key points of the three-dimensional virtual character model, so that the model can reproduce the facial expressions and body movements of the real person.
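  • The label-matching step above can be sketched as copying positions between identically numbered key points; the label numbers and key-point names below are illustrative, not from the patent.

```python
def retarget_pose(real_positions, label_names):
    """real_positions: {label_no: (x, y, z)} detected on the real person.
    Returns the driven pose of the virtual character model, keyed by the
    key-point name that carries the same label number."""
    return {label_names[no]: pos
            for no, pos in real_positions.items()
            if no in label_names}
```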
  • The real image I_i also includes a real scene image.
  • The three-dimensional virtual model M_i also includes a three-dimensional virtual scene model.
  • In this case, the above-mentioned method for generating the three-dimensional virtual model M_i based on the real image I_i further includes: constructing the three-dimensional virtual scene model in M_i according to the real scene image in I_i.
  • The device for realizing the split-mirror effect first performs image segmentation on the real image I_i to obtain the real scene image in I_i; then extracts the scene features in the real scene image, for example the position, shape, and size features of the objects in the real scene; and constructs the three-dimensional virtual scene model in the three-dimensional virtual model M_i according to these scene features, so that the three-dimensional virtual scene model in M_i closely restores the real scene image in I_i.
  • The above only illustrates the process of generating a three-dimensional virtual model M_i from a real image I_i. The generation processes of the three-dimensional virtual models M_1, M_2, ..., M_(i-1), M_(i+1), ..., M_n are similar to that of M_i and are not described further here.
  • The 3D virtual scene model in the 3D virtual model can be constructed from the real scene image in the real image, or it can be a user-defined 3D virtual scene model; the facial features of the 3D virtual character model in the 3D virtual model can come from the facial features of the real-person image in the real image, or they can be user-defined facial features, which is not specifically limited here.
  • FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
  • The method for realizing the split-mirror effect of this embodiment includes but is not limited to the following steps:
  • the device for achieving split-mirror effect obtains a three-dimensional virtual model.
  • the three-dimensional virtual model is used to simulate real characters and real scenes.
  • the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the three-dimensional virtual model is generated based on a real image.
  • the three-dimensional virtual character model is generated based on the real character image included in the real image
  • the three-dimensional virtual character model in the three-dimensional virtual model is used to simulate the real character in the real image
  • the actions of the three-dimensional virtual character model correspond to the actions of the real character .
  • the three-dimensional virtual scene model may be constructed based on the real scene image included in the real image, or may be a preset three-dimensional virtual scene model. When the three-dimensional virtual scene model is constructed from the real scene image, the three-dimensional virtual scene model can be used to simulate the real scene in the real image.
  • the device for achieving a split-mirror effect obtains at least two different lens angles of view.
  • the angle of view of the lens is used to indicate the position of the camera relative to the object when the camera is shooting the object.
  • the camera can get a top view of the object when shooting directly above the object.
  • the corresponding lens angle of view is V
  • the image captured by the camera shows the object under the lens angle of V, that is, the top view of the object.
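As a minimal illustration of how a lens angle of view follows from camera placement, a camera positioned directly above an object looks straight down, producing the top view mentioned here; `look_at` is an assumed helper, not an API from the source:

```python
import numpy as np

def look_at(camera_pos, target):
    """Return a unit viewing direction from the camera toward the target.
    A camera placed directly above the object looks straight down,
    which yields the top view described in the text."""
    d = np.asarray(target, float) - np.asarray(camera_pos, float)
    return d / np.linalg.norm(d)

# Camera directly above the object at the origin: the top-view lens angle V.
direction = look_at(camera_pos=(0, 0, 5), target=(0, 0, 0))
```

Here `direction` is (0, 0, -1): the camera looks straight down the z-axis, so rendering from this pose produces the top view of the object.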
  • obtaining at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
  • the real image can be taken by a real camera
  • the position of the real camera relative to the real person may be multiple
  • the multiple real images taken by multiple real cameras at different positions show the real person from multiple different lens perspectives.
  • obtaining at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • the motion information includes the body motions and facial expressions of real characters in real images.
  • the body movements include many kinds.
  • the body movements can be one or more of raising the right hand, raising the left foot, jumping, etc.
  • the facial expressions also include many kinds.
  • the facial expressions can be, for example, one or more of facial expressions such as smiling, tearing, anger, etc. Examples of body movements and facial expressions in this embodiment are not limited to the above description.
  • one action or a combination of multiple actions corresponds to one lens angle of view.
  • the corresponding lens angle of view is V 1
  • the corresponding lens angle of view can be the lens angle of view V 1 or the lens angle of view V 2 , etc.; similarly, the corresponding lens angle of view can also be the lens angle of view V 1 , the lens angle of view V 2 , or the lens angle of view V 3 , and so on.
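The one-action-or-combination-to-one-lens-angle correspondence described above can be sketched as a lookup table; the action names and view labels below are hypothetical examples, not values from the source:

```python
# Hypothetical mapping: one action, or one combination of actions,
# corresponds to exactly one lens angle of view.
ACTION_TO_VIEW = {
    ("raise_right_hand",): "V1",
    ("raise_left_foot",): "V3",
    ("raise_right_hand", "smile"): "V2",  # a combination gets its own angle
}

def lens_angle_for(actions, default="V1"):
    """Look up the lens angle of view for a tuple of detected actions,
    falling back to a default when the combination is unknown."""
    return ACTION_TO_VIEW.get(tuple(sorted(actions)), default)
```

Sorting the detected actions makes the lookup independent of detection order, so a combination maps to the same lens angle however it is reported.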
  • obtaining at least two different camera angles includes: obtaining background music; determining a time collection corresponding to the background music, where the time collection includes at least two time periods; obtaining each time period in the time collection Corresponding lens angle of view.
  • the real image may be one or more frames in a video stream.
  • the video stream includes image information and background music information, where one frame of image corresponds to one frame of music.
  • the background music information includes background music and a corresponding time collection.
  • the time collection includes at least two time periods, and each time period corresponds to a lens angle.
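A minimal sketch of such a time collection, assuming time periods measured in seconds and hypothetical view labels; each time period carries its own lens angle of view:

```python
# Hypothetical "time collection" for a piece of background music:
# each time period (start_sec, end_sec) corresponds to one lens angle.
TIME_COLLECTION = [
    ((0, 60), "V1"),
    ((60, 120), "V2"),
    ((120, 180), "V3"),
]

def lens_angle_at(t_sec):
    """Return the lens angle of view whose time period contains t_sec,
    or None when t_sec falls outside the collection."""
    for (start, end), view in TIME_COLLECTION:
        if start <= t_sec < end:
            return view
    return None
```

At playback time the renderer only needs the current timestamp to decide which lens angle of view to use for the frame.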
  • the device for implementing the split-mirror effect renders the three-dimensional virtual model with at least two different lens angles to obtain virtual images corresponding to at least two different lens angles respectively.
  • the aforementioned at least two different lens angles include a first lens angle of view and a second lens angle of view
  • rendering the three-dimensional virtual model with at least two different lens angles of view to obtain the virtual images respectively corresponding to the at least two different lens angles of view includes: S1031, rendering the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image; S1032, rendering the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image.
  • rendering the three-dimensional virtual model from the second lens angle of view to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
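Translating or rotating the model under the first lens angle of view to obtain it under the second can be illustrated with a plain rotation of the model's vertices about the vertical axis; this is a hedged sketch, not the patent's rendering code:

```python
import numpy as np

def rotate_y(points, degrees):
    """Rotate model vertices about the vertical (y) axis. Rotating the
    model under the first lens angle of view is equivalent to moving
    the camera to the second lens angle of view."""
    a = np.radians(degrees)
    R = np.array([[np.cos(a), 0, np.sin(a)],
                  [0, 1, 0],
                  [-np.sin(a), 0, np.cos(a)]])
    return np.asarray(points, float) @ R.T

front = np.array([[0.0, 0.0, 1.0]])   # a vertex facing the camera (front view)
left = rotate_y(front, 90)            # the same vertex as seen in the left view
```

A 90-degree rotation carries the front-facing vertex onto the x-axis, which corresponds to the front-view-to-left-view switch used in the interpolation example below.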
  • the first lens angle of view can be obtained based on the real image, based on the action information corresponding to the real image, or based on the time collection corresponding to the background music; similarly, the second lens angle of view can be obtained based on the real image, based on the action information corresponding to the real image, or based on the time collection corresponding to the background music, which is not specifically limited in the embodiment of the present application.
  • the above display of the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • a frames of virtual images P 1 , P 2 ,..., P a are inserted between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image; the time points at which the virtual images P 1 , P 2 ,..., P a are inserted are b 1 , b 2 ,..., b a , and at the time points b 1 , b 2 ,..., b a the slope of the interpolation curve satisfies a function that first decreases monotonically and then increases monotonically, where a is a positive integer.
  • FIG. 4 shows a schematic diagram of an interpolation curve.
  • the device for realizing the split-mirror effect obtains the first virtual image at the first minute, and the second virtual image at the second minute.
  • One virtual image presents the front view of the three-dimensional virtual model, and the second virtual image presents the left view of the three-dimensional virtual model.
  • the split-lens effect realization device inserts multiple time points between the first minute and the second minute, and inserts a virtual image at each time point: for example, it inserts the virtual image P 1 at 1.4 minutes, the virtual image P 2 at 1.65 minutes, the virtual image P 3 at 1.8 minutes, and the virtual image P 4 at 1.85 minutes, where the virtual image P 1 presents the effect of rotating the three-dimensional virtual model to the left, the virtual image P 2 presents the effect of rotating the three-dimensional virtual model to the left by 50 degrees, and the virtual image P 3 and the virtual image P 4 both present the effect of rotating the three-dimensional virtual model to the left by 90 degrees. This allows the audience to see the entire process of the 3D virtual model gradually changing from the front view to the left view, instead of only two separate images (the front view of the 3D virtual model and the left view of the 3D virtual model), so that the audience can adapt to the visual change when switching from one lens angle of view to another.
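The insertion time points whose interpolation-curve slope first decreases and then increases monotonically can be sketched with a cubic mapping; the specific curve below is an assumption for illustration (the actual curve of FIG. 4 may differ):

```python
import numpy as np

def insertion_times(t0, t1, a):
    """Place a intermediate time points between t0 and t1 on a curve
    whose slope first decreases and then increases monotonically, so
    the inserted frames cluster where the curve is flattest."""
    # u runs uniformly over (0, 1), excluding the endpoints.
    u = np.linspace(0, 1, a + 2)[1:-1]
    # Cubic s(u) = u^3 - 1.5u^2 + 1.5u maps 0 -> 0 and 1 -> 1; its slope
    # s'(u) = 3u^2 - 3u + 1.5 decreases on [0, 0.5] and increases on [0.5, 1].
    s = u**3 - 1.5 * u**2 + 1.5 * u
    return t0 + (t1 - t0) * s

times = insertion_times(1.0, 2.0, 4)  # 4 frames between minute 1 and minute 2
```

Because the slope never reaches zero, the time points stay strictly increasing, so the inserted frames always play in order during the switch.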
  • the use of the stage special effects mentioned in the embodiments of this application to render the three-dimensional virtual model and present different stage effects to the audience is described in detail below, and specifically includes the following steps:
  • Step 1 The device for realizing the split-mirror effect detects the beats of the background music, and obtains a collection of beats of the background music.
  • the beat collection includes multiple beats, and each beat of the multiple beats corresponds to a stage special effect.
  • the split-mirror effect realization device can use shaders and particle special effects to respectively render the 3D virtual model.
  • the shader can be used to realize the spotlight rotation effect at the back of the virtual stage and the sound wave effect of the virtual stage itself, while the particle special effects are used to add visual effects such as sparks, fallen leaves, and meteors to the 3D virtual model.
  • Step 2 The split-mirror effect realization device adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
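Steps 1 and 2 can be sketched as follows, assuming the beats have already been detected (for example by an onset detector) and using a hypothetical palette of the effect types mentioned above; none of the names below come from the source:

```python
# Hypothetical palette of stage special effects named in the text.
EFFECTS = ["spotlight_rotation", "sound_wave", "sparks",
           "falling_leaves", "meteor"]

def effects_for_beats(beat_times):
    """Step 1: build the beat collection as (beat time, stage special
    effect) pairs, cycling through the available effects so that every
    beat corresponds to one effect."""
    return [(t, EFFECTS[i % len(EFFECTS)]) for i, t in enumerate(beat_times)]

def add_stage_effects(model, beat_collection):
    """Step 2: attach the target stage special effects from the beat
    collection to the 3D virtual model (modelled here as a dict)."""
    model = dict(model)  # leave the input model unmodified
    model["effects"] = [effect for _, effect in beat_collection]
    return model

beat_collection = effects_for_beats([0.5, 1.0, 1.5])
decorated = add_stage_effects({"name": "M1"}, beat_collection)
```

In a real system the beat times would come from audio analysis of the background music, and each effect would be bound to a shader or particle system rather than a string label.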
  • the above method generates a three-dimensional virtual model based on the collected real images, and switches the corresponding lens angle of view according to the collected real images, the background music, and the actions of the real characters, thereby simulating the effect of multiple virtual cameras shooting the virtual model in the virtual scene, which improves the viewer's viewing experience.
  • the method also analyzes the beats of the background music and adds corresponding stage special effects to the virtual image according to the beat information to present different stage effects to the audience, which further enhances the audience's viewing experience.
  • FIG. 5 shows a schematic flowchart of a specific embodiment.
  • the device for achieving split-mirror effect obtains a real image and background music, and obtains a first lens angle of view according to the real image. Among them, when the background music sounds, the real person acts according to the background music, and the real camera shoots the real person to obtain the real image.
  • the device for realizing split-mirror effect generates a three-dimensional virtual model according to the real image. Among them, the three-dimensional virtual model is obtained at the first moment by the device for realizing the split-mirror effect.
  • the split-mirror effect realization device detects the beat of the background music to obtain the beat collection of the background music, and adds the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
  • the device for implementing split-mirror effect renders the three-dimensional virtual model with the first lens angle of view to obtain a first virtual image corresponding to the first lens angle of view.
  • the device for realizing the split-mirror effect determines the time collection corresponding to the background music.
  • the time collection includes multiple time periods, and each of the multiple time periods corresponds to a lens angle.
  • the split-mirror effect realization device judges whether the action information database contains the action information; it executes S207-S209 if the action information database does not contain the action information, and executes S210-S212 if the action information database contains the action information.
  • the action information is the action information of the real person in the real image
  • the action information database includes a plurality of action information, and each action information in the multiple action information corresponds to a lens angle of view.
  • the device for realizing the splitting effect determines the second lens angle corresponding to the time period at the first moment according to the time collection.
  • the device for implementing the split-mirror effect renders the three-dimensional virtual model with the second lens angle of view to obtain a second virtual image corresponding to the second lens angle of view.
  • the device for realizing split-mirror effect displays an image sequence formed according to the first virtual image and the second virtual image.
  • the device for achieving a split-mirror effect determines a third lens angle of view corresponding to the action information according to the action information.
  • the device for implementing the split-mirror effect renders the three-dimensional virtual model with the third lens angle of view to obtain a third virtual image corresponding to the third lens angle of view.
  • the device for realizing split-mirror effect displays an image sequence formed according to the first virtual image and the third virtual image.
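The branch described in the steps above (use the action lens angle of view when the detected action is in the action information database, otherwise fall back to the time lens angle of view from the background-music time collection) reduces to a simple priority lookup; the database entries below are hypothetical:

```python
# Hypothetical action information database: each known action
# corresponds to one lens angle of view.
ACTION_DB = {"raise_left_foot": "V3", "stand": "V4"}

def choose_lens_angle(action, time_lens_angle):
    """The action lens angle of view takes priority; when the action is
    not in the database, the time lens angle of view is used instead."""
    return ACTION_DB.get(action, time_lens_angle)
```

This mirrors the two branches of the flow: a database hit selects the third (action) lens angle of view, a miss selects the second (time) lens angle of view.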
  • an embodiment of the present application provides a schematic diagram of a splitting rule as shown in FIG. 6; performing splitting processing and stage special effects processing on the virtual images according to the splitting rule shown in FIG. 6
  • yields the effect diagrams of the four virtual images shown in FIGS. 7A-7D.
  • the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 1 (as shown in the upper left corner of Fig. 7A), and then according to the real image I 1 Obtain a three-dimensional virtual model M 1 .
  • the split-mirror effect realization device performs beat detection on the background music and determines that the beat corresponding to the first minute is B 1 , obtains the stage special effect W 1 for the first minute according to the beat B 1 , and then adds the stage special effect W 1 to the three-dimensional virtual model M 1 . The split-mirror effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the first minute (referred to as the time lens angle of view) is V 1 . The split-mirror effect realization device detects that the action of the real person in the first minute is raising both hands to the chest, and this action is not in the action information database, that is, there is no lens angle of view corresponding to the action (referred to as the action lens angle of view); the split-mirror effect realization device therefore displays the virtual image shown in FIG. 7A. The virtual image shown in FIG. 7A and the real image I 1 have the same lens angle of view.
  • the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 2 (as shown in the upper left corner of Fig. 7B), and then according to the real image I 2 Obtain a three-dimensional virtual model M 2 .
  • the split-mirror effect realization device performs beat detection on the background music and determines that the beat corresponding to the second minute is B 2 , obtains the stage special effect W 2 for the second minute according to the beat B 2 , and then adds the stage special effect W 2 to the three-dimensional virtual model M 2 ;
  • the split-lens effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the second minute (referred to as the time lens angle of view) is V 2 . The split-lens effect realization device detects that the real person's action in the second minute is raising both hands, and this action is not in the action information database, that is, there is no lens angle of view corresponding to the action (referred to as the action lens angle of view); the split-lens effect realization device therefore rotates the three-dimensional virtual model M 2 to the upper left to obtain the virtual image corresponding to the lens angle of view V 2 . It can be seen that, since the stage special effect W 2 is added to the three-dimensional virtual model M 2 , the virtual image shown in FIG. 7B presents the corresponding stage effect.
  • the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 3 (as shown in the upper left corner of Fig. 7C), and then according to the real image I 3 Obtain a three-dimensional virtual model M 3 .
  • the split-mirror effect realization device performs beat detection on the background music and determines that the beat corresponding to the third minute is B 3 , obtains the stage special effect W 3 for the third minute according to the beat B 3 , and then adds the stage special effect W 3 to the three-dimensional virtual model M 3 ;
  • the splitting effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the third minute (referred to as the time lens angle of view) is V 2 . The splitting effect realization device detects that the real person's action in the third minute is lifting the left foot, and the lens angle of view corresponding to lifting the left foot (referred to as the action lens angle of view) is V 3 ; the splitting effect realization device therefore rotates the three-dimensional virtual model M 3 to the left to obtain the virtual image corresponding to the lens angle of view V 3 .
  • the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 4 (as shown in the upper left corner of Fig. 7D), and then according to the real image I 4 Obtain a three-dimensional virtual model M 4 .
  • the split-mirror effect realization device performs beat detection on the background music and determines that the beat corresponding to the fourth minute is B 4 , obtains the stage special effect W 4 for the fourth minute according to the beat B 4 , and then adds the stage special effect W 4 to the three-dimensional virtual model M 4 ;
  • the splitting effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the fourth minute (referred to as the time lens angle of view) is V 4 . The splitting effect realization device detects that the real person's action in the fourth minute is standing, and the lens angle of view corresponding to standing (referred to as the action lens angle of view) is V 4 ; the splitting effect realization device therefore rotates the three-dimensional virtual model M 4 to the right to obtain the virtual image corresponding to the lens angle of view V 4 .
  • the stage special effect W 4 is added to the three-dimensional virtual model M 4 , so that the virtual image shown in FIG. 7D and the virtual image shown in FIG. 7C present different stage effects.
  • the splitting effect realization device provided in the embodiments of the present application may be a software device or a hardware device.
  • the splitting effect realization device is a software device
  • the splitting effect realization device can be deployed separately on a computing device in a cloud environment, or deployed separately on a terminal device.
  • the split-mirror effect realization device is a hardware device
  • the internal unit modules of the split-mirror effect realization device can also be divided in multiple ways; each module can be a software module or a hardware module, or partly a software module and partly a hardware module, and this application does not limit this.
  • FIG. 8 shows an exemplary division method.
  • a device 800 for implementing a splitting effect provided by an embodiment of the present application including: an obtaining unit 810 configured to obtain a three-dimensional virtual model;
  • the split-mirror unit 820 is configured to render the three-dimensional virtual model with at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view.
  • the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model
  • the above-mentioned apparatus further includes: a feature extraction unit 830 and a three-dimensional virtual model generation unit 840; wherein,
  • the acquiring unit 810 is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real person image;
  • the feature extraction unit 830 is configured to perform feature extraction on the real person image to obtain feature information, where the feature information includes the action information of the real character;
  • the three-dimensional virtual model generating unit 840 is configured to generate a three-dimensional virtual model according to the characteristic information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
  • the acquiring unit is configured to acquire a video stream, and obtain at least two frames of real images according to at least two frames of images in the video stream;
  • the feature extraction unit 830 is configured to perform feature extraction on each frame of real person image separately to obtain the corresponding feature information.
  • the real image further includes a real scene image
  • the three-dimensional virtual model also includes a three-dimensional virtual scene model
  • the above-mentioned apparatus further includes: a three-dimensional virtual scene image construction unit 850 configured to construct a three-dimensional virtual scene image based on the real scene image before the acquiring unit acquires the three-dimensional virtual model.
  • the above-mentioned device further includes a lens angle acquisition unit 860 configured to obtain at least two different lens angles.
  • the lens angle of view acquisition unit 860 is configured to obtain at least two different lens angles according to at least two frames of real images.
  • the lens angle of view acquisition unit 860 is configured to obtain at least two different lens angles of view according to the action information corresponding to the at least two frames of real images, respectively.
  • the lens angle acquisition unit 860 is configured to acquire background music; determine the time collection corresponding to the background music, where the time collection includes at least two time periods; and obtain the lens angle of view corresponding to each time period in the time collection.
  • At least two different lens angles include a first lens angle of view and a second lens angle of view
  • the split-mirror unit 820 is configured to render the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image; render the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image; and display the image sequence formed from the first virtual image and the second virtual image.
  • the split-mirror unit 820 is configured to translate or rotate the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view, and obtain the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
  • the split-mirror unit 820 is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • the above-mentioned device further includes: a beat detection unit 870 configured to perform beat detection on the background music to obtain a beat collection of the background music, wherein the beat collection includes multiple beats, Each beat corresponds to a stage special effect; the stage special effect generation unit 880 is configured to add the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
  • the above-mentioned split-mirror effect realization device generates a three-dimensional virtual model according to the collected real images, obtains multiple lens angles of view according to the collected real images, the background music, and the actions of the real characters, and switches among the lens angles of view when rendering the three-dimensional virtual model, so as to simulate the effect of multiple virtual cameras shooting the 3D virtual model in the virtual scene; the user can thus see the 3D virtual model under different lens angles of view, which improves the viewer's viewing experience.
  • the device also analyzes the beats of the background music and adds corresponding stage effects to the three-dimensional virtual model according to the beat information to present different stage effects to the audience, which further enhances the audience's live viewing experience.
  • an embodiment of the present application provides a schematic structural diagram of an electronic device 900, and the foregoing device for implementing the split-mirror effect is applied to the electronic device 900.
  • the electronic device 900 includes a processor 910, a communication interface 920, and a memory 930, where the processor 910, the communication interface 920, and the memory 930 can be coupled through a bus 940. among them,
  • the processor 910 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices (Programmable Logic Device, PLD), transistor logic devices, hardware components, or any combination thereof.
  • the processor 910 may implement or execute various exemplary methods described in conjunction with the disclosure of the present application. Specifically, the processor 910 reads the program code stored in the memory 930, and cooperates with the communication interface 920 to execute part or all of the steps of the method executed by the device for implementing the split-mirror effect in the foregoing embodiment of the present application.
  • the communication interface 920 can be a wired interface or a wireless interface for communicating with other modules or devices.
  • the wired interface can be an Ethernet interface, a controller area network interface, a local interconnect network (Local Interconnect Network, LIN) interface, or a FlexRay interface.
  • the wireless interface can be a cellular network interface or a wireless local area network interface.
  • the aforementioned communication interface 920 may be connected to an input/output device 950, and the input/output device 950 may include other terminal devices such as a mouse, a keyboard, and a microphone.
  • the memory 930 may include a volatile memory, such as a random access memory (Random Access Memory, RAM); the memory 930 may also include a non-volatile memory (Non-Volatile Memory), such as a read-only memory (Read-Only Memory, ROM), a flash memory, a hard disk (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory 930 may also include a combination of the foregoing types of memory.
  • the memory 930 may store program codes and program data.
  • the program code consists of the code of some or all of the units in the above-mentioned split-mirror effect realization device 800, for example, the code of the acquisition unit 810, the code of the split-mirror unit 820, the code of the feature extraction unit 830, and the code of the 3D virtual model generation unit 840.
  • the program data is data generated during the operation of the split-mirror effect realization device 800, such as real image data, three-dimensional virtual model data, lens angle data, background music data, virtual image data, and so on.
  • the bus 940 may be a Controller Area Network (CAN) bus or another internal bus that implements interconnection between the various systems or devices.
  • the bus 940 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, the figure is only represented by a thick line, but it does not mean that there is only one bus or one type of bus.
  • the electronic device 900 may include more or fewer components than those shown in FIG. 9, or may have different component configurations.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the above-mentioned computer-readable storage medium stores a computer program, and the computer program is executed by hardware (such as a processor) to realize part or all of the steps of the method for realizing the split-mirror effect described above.
  • the embodiment of the present application also provides a computer program product.
  • the computer program product runs on the above-mentioned device or electronic device for realizing the split-mirror effect, it executes part or all of the steps of the method for realizing the above-mentioned split-mirror effect.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a storage disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).
  • the disclosed device may also be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection displayed or discussed may be an indirect coupling or communication connection between devices or units through some interfaces, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions in the embodiments of the present application.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium.
  • a number of instructions are included to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media may include, for example, various media capable of storing program codes, such as U disk, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk.


Abstract

Embodiments of the present application disclose a method and device for producing a multiple camera-angle effect, and a related product. The method comprises: acquiring a three-dimensional virtual model; and rendering the three-dimensional virtual model using at least two different camera angles, so as to obtain virtual images respectively corresponding to the at least two different camera angles.

Description

Method and device for producing a multiple camera-angle effect, and related product
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 201911225211.4 filed on December 3, 2019, the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of virtual technology, and in particular to a method, a device, and related products for producing a multiple camera-angle effect.
Background
In recent years, "virtual characters" have appeared frequently in daily life: well-known virtual idols such as "Hatsune Miku" and "Luo Tianyi" are active in the music field, and virtual hosts appear in live news broadcasts. Because a virtual character can act in place of a real person in the online world, and users can configure the character's appearance and styling as they wish, virtual characters have gradually become a medium of communication between people.
At present, virtual characters on the network are generally generated using motion capture technology: captured images of a real person are analyzed by image recognition, and the person's movements and expressions are retargeted onto the virtual character, so that the virtual character reproduces the movements and expressions of the real person.
Summary of the invention
The embodiments of the present application disclose a method and device for producing a multiple camera-angle effect, and related products.
In a first aspect, an embodiment of the present application provides a method for producing a multiple camera-angle effect, including: acquiring a three-dimensional virtual model; and rendering the three-dimensional virtual model from at least two different camera angles to obtain virtual images respectively corresponding to the at least two different camera angles.
By acquiring a three-dimensional virtual model and rendering it from at least two different camera angles, the above method obtains a virtual image for each camera angle, so that the user can view the model from different viewpoints, providing a rich visual experience.
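As an illustration of this first aspect (not part of the application itself), the following minimal NumPy sketch treats the three-dimensional virtual model as a small point set and "renders" it through two different world-to-camera matrices, yielding two different virtual images of the same model. All names and numeric values here are invented for the example.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 world-to-camera (view) matrix for a virtual camera."""
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def render(points, view, focal=1.0):
    """Project 3D model points into a 2D 'virtual image' (pinhole model)."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (view @ homo.T).T[:, :3]            # points in camera space
    return focal * cam[:, :2] / -cam[:, 2:3]  # camera looks down -z

# One 3D virtual model, two different camera angles -> two virtual images.
model = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 0.2], [-0.5, 0.0, 0.2]])
front_view = look_at(np.array([0.0, 0.5, 3.0]), np.array([0.0, 0.5, 0.0]))
side_view = look_at(np.array([3.0, 0.5, 0.0]), np.array([0.0, 0.5, 0.0]))
image_a = render(model, front_view)
image_b = render(model, side_view)
```

The same geometry produces two distinct 2D images purely because the virtual camera moved, which is the essence of the multi-angle effect.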
In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model located in a three-dimensional virtual scene model. Before acquiring the three-dimensional virtual model, the method further includes: acquiring a real image, where the real image includes a real person image; performing feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person; and generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
It can be seen that a three-dimensional virtual model is generated by extracting features from the captured real person images, so that the three-dimensional virtual character model reproduces the real person's facial expressions and body movements; viewers can learn these expressions and movements simply by watching the virtual images corresponding to the model, enabling more flexible interaction between the audience and the real anchor.
In some optional embodiments of the present application, acquiring the real image includes: acquiring a video stream, and obtaining at least two frames of real images from at least two frames of the video stream; performing feature extraction on the real person image to obtain feature information includes: performing feature extraction on each frame of the real person image to obtain the corresponding feature information.
It can be seen that the three-dimensional virtual model can change in real time according to the captured frames of real images, so that the user can watch the dynamic changes of the three-dimensional virtual model from different camera angles.
In some optional embodiments of the present application, the real image further includes a real scene image, and the three-dimensional virtual model further includes a three-dimensional virtual scene model; before acquiring the three-dimensional virtual model, the method further includes: constructing the three-dimensional virtual scene model according to the real scene image.
It can be seen that the above method can also use real scene images to construct the three-dimensional virtual scene in the three-dimensional virtual model, which offers far more choice of scenes than selecting from a fixed set of predefined virtual scenes.
In some optional embodiments of the present application, acquiring the at least two different camera angles includes: obtaining the at least two different camera angles according to at least two frames of real images.
It can be seen that each frame of real image corresponds to one camera angle, and multiple frames correspond to multiple camera angles; at least two different camera angles can therefore be obtained from at least two frames of real images and used to render the three-dimensional virtual model, providing the user with a rich visual experience.
In some optional embodiments of the present application, acquiring the at least two different camera angles includes: obtaining the at least two different camera angles according to the action information respectively corresponding to the at least two frames of real images.
It can be seen that determining the camera angle according to the action information of the real person in the real image allows the corresponding action of the three-dimensional virtual character model to be shown enlarged in the image, so that the user can learn the real person's action by watching the virtual image, improving interactivity and interest.
In some optional embodiments of the present application, acquiring the at least two different camera angles includes: acquiring background music; determining a time collection corresponding to the background music, where the time collection includes at least two time periods; and acquiring the camera angle corresponding to each time period in the time collection.
It can be seen that, in the above method, multiple different camera angles are obtained by analyzing the background music and determining its corresponding time collection; this increases the diversity of camera angles and gives the user a richer visual experience.
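As a toy illustration of this embodiment (the segment boundaries and angle names below are invented, not taken from the application), a "time collection" for the background music can be represented as period start times, each period assigned one camera angle:

```python
import bisect

# Hypothetical time collection (seconds) and one camera angle per period.
segment_starts = [0.0, 8.0, 16.0, 24.0]
angles = ["front", "close_up", "side", "overhead"]

def angle_at(t):
    """Return the camera angle used for rendering at playback time t."""
    i = bisect.bisect_right(segment_starts, t) - 1
    return angles[max(i, 0)]
```

For example, `angle_at(9.5)` falls in the second period and selects the "close_up" angle; the renderer would switch angles whenever playback crosses a period boundary.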
In some optional embodiments of the present application, the at least two different camera angles include a first camera angle and a second camera angle; rendering the three-dimensional virtual model from the at least two different camera angles to obtain the corresponding virtual images includes: rendering the three-dimensional virtual model from the first camera angle to obtain a first virtual image; rendering the three-dimensional virtual model from the second camera angle to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.
It can be seen that rendering the three-dimensional virtual model from the first camera angle and the second camera angle respectively allows the user to view the model from both angles, providing a rich visual experience.
In some optional embodiments of the present application, rendering the three-dimensional virtual model from the second camera angle to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first camera angle to obtain the three-dimensional virtual model under the second camera angle; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second camera angle.
It can be seen that, by translating or rotating the three-dimensional virtual model under the first camera angle, the three-dimensional virtual model under the second camera angle, and thus the second virtual image, can be obtained quickly and accurately.
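One hedged way to picture this step: rotating the model under a fixed camera is equivalent to rotating the camera about the model, so the second camera angle can be derived from the first by a single rotation. The sketch below (invented for illustration) rotates a camera position about the model's vertical axis:

```python
import math

def rotate_camera_y(position, degrees):
    """Rotate a virtual camera position about the model's vertical (y) axis.

    Rotating the camera around the 3D virtual model is equivalent to
    rotating the model itself under a fixed camera, and yields the second
    camera angle without rebuilding the model.
    """
    x, y, z = position
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

first_camera = (0.0, 1.5, 4.0)                       # in front of the model
second_camera = rotate_camera_y(first_camera, 90.0)  # off to the side
```

A translation of the camera position (adding an offset vector) would give a panned second angle in the same way.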
In some optional embodiments of the present application, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
It can be seen that inserting a frames of virtual images between the first virtual image and the second virtual image lets the viewer see the entire transition from the first virtual image to the second, rather than only the two images themselves, so the viewer can adapt to the visual change caused by switching from the first virtual image to the second.
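The a in-between frames can be generated by interpolating camera poses between the two angles. The following is a minimal sketch under the assumption that a pose is just a camera position (a full renderer would also interpolate orientation, e.g. with quaternion slerp); the function and values are illustrative, not the application's implementation:

```python
def insert_transition_frames(pose_a, pose_b, a):
    """Generate `a` intermediate camera poses between two camera angles.

    Linearly interpolating the camera position turns the hard cut from the
    first virtual image to the second into a smooth camera move.
    """
    frames = []
    for k in range(1, a + 1):
        t = k / (a + 1)  # strictly between 0 and 1
        frames.append(tuple(pa + t * (pb - pa)
                            for pa, pb in zip(pose_a, pose_b)))
    return frames

# Three in-between frames bridging a change of camera angle.
mid = insert_transition_frames((0.0, 1.0, 4.0), (4.0, 1.0, 0.0), a=3)
```

Rendering the model once per interpolated pose yields the a inserted virtual images; a non-linear easing of `t` (as in the interpolation curve of Fig. 4) would make the move accelerate and decelerate gently.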
In some optional embodiments of the present application, the method further includes: performing beat detection on the background music to obtain a beat collection of the background music, where the beat collection includes multiple beats and each of the multiple beats corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
It can be seen that adding stage special effects to the virtual scene of the virtual character model according to the beat information of the music presents different stage effects to the audience and enhances the viewing experience.
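The application does not specify a beat-detection algorithm. As one hedged illustration, a crude energy-based detector can flag sudden jumps in short-time energy as beats, each of which could then trigger a stage special effect (lighting flash, particle burst, and so on); the signal, frame size, and threshold below are all invented for the example:

```python
import math

def detect_beats(samples, rate, frame=1024, threshold=1.5):
    """Very rough beat detection: flag frames whose energy jumps well above
    the previous frame's energy, and return their timestamps in seconds."""
    energies = []
    for start in range(0, len(samples) - frame, frame):
        chunk = samples[start:start + frame]
        energies.append(sum(s * s for s in chunk) / frame)
    beats = []
    for i in range(1, len(energies)):
        if energies[i] > threshold * energies[i - 1] > 0:
            beats.append(i * frame / rate)
    return beats

# Synthetic test signal: quiet tone with two loud bursts (the "beats").
rate = 8000
signal = [0.01 * math.sin(0.1 * n) for n in range(rate)]
for burst_start in (2048, 6144):
    for n in range(burst_start, burst_start + 1024):
        signal[n] = math.sin(0.5 * n)
beats = detect_beats(signal, rate)
```

A production system would use a proper onset/tempo tracker, but the mapping stays the same: each timestamp in the beat collection is paired with one stage special effect to fire in the three-dimensional virtual scene.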
In a second aspect, an embodiment of the present application further provides a device for producing a multiple camera-angle effect, including: an acquiring unit configured to acquire a three-dimensional virtual model; and a shot-splitting unit configured to render the three-dimensional virtual model from at least two different camera angles to obtain virtual images respectively corresponding to the at least two different camera angles.
In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model located in a three-dimensional virtual scene model, and the device further includes a feature extraction unit and a three-dimensional virtual model generation unit. The acquiring unit is further configured to acquire a real image before the three-dimensional virtual model is acquired, where the real image includes a real person image; the feature extraction unit is configured to perform feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person; and the three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model corresponds to the action information of the real person.
In some optional embodiments of the present application, the acquiring unit is configured to acquire a video stream and obtain at least two frames of real images from at least two frames of the video stream; the feature extraction unit is configured to perform feature extraction on each frame of the real person image to obtain the corresponding feature information.
In some optional embodiments of the present application, the real image further includes a real scene image, and the three-dimensional virtual model further includes a three-dimensional virtual scene model; the device further includes a three-dimensional virtual scene construction unit configured to construct the three-dimensional virtual scene according to the real scene image before the acquiring unit acquires the three-dimensional virtual model.
In some optional embodiments of the present application, the device further includes a camera-angle acquiring unit configured to obtain at least two different camera angles according to at least two frames of real images.
In some optional embodiments of the present application, the device further includes a camera-angle acquiring unit configured to obtain at least two different camera angles according to the action information respectively corresponding to at least two frames of real images.
In some optional embodiments of the present application, the device further includes a camera-angle acquiring unit configured to: acquire background music; determine a time collection corresponding to the background music, where the time collection includes at least two time periods; and acquire the camera angle corresponding to each time period in the time collection.
In some optional embodiments of the present application, the at least two different camera angles include a first camera angle and a second camera angle, and the shot-splitting unit is configured to: render the three-dimensional virtual model from the first camera angle to obtain a first virtual image; render the three-dimensional virtual model from the second camera angle to obtain a second virtual image; and display an image sequence formed from the first virtual image and the second virtual image.
In some optional embodiments of the present application, the shot-splitting unit is configured to translate or rotate the three-dimensional virtual model under the first camera angle to obtain the three-dimensional virtual model under the second camera angle, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second camera angle.
In some optional embodiments of the present application, the shot-splitting unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
In some optional embodiments of the present application, the device further includes a beat detection unit and a stage special effect generation unit. The beat detection unit is configured to perform beat detection on the background music to obtain a beat collection of the background music, where the beat collection includes multiple beats and each of the multiple beats corresponds to a stage special effect; the stage special effect generation unit is configured to add the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, and a memory. The memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to communicate with other devices under the control of the processor; when the processor executes the instructions, the electronic device implements any one of the methods of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program being executed by hardware to implement any one of the methods of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product; when the computer program product is read and executed by a computer, any one of the methods of the first aspect is executed.
Brief description of the drawings
To describe the technical solutions in the embodiments of this application or in the background art more clearly, the following briefly introduces the drawings needed in the description of the embodiments. Obviously, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of a possible three-dimensional virtual model provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a method for producing a multiple camera-angle effect provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of an interpolation curve provided by an embodiment of the present application;
Fig. 5 is a schematic flowchart of a specific embodiment provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a shot-splitting rule provided by an embodiment of the present application;
Fig. 7A is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7B is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7C is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 7D is an effect diagram of a possible virtual image provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a device for producing a multiple camera-angle effect provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed description
The terms used in the embodiments of the present application are intended only to explain the specific embodiments of the present application, and are not intended to limit the present application.
The method, device, and related products for producing a multiple camera-angle effect provided by the embodiments of the present application can be applied in many fields such as social networking, entertainment, and education; for example, they can be used for virtual live streaming, for social interaction in virtual communities, for holding virtual concerts, or for classroom teaching. To facilitate understanding of the embodiments of the present application, the following takes virtual live streaming as an example to describe a specific application scenario in detail.
Virtual live streaming is a way of broadcasting on a live platform with a virtual character in place of a real anchor. Because virtual characters are highly expressive and well suited to the communication environment of social networks, the virtual live streaming industry is developing rapidly. During a virtual live stream, computer technologies such as facial expression capture, motion capture, and voice processing are typically used to map the real anchor's facial expressions and movements onto a virtual character model, enabling the audience to interact with the virtual anchor on video or social networking sites.
To save live streaming and post-production costs, users usually broadcast directly with terminal devices such as mobile phones and tablet computers. Referring to Fig. 1, a schematic diagram of a specific application scenario provided by an embodiment of the present application, in the live stream shown in Fig. 1 a camera device 110 films the real anchor and transmits the captured real person images over the network to a server 120 for processing; the server 120 then sends the generated virtual images to user terminals 130, so that different viewers watch the entire live stream through their corresponding user terminals 130.
It can be seen that although this form of virtual live streaming is inexpensive, only a single camera device 110 films the real anchor, so the pose of the generated virtual anchor depends on the relative position between the camera device 110 and the anchor. In other words, viewers can only see the virtual character from one specific viewpoint, determined by that relative position, and the resulting broadcast is often unsatisfactory. For example, the virtual anchor's movements may look stiff, shot transitions may be jerky, and the picture may be monotonous, causing visual fatigue and preventing viewers from feeling immersed.
Similar problems arise in other application scenarios. In live teaching, for example, a teacher delivers lessons online, but such teaching is often dull: the teacher in the video cannot know in real time how well students have grasped the material, and students see only the teacher or the lecture handouts from a single viewpoint, which easily tires them, so video teaching is far less effective than in-person teaching. As another example, when weather or venue constraints prevent a concert from being held as scheduled, a singer may hold a virtual concert in a recording studio to simulate a real one; this usually requires setting up multiple cameras to film the singer, which is complicated and costly, and shooting with multiple cameras produces footage from multiple shots, raising the problem of unsmooth shot switching, so that users cannot adapt to the visual jump caused by cutting between different shots.
To solve the problems of single-viewpoint footage and unsmooth shot switching that frequently occur in the above application scenarios, an embodiment of the present application provides a method for producing a multiple camera-angle effect. The method generates a three-dimensional virtual model from captured real images, obtains multiple different camera angles from the background music or from the real person's actions, and then renders the three-dimensional virtual model from those camera angles to obtain virtual images corresponding to each of them, thereby simulating the effect of multiple virtual cameras filming the model in the virtual scene and improving the viewing experience. In addition, the method analyzes the beats of the background music and adds corresponding stage special effects to the three-dimensional virtual model according to the beat information, presenting different stage effects to the audience and further enhancing the viewing experience.
The following first explains the specific process of generating a three-dimensional virtual model from real images in an embodiment of the present application.
In the embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model located in a three-dimensional virtual scene. Taking Fig. 2 as an example, which shows a schematic diagram of a possible three-dimensional virtual model, the three-dimensional virtual character model can be seen holding both hands up to its chest; to highlight the contrast, the upper left corner of Fig. 2 also shows the real image captured by the device for producing the multiple camera-angle effect, in which the real person likewise holds both hands up to the chest. In other words, the three-dimensional virtual character model is consistent with the real person's action. It can be understood that Fig. 2 is merely an example. In practical applications, the captured real image may be three-dimensional or two-dimensional; it may contain one person or several; the real person's action may be raising both hands to the chest, lifting the left foot, or any other action; and correspondingly, the three-dimensional virtual model generated from the real person image may contain one or more three-dimensional virtual character models whose actions may likewise be raising both hands to the chest, lifting the left foot, or any other action. No specific limitation is imposed here.
In the embodiments of the present application, the split-mirror effect realization device photographs a real person to obtain multiple frames of real images I_1, I_2, …, I_n, and performs feature extraction on the real images I_1, I_2, …, I_n in chronological order, thereby obtaining multiple corresponding three-dimensional virtual models M_1, M_2, …, M_n, where n is a positive integer. There is a one-to-one correspondence between the real images I_1, I_2, …, I_n and the three-dimensional virtual models M_1, M_2, …, M_n; that is, each frame of real image is used to generate one three-dimensional virtual model. Exemplarily, taking the generation of the three-dimensional virtual model M_i from the real image I_i as an example, a three-dimensional virtual model can be obtained as follows:
Step 1: the split-mirror effect realization device obtains the real image I_i.
Here, the real image I_i includes a real person image, and i is a positive integer with 1 ≤ i ≤ n.
Step 2: the split-mirror effect realization device performs feature extraction on the real person image in the real image I_i to obtain feature information, where the feature information includes action information of the real person.
Here, obtaining the real image includes: obtaining a video stream, and obtaining at least two frames of real images from at least two frames of the video stream. Correspondingly, performing feature extraction on the real person image to obtain feature information includes: performing feature extraction on each frame of the real person image separately to obtain the corresponding feature information.
It can be understood that the feature information is used to control the posture of the three-dimensional virtual character model. The action information in the feature information includes facial expression features and body action features. Facial expression features describe the various emotional states of the person, for example happiness, sadness, surprise, fear, anger, or disgust; body action features describe the movement state of the real person, for example raising the left hand, lifting the right foot, or jumping. In addition, the feature information may also include person information, where the person information includes multiple human-body keypoints of the real person and their corresponding position information. The human-body keypoints include face keypoints and skeleton keypoints, and the position features include the position coordinates of the real person's human-body keypoints.
Optionally, the split-mirror effect realization device performs image segmentation on the real image I_i to extract the real person image in the real image I_i, and then performs keypoint detection on the extracted real person image to obtain the above-mentioned multiple human-body keypoints and their position information, where the human-body keypoints include face keypoints and skeleton keypoints and may be located in, among others, the head, neck, shoulder, spine, waist, hip, wrist, arm, knee, leg, ankle, and sole regions of the human body. By analyzing the face keypoints and their position information, the facial expression features of the real person in the real image I_i are obtained; by analyzing the skeleton keypoints and their position information, the skeleton features of the real person in the real image I_i are obtained, from which the body action features of the real person are derived.
Optionally, the split-mirror effect realization device inputs the real image I_i into a neural network for feature extraction, and after computation through multiple convolutional layers, the above-mentioned human-body keypoint information is extracted. The neural network is obtained through extensive training and may be a Convolutional Neural Network (CNN), a Back Propagation Neural Network (BPNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), or the like, which is not specifically limited here. It should be noted that the extraction of the above human-body features may be performed within a single neural network or across different neural networks. For example, the split-mirror effect realization device may use a CNN to extract face keypoints and obtain facial expression features, and may use a BPNN to extract skeleton keypoints and obtain skeleton features and body action features; this is not specifically limited here. In addition, the above examples of feature information used to drive the three-dimensional virtual character model are merely illustrative; other feature information may also be included in practical applications, which is not specifically limited here.
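As a toy illustration of how body action features might be read off detected keypoint positions (the keypoint names, coordinate values, and the wrist-above-shoulder rule below are assumptions made for the sketch, not part of the embodiments):

```python
def limb_action(keypoints):
    # keypoints: name -> (x, y) in image coordinates, y grows downward.
    # A wrist above its shoulder is read as a raised hand; this simple
    # geometric rule stands in for the keypoint analysis described above.
    actions = []
    if keypoints["left_wrist"][1] < keypoints["left_shoulder"][1]:
        actions.append("raise_left_hand")
    if keypoints["right_wrist"][1] < keypoints["right_shoulder"][1]:
        actions.append("raise_right_hand")
    return actions

# Example frame: left wrist above the shoulder, right wrist below it.
kp = {"left_wrist": (0.4, 0.2), "left_shoulder": (0.35, 0.4),
      "right_wrist": (0.6, 0.5), "right_shoulder": (0.65, 0.4)}
```

In practice these positions would come from the neural-network keypoint detector; here they are hard-coded to keep the sketch self-contained.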
Step 3: the split-mirror effect realization device generates the three-dimensional virtual character model in the three-dimensional virtual model M_i according to the feature information, so that the three-dimensional virtual character model in the three-dimensional virtual model M_i corresponds to the action information of the real person in the real image I_i.
Optionally, the split-mirror effect realization device uses the above feature information to establish a mapping relationship between the human-body keypoints of the real person and the human-body keypoints of the virtual character model, and then controls the expression and posture of the virtual character model according to this mapping relationship, so that the facial expressions and body actions of the virtual character model are consistent with those of the real person.
Optionally, the split-mirror effect realization device labels the human-body keypoints of the real person with serial numbers to obtain label information for the real person's keypoints, where the keypoints and the labels are in one-to-one correspondence, and then labels the keypoints of the virtual character model according to the label information of the real person's keypoints. For example, if the real person's left wrist is labeled No. 1, the left wrist of the three-dimensional virtual character model is also labeled No. 1; if the real person's left arm is labeled No. 2, the left arm of the three-dimensional virtual character model is also labeled No. 2; and so on. The device then matches the keypoint label information of the real person with that of the three-dimensional virtual character model and maps the position information of the real person's keypoints onto the corresponding keypoints of the three-dimensional virtual character model, so that the three-dimensional virtual character model can reproduce the real person's facial expressions and body actions.
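The serial-number labeling and matching scheme above can be sketched as follows; the label numbers and coordinate values are illustrative assumptions:

```python
def drive_model(real_positions, model):
    # real_positions: label number -> (x, y, z) of the real person's
    # keypoints. model: label number -> position of the corresponding
    # virtual-character keypoint. Labels match one-to-one, so positions
    # are copied across wherever the label numbers agree.
    for label, pos in real_positions.items():
        if label in model:       # matched by identical label number
            model[label] = pos   # map the real position onto the model
    return model

# Label 1 = left wrist, label 2 = left arm (illustrative numbering).
model = {1: (0.0, 0.0, 0.0), 2: (0.0, 0.0, 0.0)}
driven = drive_model({1: (0.1, 0.8, 0.0), 2: (0.2, 0.6, 0.0)}, model)
```

A real implementation would drive a rigged skeleton rather than raw positions, but the label-matching step is the same.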
In the embodiments of the present application, the real image I_i also includes a real scene image, and the three-dimensional virtual model M_i also includes a three-dimensional virtual scene model. The above method of generating the three-dimensional virtual model M_i from the real image I_i further includes: constructing the three-dimensional virtual scene in the three-dimensional virtual model M_i according to the real scene image in the real image I_i.
Optionally, the split-mirror effect realization device first performs image segmentation on the real image I_i to obtain the real scene image in the real image I_i, then extracts scene features from the real scene image, for example the position, shape, and size features of objects in the real scene, and constructs the three-dimensional virtual scene model in the three-dimensional virtual model M_i according to these scene features, so that the three-dimensional virtual scene model in the three-dimensional virtual model M_i can closely reproduce the real scene image in the real image I_i.
For brevity, the above only describes the process of generating the three-dimensional virtual model M_i from the real image I_i. The generation of the three-dimensional virtual models M_1, M_2, …, M_{i-1}, M_{i+1}, …, M_n is similar to that of M_i and is not repeated here.
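The per-frame generation described above (one real image I_i producing one virtual model M_i, in chronological order) can be sketched as a simple loop. This is an illustrative sketch only; `extract_features` and `build_model` are hypothetical stand-ins for the feature-extraction and model-generation steps of the embodiments, not the actual implementation.

```python
def extract_features(image):
    # Placeholder: return the action information derived from one frame.
    return {"action": image["action"]}

def build_model(features):
    # Placeholder: a 3D virtual model whose pose mirrors the features.
    return {"pose": features["action"]}

def images_to_models(real_images):
    # One model per frame (the one-to-one correspondence I_i -> M_i),
    # processed in the order the frames were captured.
    return [build_model(extract_features(img)) for img in real_images]

frames = [{"action": "hands_to_chest"}, {"action": "raise_left_foot"}]
models = images_to_models(frames)
```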
It should be noted that the three-dimensional virtual scene model in the three-dimensional virtual model may be constructed from the real scene image in the real image, or it may be a user-defined three-dimensional virtual scene model; likewise, the facial appearance of the three-dimensional virtual character model may be constructed from the facial features of the real person image in the real image, or it may be a user-defined appearance, which is not specifically limited here.
Next, the rendering of each of the three-dimensional virtual models M_1, M_2, …, M_n from multiple different lens angles of view, so that viewers can see virtual images of the same three-dimensional virtual model from different lens angles of view, is described in detail. Taking the three-dimensional virtual model M_i generated from the real image I_i as an example, the three-dimensional virtual model M_i is rendered with k different lenses to obtain virtual images Q_i1, Q_i2, …, Q_ik under k different lens angles of view, where k ≥ 2, thereby achieving the effect of shot switching. The specific process can be described as follows:
As shown in Figure 3, Figure 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application. The method of this embodiment includes, but is not limited to, the following steps:
S101: the split-mirror effect realization device obtains a three-dimensional virtual model.
In the embodiments of the present application, the three-dimensional virtual model is used to simulate a real person and a real scene. The three-dimensional virtual model includes a three-dimensional virtual character model located in a three-dimensional virtual scene model, and the three-dimensional virtual model is generated from a real image. The three-dimensional virtual character model is generated from the real person image included in the real image; it is used to simulate the real person in the real image, and its actions correspond to the actions of the real person. The three-dimensional virtual scene model may be constructed from the real scene image included in the real image, or it may be a preset three-dimensional virtual scene model. When the three-dimensional virtual scene model is constructed from the real scene image, it can be used to simulate the real scene in the real image.
S102: the split-mirror effect realization device obtains at least two different lens angles of view.
In the embodiments of the present application, the lens angle of view indicates the position of a camera relative to the photographed object when shooting. For example, a camera shooting from directly above an object obtains a top view of the object. Assuming the lens angle of view corresponding to a camera directly above the object is V, the image captured by that camera shows the object under lens angle of view V, that is, the top view of the object.
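A minimal sketch of representing a lens angle of view as a camera position relative to the subject, assuming named views placed on a unit sphere around the subject (the view names and angles are illustrative assumptions, not values from the embodiments):

```python
import math

def camera_pose(angle_name):
    # Hypothetical mapping from a named lens angle of view to a camera
    # position expressed as (azimuth, elevation) in degrees.
    poses = {
        "front": (0.0, 0.0),    # in front of the subject
        "top":   (0.0, 90.0),   # directly above -> top view
        "left":  (90.0, 0.0),   # to the subject's left -> left view
    }
    az, el = poses[angle_name]
    # Convert to a Cartesian camera position at unit distance.
    x = math.cos(math.radians(el)) * math.sin(math.radians(az))
    y = math.sin(math.radians(el))
    z = math.cos(math.radians(el)) * math.cos(math.radians(az))
    return (round(x, 6), round(y, 6), round(z, 6))
```

With the subject at the origin, `camera_pose("top")` places the camera straight overhead, matching the top-view example above.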
In some optional embodiments, obtaining at least two different lens angles of view includes: obtaining at least two different lens angles of view from at least two frames of real images. The real images may be captured by real cameras, and a real camera may occupy multiple positions relative to the real person; multiple real images captured by multiple real cameras at different positions show the real person from multiple different lens angles of view.
In other optional embodiments, obtaining at least two different lens angles of view includes: obtaining at least two different lens angles of view from the action information respectively corresponding to at least two frames of real images. The action information includes the body actions and facial expressions of the real person in the real images. Body actions come in many kinds, for example one or more of raising the right hand, lifting the left foot, and jumping; facial expressions likewise come in many kinds, for example one or more of smiling, crying, and anger. The examples of body actions and facial expressions in this embodiment are not limited to the above description.
In the embodiments of the present application, one action or a combination of actions corresponds to one lens angle of view. For example, when the real person smiles and jumps, the corresponding lens angle of view is V_1; when the real person only jumps, the corresponding lens angle of view may be V_1, V_2, or another view; likewise, when the real person only smiles, the corresponding lens angle of view may be V_1, V_2, V_3, or another view.
In still other optional embodiments, obtaining at least two different lens angles of view includes: obtaining background music; determining a time collection corresponding to the background music, where the time collection includes at least two time segments; and obtaining the lens angle of view corresponding to each time segment in the time collection. The real image may be one or more frames of a video stream, which includes image information and background music information, with one frame of image corresponding to one frame of music. The background music information includes the background music and its corresponding time collection; the time collection includes at least two time segments, and each time segment corresponds to one lens angle of view.
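The time-collection lookup described above can be sketched as follows, assuming illustrative segment boundaries (in seconds) and view names:

```python
# Each entry: ((segment start, segment end), lens angle of view).
# The boundaries and view names below are assumptions for the sketch.
TIME_COLLECTION = [
    ((0, 60), "V1"),     # seconds 0-60  -> lens angle of view V1
    ((60, 120), "V2"),   # seconds 60-120 -> lens angle of view V2
]

def view_for_time(t):
    # Return the lens angle of view whose time segment contains t.
    for (start, end), view in TIME_COLLECTION:
        if start <= t < end:
            return view
    return None
```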
S103: the split-mirror effect realization device renders the three-dimensional virtual model with at least two different lens angles of view to obtain the virtual images respectively corresponding to the at least two different lens angles of view.
In the embodiments of the present application, the at least two different lens angles of view include a first lens angle of view and a second lens angle of view, and rendering the three-dimensional virtual model with the at least two different lens angles of view to obtain the respectively corresponding virtual images includes: S1031, rendering the three-dimensional virtual model with the first lens angle of view to obtain a first virtual image; S1032, rendering the three-dimensional virtual model with the second lens angle of view to obtain a second virtual image.
In the embodiments of the present application, rendering the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model as seen from the first lens angle of view to obtain the three-dimensional virtual model as seen from the second lens angle of view, and obtaining the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
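A minimal sketch of obtaining the second lens angle of view by rotation: rotating every vertex of the three-dimensional model about the vertical axis is geometrically equivalent to moving the virtual camera around the model. The axis choice and vertex values are assumptions for illustration.

```python
import math

def rotate_y(point, degrees):
    # Rotate one vertex about the vertical (y) axis.
    x, y, z = point
    a = math.radians(degrees)
    return (round(x * math.cos(a) + z * math.sin(a), 6),
            y,
            round(-x * math.sin(a) + z * math.cos(a), 6))

def rotate_model(vertices, degrees):
    # Rotating the whole model is equivalent to orbiting the camera.
    return [rotate_y(v, degrees) for v in vertices]

# Rotating 90 degrees carries a point on the +z axis onto the +x axis,
# i.e. what the camera saw as the front view becomes a side view.
side = rotate_model([(0.0, 1.0, 1.0)], 90)
```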
It can be understood that the first lens angle of view may be obtained from the real image, from the action information corresponding to the real image, or from the time collection corresponding to the background music; likewise, the second lens angle of view may be obtained from the real image, from the action information corresponding to the real image, or from the time collection corresponding to the background music, which is not specifically limited in the embodiments of the present application.
S1033: display the image sequence formed from the first virtual image and the second virtual image.
In the embodiments of the present application, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
Optionally, a frames of virtual images P_1, P_2, …, P_a are inserted between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where the time points at which the a frames of virtual images P_1, P_2, …, P_a are inserted are b_1, b_2, …, b_a, the slope of the curve formed by the time points b_1, b_2, …, b_a first monotonically decreases and then monotonically increases, and a is a positive integer.
As an example, Figure 4 shows a schematic diagram of an interpolation curve. As shown in Figure 4, the split-mirror effect realization device obtains the first virtual image at minute 1 and the second virtual image at minute 2; the first virtual image presents the front view of the three-dimensional virtual model, and the second virtual image presents its left view. To give viewers a fluent shot-switching picture, the device inserts multiple time points between minute 1 and minute 2 and inserts one frame of virtual image at each time point: for example, virtual image P_1 at minute 1.4, virtual image P_2 at minute 1.65, virtual image P_3 at minute 1.8, and virtual image P_4 at minute 1.85, where P_1 presents the three-dimensional virtual model rotated 30 degrees to the left, P_2 presents it rotated 50 degrees to the left, and P_3 and P_4 both present it rotated 90 degrees to the left. Viewers thus see the whole process of the three-dimensional virtual model gradually changing from the front view to the left view, rather than just two single images (the front view and the left view of the model), and can therefore adapt to the change in parallax when switching from the front view to the left view.
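One way to choose insertion time points b_1, …, b_a whose curve has a slope that first monotonically decreases and then monotonically increases is sketched below. The specific easing function u + 0.15·sin(2πu) is an assumption chosen purely because it has that slope property while remaining monotone; the embodiments do not prescribe a particular function.

```python
import math

def insertion_times(t_start, t_end, a):
    # Time points for the `a` inserted frames. Successive gaps shrink
    # and then grow, so the slope of the time-point curve first
    # decreases and then increases, as in the example of Figure 4.
    times = []
    for k in range(1, a + 1):
        u = k / (a + 1)                             # uniform parameter
        s = u + 0.15 * math.sin(2 * math.pi * u)    # monotone easing
        times.append(t_start + (t_end - t_start) * s)
    return times

# Four frames inserted between minute 1 and minute 2:
ts = insertion_times(1.0, 2.0, 4)
gaps = [t2 - t1 for t1, t2 in zip([1.0] + ts, ts + [2.0])]
```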
In some optional embodiments of the present application, the use of stage special effects mentioned in the embodiments of the present application to render the three-dimensional virtual model, thereby presenting different stage effects to viewers, is described in detail and specifically includes the following steps:
Step 1: the split-mirror effect realization device performs beat detection on the background music to obtain the beat collection of the background music.
The beat collection includes multiple beats, and each of the multiple beats corresponds to one stage special effect. Optionally, the split-mirror effect realization device may use shaders and particle effects to render the three-dimensional virtual model: for example, shaders may be used to realize the rotating-spotlight effect behind the virtual stage and the sound-wave effect of the virtual stage itself, while particle effects are used to add visual effects such as sparks, falling leaves, or meteors to the three-dimensional virtual model.
Step 2: the split-mirror effect realization device adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
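The beat-to-effect mapping of steps 1 and 2 can be sketched as follows; the beat names and effect names are illustrative assumptions, not values from the embodiments:

```python
# Hypothetical mapping: each detected beat type triggers one effect.
BEAT_EFFECTS = {"downbeat": "spotlight_sweep", "offbeat": "spark_particles"}

def apply_stage_effects(model, beats):
    # model: dict representing the 3D virtual model. Effects accumulate
    # in a list so that each beat in the collection contributes one
    # stage special effect.
    model = dict(model, effects=list(model.get("effects", [])))
    for beat in beats:
        model["effects"].append(BEAT_EFFECTS[beat])
    return model

staged = apply_stage_effects({"pose": "hands_to_chest"},
                             ["downbeat", "offbeat"])
```

A real implementation would attach shader parameters or particle emitters rather than strings, but the per-beat lookup is the same.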
The above method generates a three-dimensional virtual model from the captured real images and switches the lens angle of view according to the captured real images, the background music, and the actions of the real person, thereby simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewing experience. In addition, the method analyzes the beats of the background music and adds the corresponding stage special effects to the virtual images according to the beat information, presenting different stage effects to viewers and further enhancing the viewing experience.
To facilitate understanding of the method for realizing the split-mirror effect in the above embodiments, the method of the embodiments of the present application is described in detail below by way of example.
Please refer to Figure 5, which shows a schematic flowchart of a specific embodiment.
S201: the split-mirror effect realization device obtains the real image and the background music, and obtains the first lens angle of view from the real image. When the background music starts, the real person moves to the background music, and a real camera photographs the real person to obtain the real image.
S202: the split-mirror effect realization device generates the three-dimensional virtual model from the real image, where the three-dimensional virtual model is obtained by the split-mirror effect realization device at a first moment.
S203: the split-mirror effect realization device performs beat detection on the background music to obtain the beat collection of the background music, and adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
S204: the split-mirror effect realization device renders the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image corresponding to the first lens angle of view.
S205: the split-mirror effect realization device determines the time collection corresponding to the background music.
The time collection includes multiple time segments, and each of the multiple time segments corresponds to one lens angle of view.
S206: the split-mirror effect realization device determines whether the action information library contains the action information; if the action information library does not contain the action information, S207–S209 are performed, and if it does, S210–S212 are performed. The action information is the action information of the real person in the real image; the action information library includes multiple pieces of action information, each of which corresponds to one lens angle of view.
S207: the split-mirror effect realization device determines, according to the time collection, the second lens angle of view corresponding to the time segment in which the first moment falls.
S208: the split-mirror effect realization device renders the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image corresponding to the second lens angle of view.
S209: the split-mirror effect realization device displays the image sequence formed from the first virtual image and the second virtual image.
S210: the split-mirror effect realization device determines, according to the action information, the third lens angle of view corresponding to the action information.
S211: the split-mirror effect realization device renders the three-dimensional virtual model with the third lens angle of view to obtain the third virtual image corresponding to the third lens angle of view.
S212: the split-mirror effect realization device displays the image sequence formed from the first virtual image and the third virtual image.
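The shot-selection branch of S206–S212 (use the action lens angle of view when the action is in the action information library, otherwise fall back to the lens angle of view for the current time segment) can be sketched as follows; the library contents, segment boundaries, and view names are illustrative assumptions:

```python
ACTION_VIEWS = {"raise_left_foot": "V3"}          # action -> lens view
TIME_VIEWS = [((0, 60), "V1"), ((60, 120), "V2")] # (seconds, lens view)

def choose_view(action, t):
    # S206/S210: an action found in the library takes priority.
    if action in ACTION_VIEWS:
        return ACTION_VIEWS[action]
    # S207: otherwise use the time segment containing the first moment t.
    for (start, end), view in TIME_VIEWS:
        if start <= t < end:
            return view
    return None
```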
Following the method described with reference to Figure 5, an embodiment of the present application provides the schematic diagram of split-mirror rules shown in Figure 6. Performing split-mirror processing and stage-special-effect processing on the virtual images according to the split-mirror rules shown in Figure 6 yields the effect diagrams of the four virtual images shown in Figures 7A–7D.
As shown in Figure 7A, at minute 1 the split-mirror effect realization device photographs the real person under lens angle of view V_1 to obtain the real image I_1 (shown in the upper left corner of Figure 7A), and then obtains the three-dimensional virtual model M_1 from the real image I_1. The device performs beat detection on the background music, determines that the beat corresponding to minute 1 is B_1, obtains the stage special effect W_1 at minute 1 from beat B_1, and adds the stage special effect W_1 to the three-dimensional virtual model M_1. The device determines from the preset shot script that the lens angle of view corresponding to minute 1 (the time lens angle of view, for short) is V_1. The device detects that the real person's action at minute 1 is raising both hands to the chest, and this action is not in the action information library, that is, there is no lens angle of view corresponding to the action (the action lens angle of view, for short); the split-mirror effect realization device therefore displays the virtual image shown in Figure 7A, where the virtual image in Figure 7A and the real image I_1 share the same lens angle of view.
As shown in FIG. 7B, at the 2nd minute, the split-mirror effect realization device captures the real person from lens angle of view V1 to obtain real image I2 (shown in the upper-left corner of FIG. 7B), and then obtains three-dimensional virtual model M2 from real image I2. The device performs beat detection on the background music, determines beat B2 corresponding to the 2nd minute, obtains stage special effect W2 for the 2nd minute according to beat B2, and adds stage special effect W2 to three-dimensional virtual model M2. The device determines, according to the preset lens script, that the lens angle of view corresponding to the 2nd minute (the time lens angle of view) is V2. The device detects that the real person's action at the 2nd minute is raising both hands upward, and that this action is not in the action information library, i.e., no action lens angle of view exists; the device therefore rotates three-dimensional virtual model M2 toward the upper left to obtain the virtual image corresponding to lens angle of view V2. It can be seen that, with stage special effect W2 added to three-dimensional virtual model M2, the virtual image shown in FIG. 7B has a lighting effect that the virtual image in FIG. 7A does not.
As shown in FIG. 7C, at the 3rd minute, the split-mirror effect realization device captures the real person from lens angle of view V1 to obtain real image I3 (shown in the upper-left corner of FIG. 7C), and then obtains three-dimensional virtual model M3 from real image I3. The device performs beat detection on the background music, determines beat B3 corresponding to the 3rd minute, obtains stage special effect W3 for the 3rd minute according to beat B3, and adds stage special effect W3 to three-dimensional virtual model M3. The device determines, according to the preset lens script, that the lens angle of view corresponding to the 3rd minute (the time lens angle of view) is V2. The device detects that the real person's action at the 3rd minute is lifting the left foot, and that the lens angle of view corresponding to this action (the action lens angle of view) is V3; the device therefore rotates three-dimensional virtual model M3 to the left to obtain the virtual image corresponding to lens angle of view V3. It can be seen that, with stage special effect W3 added to three-dimensional virtual model M3, the lighting effect in the virtual image of FIG. 7C differs from that in FIG. 7B, and the virtual image of FIG. 7C additionally presents a sound-wave effect.
As shown in FIG. 7D, at the 4th minute, the split-mirror effect realization device captures the real person from lens angle of view V1 to obtain real image I4 (shown in the upper-left corner of FIG. 7D), and then obtains three-dimensional virtual model M4 from real image I4. The device performs beat detection on the background music, determines beat B4 corresponding to the 4th minute, obtains stage special effect W4 for the 4th minute according to beat B4, and adds stage special effect W4 to three-dimensional virtual model M4. The device determines, according to the preset lens script, that the lens angle of view corresponding to the 4th minute (the time lens angle of view) is V4. The device detects that the real person's action at the 4th minute is standing, and that the lens angle of view corresponding to this action (the action lens angle of view) is V4; the device therefore rotates three-dimensional virtual model M4 to the right to obtain the virtual image corresponding to lens angle of view V4. It can be seen that, with stage special effect W4 added to three-dimensional virtual model M4, the stage effect in the virtual image of FIG. 7D differs from that in FIG. 7C.
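The per-minute behavior illustrated in FIGS. 7A-7D follows a simple priority rule: when the detected action exists in the action information library, its action lens angle of view is used; otherwise the time lens angle of view from the preset lens script is used. A minimal sketch of this rule follows; the action names, library contents, and angle identifiers are hypothetical, chosen only to mirror the four minutes of the example.

```python
# Minimal sketch of the lens-angle selection rule illustrated in FIGS. 7A-7D.
# The action names, library contents, and angle identifiers are hypothetical.

ACTION_LIBRARY = {          # action -> action lens angle of view
    "lift_left_foot": "V3",
    "stand": "V4",
}

LENS_SCRIPT = {             # minute -> time lens angle of view (preset lens script)
    1: "V1", 2: "V2", 3: "V2", 4: "V4",
}

def select_lens_angle(minute: int, detected_action: str) -> str:
    """Return the lens angle of view used to render the virtual image."""
    # An action lens angle of view, if one exists, overrides the time lens angle.
    if detected_action in ACTION_LIBRARY:
        return ACTION_LIBRARY[detected_action]
    return LENS_SCRIPT[minute]

# Reproducing the four minutes of the example:
assert select_lens_angle(1, "hands_to_chest") == "V1"  # FIG. 7A: no action angle
assert select_lens_angle(2, "raise_hands") == "V2"     # FIG. 7B: no action angle
assert select_lens_angle(3, "lift_left_foot") == "V3"  # FIG. 7C: action angle V3
assert select_lens_angle(4, "stand") == "V4"           # FIG. 7D: action angle V4
```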
The split-mirror effect realization device provided in the embodiments of the present application may be a software device or a hardware device. When it is a software device, it may be deployed alone on a computing device in a cloud environment, or deployed alone on a terminal device. When it is a hardware device, its internal unit modules may likewise be divided in multiple ways: each module may be a software module, a hardware module, or partly software and partly hardware, which the present application does not limit. FIG. 8 shows one exemplary division. As shown in FIG. 8, a split-mirror effect realization apparatus 800 provided in an embodiment of the present application includes: an obtaining unit 810, configured to obtain a three-dimensional virtual model; and a split-mirror unit 820, configured to render the three-dimensional virtual model from at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view.
In some optional embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the apparatus further includes a feature extraction unit 830 and a three-dimensional virtual model generation unit 840, wherein:
the obtaining unit 810 is further configured to obtain a real image before the three-dimensional virtual model is obtained, the real image including a real-person image; the feature extraction unit 830 is configured to perform feature extraction on the real-person image to obtain feature information, the feature information including action information of the real person; and the three-dimensional virtual model generation unit 840 is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
In some optional embodiments of the present application, the obtaining unit is configured to obtain a video stream and obtain at least two frames of real images from at least two frames of images in the video stream; the feature extraction unit 830 is configured to perform feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
In some optional embodiments of the present application, the real image further includes a real-scene image, and the three-dimensional virtual model further includes the three-dimensional virtual scene model; the apparatus further includes a three-dimensional virtual scene image construction unit 850, configured to construct a three-dimensional virtual scene image from the real-scene image before the obtaining unit obtains the three-dimensional virtual model.
In some optional embodiments of the present application, the apparatus further includes a lens angle-of-view obtaining unit 860, configured to obtain the at least two different lens angles of view. Specifically, in some optional implementations, the lens angle-of-view obtaining unit 860 is configured to obtain the at least two different lens angles of view from at least two frames of real images.
In some optional embodiments of the present application, the lens angle-of-view obtaining unit 860 is configured to obtain the at least two different lens angles of view from the action information respectively corresponding to the at least two frames of real images.
In some optional embodiments of the present application, the lens angle-of-view obtaining unit 860 is configured to: obtain background music; determine a time collection corresponding to the background music, the time collection including at least two time periods; and obtain the lens angle of view corresponding to each time period in the time collection.
In some optional embodiments of the present application, the at least two different lens angles of view include a first lens angle of view and a second lens angle of view, and the split-mirror unit 820 is configured to: render the three-dimensional virtual model from the first lens angle of view to obtain a first virtual image; render the three-dimensional virtual model from the second lens angle of view to obtain a second virtual image; and display an image sequence formed from the first virtual image and the second virtual image.
In some optional embodiments of the present application, the split-mirror unit 820 is configured to translate or rotate the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view, and to obtain the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
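Rotating the model under the first lens angle of view is geometrically equivalent to moving the virtual camera to the second lens angle of view. A minimal sketch of the rotation step follows, under the assumption that the model is represented as a list of vertices and the rotation is about the vertical axis; a real renderer would instead apply this transform in its model-view matrix.

```python
import math

def rotate_model_y(vertices, angle_deg):
    """Rotate model vertices about the vertical (y) axis.

    Rotating the model under the first lens angle of view by this angle is
    equivalent to viewing it from a camera moved to the second lens angle.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in vertices]

# A toy "model" of two vertices, rotated 90 degrees to simulate a side view.
model = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
side_view = rotate_model_y(model, 90.0)
```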
In some optional embodiments of the present application, the split-mirror unit 820 is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
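The insertion of a intermediate frames can be sketched, under the simplifying assumption that a lens angle of view is parameterized by a single rotation angle, as linear interpolation between the two angles; each interpolated angle would then be rendered to produce one inserted virtual image.

```python
def interpolated_angles(v1_deg, v2_deg, a):
    """Lens angles of the a intermediate frames inserted between the first
    and second virtual images, so that the switch appears smooth."""
    step = (v2_deg - v1_deg) / (a + 1)
    return [v1_deg + step * (i + 1) for i in range(a)]

# Three intermediate frames between a 0-degree and a 40-degree lens angle:
print(interpolated_angles(0.0, 40.0, 3))  # -> [10.0, 20.0, 30.0]
```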
In some optional embodiments of the present application, the apparatus further includes: a beat detection unit 870, configured to perform beat detection on the background music to obtain a beat collection of the background music, the beat collection including multiple beats, each of which corresponds to a stage special effect; and a stage special effect generation unit 880, configured to add the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
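The embodiments do not specify how the beat detection unit 870 detects beats. As one illustrative, highly simplified approach (an assumption, not the method claimed), beats can be located as short-time energy onsets in the audio samples:

```python
import math

def detect_beats(samples, sr, frame_len=1024, threshold=0.5):
    """Very simplified beat detection: a beat is reported at each frame whose
    short-time energy rises above a fraction of the peak energy while the
    previous frame was below it (an energy onset)."""
    energies = []
    for start in range(0, len(samples) - frame_len, frame_len):
        frame = samples[start:start + frame_len]
        energies.append(sum(x * x for x in frame) / frame_len)
    peak = max(energies) or 1.0
    beats, prev = [], 0.0
    for i, e in enumerate(energies):
        if e / peak > threshold and prev / peak <= threshold:
            beats.append(i * frame_len / sr)  # beat time in seconds
        prev = e
    return beats

# Synthetic "music": 200 ms bursts of a 440 Hz tone once per second.
sr = 8000
samples = [0.0] * (sr * 3)
for burst_start in (0, sr, 2 * sr):
    for n in range(sr // 5):
        samples[burst_start + n] = math.sin(2 * math.pi * 440 * n / sr)

beats = detect_beats(samples, sr)  # roughly [0.0, 1.0, 2.0]
```

In practice the device would use a proper onset- or tempo-tracking algorithm; this sketch only shows how a beat collection of per-beat timestamps, each mappable to a stage special effect, could be produced.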
The above split-mirror effect realization device generates a three-dimensional virtual model from the captured real images, obtains multiple lens angles of view from the captured real images, the background music, and the real person's actions, and switches the three-dimensional virtual model among these lens angles of view accordingly. This simulates the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene, so that users can see the three-dimensional virtual model from multiple different lens angles of view, improving the audience's viewing experience. In addition, the device analyzes the beats of the background music and, according to the beat information, adds corresponding stage special effects to the three-dimensional virtual model, presenting different stage effects to the audience and further enhancing the live-streaming viewing experience.
Referring to FIG. 9, an embodiment of the present application provides a schematic structural diagram of an electronic device 900; the foregoing split-mirror effect realization device is applied in the electronic device 900. The electronic device 900 includes a processor 910, a communication interface 920, and a memory 930, which may be coupled through a bus 940, wherein:
The processor 910 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device (PLD), a transistor logic device, a hardware component, or any combination thereof. The processor 910 may implement or execute the various exemplary methods described in connection with the disclosure of the present application. Specifically, the processor 910 reads the program code stored in the memory 930 and cooperates with the communication interface 920 to execute some or all of the steps of the method performed by the split-mirror effect realization device in the foregoing embodiments of the present application.
The communication interface 920 may be a wired interface or a wireless interface for communicating with other modules or devices. The wired interface may be an Ethernet interface, a controller area network interface, a Local Interconnect Network (LIN) interface, or a FlexRay interface; the wireless interface may be a cellular network interface, a wireless local area network interface, or the like. Specifically, the communication interface 920 may be connected to an input/output device 950, which may include terminal devices such as a mouse, a keyboard, and a microphone.
The memory 930 may include a volatile memory, such as a random access memory (RAM); the memory 930 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 930 may also include a combination of the above types of memory. The memory 930 may store program code and program data. The program code consists of the code of some or all of the units in the split-mirror effect realization apparatus 800, for example, the code of the obtaining unit 810, the split-mirror unit 820, the feature extraction unit 830, the three-dimensional virtual model generation unit 840, the three-dimensional virtual scene image construction unit 850, the lens angle-of-view obtaining unit 860, the beat detection unit 870, and the stage special effect generation unit 880. The program data is data generated by the split-mirror effect realization apparatus 800 during operation, for example, real image data, three-dimensional virtual model data, lens angle-of-view data, background music data, and virtual image data.
The bus 940 may be a Controller Area Network (CAN) bus or another internal bus that interconnects the systems or devices within a vehicle. The bus 940 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
It should be understood that the electronic device 900 may include more or fewer components than those shown in FIG. 9, or a different arrangement of components.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; the computer program is executed by hardware (for example, a processor) to implement some or all of the steps of the above split-mirror effect realization method.
An embodiment of the present application further provides a computer program product; when the computer program product runs on the above split-mirror effect realization device or electronic device, some or all of the steps of the above split-mirror effect realization method are executed.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a storage disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD). The descriptions of the embodiments each have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may also be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above are merely optional implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any equivalent modification or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (25)

  1. A method for realizing a split-mirror effect, comprising:
    obtaining a three-dimensional virtual model;
    rendering the three-dimensional virtual model from at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view.
  2. The method according to claim 1, wherein the three-dimensional virtual model comprises a three-dimensional virtual character model in a three-dimensional virtual scene model, and before the obtaining of the three-dimensional virtual model, the method further comprises:
    obtaining a real image, wherein the real image comprises a real-person image;
    performing feature extraction on the real-person image to obtain feature information, wherein the feature information comprises action information of the real person;
    generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  3. The method according to claim 2, wherein the obtaining of a real image comprises:
    obtaining a video stream, and obtaining at least two frames of the real image from at least two frames of images in the video stream;
    and the performing of feature extraction on the real-person image to obtain feature information comprises:
    performing feature extraction on each frame of the real-person image separately to obtain the corresponding feature information.
  4. The method according to claim 3, wherein the real image further comprises a real-scene image, and the three-dimensional virtual model further comprises the three-dimensional virtual scene model; before the obtaining of the three-dimensional virtual model, the method further comprises:
    constructing the three-dimensional virtual scene model according to the real-scene image.
  5. The method according to claim 3 or 4, wherein obtaining the at least two different lens angles of view comprises:
    obtaining the at least two different lens angles of view according to the at least two frames of the real image.
  6. The method according to claim 3 or 4, wherein obtaining the at least two different lens angles of view comprises:
    obtaining the at least two different lens angles of view according to the action information respectively corresponding to the at least two frames of the real image.
  7. The method according to claim 3 or 4, wherein obtaining the at least two different lens angles of view comprises:
    obtaining background music;
    determining a time collection corresponding to the background music, wherein the time collection comprises at least two time periods;
    obtaining the lens angle of view corresponding to each time period in the time collection.
  8. The method according to claim 1, wherein the at least two different lens angles of view comprise a first lens angle of view and a second lens angle of view; and the rendering of the three-dimensional virtual model from at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view comprises:
    rendering the three-dimensional virtual model from the first lens angle of view to obtain a first virtual image;
    rendering the three-dimensional virtual model from the second lens angle of view to obtain a second virtual image;
    displaying an image sequence formed from the first virtual image and the second virtual image.
  9. The method according to claim 8, wherein the rendering of the three-dimensional virtual model from the second lens angle of view to obtain a second virtual image comprises:
    translating or rotating the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view;
    obtaining the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
  10. The method according to claim 9, wherein the displaying of the image sequence formed from the first virtual image and the second virtual image comprises:
    inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, wherein a is a positive integer.
  11. The method according to any one of claims 7 to 10, wherein the method further comprises:
    performing beat detection on the background music to obtain a beat collection of the background music, wherein the beat collection comprises multiple beats, and each beat in the multiple beats corresponds to a stage special effect;
    adding the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
  12. An apparatus for producing a multiple camera-angle effect, comprising:
    an acquisition unit, configured to acquire a three-dimensional virtual model;
    a camera-angle splitting unit, configured to render the three-dimensional virtual model from at least two different lens perspectives to obtain virtual images respectively corresponding to the at least two different lens perspectives.
  13. The apparatus according to claim 12, wherein the three-dimensional virtual model comprises a three-dimensional virtual character model within a three-dimensional virtual scene model, and the apparatus further comprises a feature extraction unit and a three-dimensional virtual model generation unit; wherein the acquisition unit is further configured to acquire a real image before acquiring the three-dimensional virtual model, the real image comprising a real-person image;
    the feature extraction unit is configured to perform feature extraction on the real-person image to obtain feature information, the feature information comprising action information of the real person;
    the three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
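Driving the virtual character so its motion corresponds to the real person's, as claim 13 describes, can be sketched as copying extracted action information (here simplified to named joint angles, an assumption) onto the character's pose:

```python
from dataclasses import dataclass, field

@dataclass
class ActionInfo:
    """Action information extracted from a real-person image,
    simplified here to a mapping of joint name -> angle in degrees."""
    joint_angles: dict

@dataclass
class VirtualCharacter:
    pose: dict = field(default_factory=dict)

    def apply(self, action: ActionInfo):
        """Update the character's pose so its motion corresponds to
        the action information extracted from the real person."""
        self.pose.update(action.joint_angles)

extracted = ActionInfo(joint_angles={"left_elbow": 90.0, "right_knee": 45.0})
avatar = VirtualCharacter()
avatar.apply(extracted)
```

Real pipelines estimate 2D/3D keypoints from the image and retarget them onto the character's skeleton; the copy-onto-pose step above is only the final link of that chain.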
  14. The apparatus according to claim 13, wherein the acquisition unit is configured to acquire a video stream and obtain at least two frames of the real image from at least two frames of the video stream;
    the feature extraction unit is configured to perform feature extraction on each frame of the real-person image to obtain the corresponding feature information.
  15. The apparatus according to claim 14, wherein the real image further comprises a real scene image, and the three-dimensional virtual model further comprises the three-dimensional virtual scene model;
    the apparatus further comprises a three-dimensional virtual scene image construction unit, configured to construct the three-dimensional virtual scene model from the real scene image before the acquisition unit acquires the three-dimensional virtual model.
  16. The apparatus according to claim 14 or 15, further comprising a lens-perspective acquisition unit, configured to obtain the at least two different lens perspectives from the at least two frames of the real image.
  17. The apparatus according to claim 14 or 15, further comprising a lens-perspective acquisition unit, configured to obtain the at least two different lens perspectives from the action information respectively corresponding to the at least two frames of the real image.
  18. The apparatus according to claim 14 or 15, further comprising a lens-perspective acquisition unit, configured to acquire background music; determine a time collection corresponding to the background music, the time collection comprising at least two time segments; and obtain the lens perspective corresponding to each time segment in the time collection.
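Claim 18's association of music time segments with lens perspectives amounts to a camera schedule. A minimal sketch (segment boundaries and perspective labels are made-up illustrative values):

```python
def angle_schedule(segments, angles):
    """Associate each time segment of the background music with a lens
    perspective, so camera cuts follow the structure of the music."""
    return [{"start": s, "end": e, "angle": a}
            for (s, e), a in zip(segments, angles)]

# two 8-second segments of music, each assigned a perspective
schedule = angle_schedule([(0.0, 8.0), (8.0, 16.0)], ["front", "side"])
```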
  19. The apparatus according to claim 12, wherein the at least two different lens perspectives comprise a first lens perspective and a second lens perspective; the camera-angle splitting unit is configured to render the three-dimensional virtual model from the first lens perspective to obtain a first virtual image, render the three-dimensional virtual model from the second lens perspective to obtain a second virtual image, and display an image sequence formed from the first virtual image and the second virtual image.
  20. The apparatus according to claim 19, wherein the camera-angle splitting unit is configured to translate or rotate the three-dimensional virtual model under the first lens perspective to obtain the three-dimensional virtual model under the second lens perspective, and acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens perspective.
  21. The apparatus according to claim 20, wherein the camera-angle splitting unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  22. The apparatus according to any one of claims 18 to 21, further comprising a beat detection unit and a stage special-effect generation unit; wherein the beat detection unit is configured to perform beat detection on the background music to obtain a beat collection of the background music, the beat collection comprising a plurality of beats, each of which corresponds to a stage special effect;
    the stage special-effect generation unit is configured to add the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
  23. An electronic device, comprising a processor, a communication interface, and a memory; wherein the memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to communicate with other devices under control of the processor; and wherein the processor, when executing the instructions, implements the method according to any one of claims 1 to 11.
  24. A computer-readable storage medium storing a computer program, wherein the computer program is executed by hardware to implement the method according to any one of claims 1 to 11.
  25. A computer program product, which is read and executed by a computer to implement the method according to any one of claims 1 to 11.
PCT/CN2020/082545 2019-12-03 2020-03-31 Method and device for producing multiple camera-angle effect, and related product WO2021109376A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227018465A KR20220093342A (en) 2019-12-03 2020-03-31 Method, device and related products for implementing split mirror effect
JP2022528715A JP7457806B2 (en) 2019-12-03 2020-03-31 Lens division realization method, device and related products

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911225211.4A CN111080759B (en) 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product
CN201911225211.4 2019-12-03

Publications (1)

Publication Number Publication Date
WO2021109376A1 true WO2021109376A1 (en) 2021-06-10

Family

ID=70312713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082545 WO2021109376A1 (en) 2019-12-03 2020-03-31 Method and device for producing multiple camera-angle effect, and related product

Country Status (5)

Country Link
JP (1) JP7457806B2 (en)
KR (1) KR20220093342A (en)
CN (1) CN111080759B (en)
TW (1) TWI752502B (en)
WO (1) WO2021109376A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113630646A (en) * 2021-07-29 2021-11-09 北京沃东天骏信息技术有限公司 Data processing method and device, equipment and storage medium
CN114900743A (en) * 2022-04-28 2022-08-12 中德(珠海)人工智能研究院有限公司 Scene rendering transition method and system based on video plug flow
CN115883814A (en) * 2023-02-23 2023-03-31 阿里巴巴(中国)有限公司 Method, device and equipment for playing real-time video stream

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI762375B (en) * 2021-07-09 2022-04-21 國立臺灣大學 Semantic segmentation failure detection system
CN114157879A (en) * 2021-11-25 2022-03-08 广州林电智能科技有限公司 Full scene virtual live broadcast processing equipment
CN114630173A (en) * 2022-03-03 2022-06-14 北京字跳网络技术有限公司 Virtual object driving method and device, electronic equipment and readable storage medium
CN114745598B (en) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN117014651A (en) * 2022-04-29 2023-11-07 北京字跳网络技术有限公司 Video generation method and device
CN115442542B (en) * 2022-11-09 2023-04-07 北京天图万境科技有限公司 Method and device for splitting mirror

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157359A (en) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual scene experiencing system
CN106295955A (en) * 2016-07-27 2017-01-04 邓耀华 A kind of client based on augmented reality is to the footwear custom-built system of factory and implementation method
US10068376B2 (en) * 2016-01-11 2018-09-04 Microsoft Technology Licensing, Llc Updating mixed reality thumbnails
CN108604121A (en) * 2016-05-10 2018-09-28 谷歌有限责任公司 Both hands object manipulation in virtual reality
CN108830894A (en) * 2018-06-19 2018-11-16 亮风台(上海)信息科技有限公司 Remote guide method, apparatus, terminal and storage medium based on augmented reality

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201333882A (en) * 2012-02-14 2013-08-16 Univ Nat Taiwan Augmented reality apparatus and method thereof
US20150049078A1 (en) * 2013-08-15 2015-02-19 Mep Tech, Inc. Multiple perspective interactive image projection
CN106385576B (en) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment
CN107103645B (en) * 2017-04-27 2018-07-20 腾讯科技(深圳)有限公司 virtual reality media file generation method and device
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
US10278001B2 (en) * 2017-05-12 2019-04-30 Microsoft Technology Licensing, Llc Multiple listener cloud render with enhanced instant replay
JP6469279B1 (en) 2018-04-12 2019-02-13 株式会社バーチャルキャスト Content distribution server, content distribution system, content distribution method and program
CN108538095A (en) * 2018-04-25 2018-09-14 惠州卫生职业技术学院 Medical teaching system and method based on virtual reality technology
JP6595043B1 (en) 2018-05-29 2019-10-23 株式会社コロプラ GAME PROGRAM, METHOD, AND INFORMATION PROCESSING DEVICE
CN108961376A (en) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 The method and system of real-time rendering three-dimensional scenic in virtual idol live streaming
CN108833740B (en) * 2018-06-21 2021-03-30 珠海金山网络游戏科技有限公司 Real-time prompter method and device based on three-dimensional animation live broadcast
CN108877838B (en) * 2018-07-17 2021-04-02 黑盒子科技(北京)有限公司 Music special effect matching method and device
JP6538942B1 (en) * 2018-07-26 2019-07-03 株式会社Cygames INFORMATION PROCESSING PROGRAM, SERVER, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING APPARATUS
CN110139115B (en) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Method and device for controlling virtual image posture based on key points and electronic equipment
CN110335334A (en) * 2019-07-04 2019-10-15 北京字节跳动网络技术有限公司 Avatars drive display methods, device, electronic equipment and storage medium
CN110427110B (en) * 2019-08-01 2023-04-18 广州方硅信息技术有限公司 Live broadcast method and device and live broadcast server


Also Published As

Publication number Publication date
CN111080759B (en) 2022-12-27
JP2023501832A (en) 2023-01-19
JP7457806B2 (en) 2024-03-28
KR20220093342A (en) 2022-07-05
TWI752502B (en) 2022-01-11
TW202123178A (en) 2021-06-16
CN111080759A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
WO2021109376A1 (en) Method and device for producing multiple camera-angle effect, and related product
CN111970535B (en) Virtual live broadcast method, device, system and storage medium
KR102503413B1 (en) Animation interaction method, device, equipment and storage medium
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
US9654734B1 (en) Virtual conference room
CN113240782B (en) Streaming media generation method and device based on virtual roles
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
US20160110922A1 (en) Method and system for enhancing communication by using augmented reality
KR102491140B1 (en) Method and apparatus for generating virtual avatar
JP6683864B1 (en) Content control system, content control method, and content control program
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN113840049A (en) Image processing method, video flow scene switching method, device, equipment and medium
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN114363689B (en) Live broadcast control method and device, storage medium and electronic equipment
US20240163528A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
US10955911B2 (en) Gazed virtual object identification module, a system for implementing gaze translucency, and a related method
KR102200239B1 (en) Real-time computer graphics video broadcasting service system
CN108320331A (en) A kind of method and apparatus for the augmented reality video information generating user's scene
JP2001051579A (en) Method and device for displaying video and recording medium recording video display program
JP2021009351A (en) Content control system, content control method, and content control program
JP2021006886A (en) Content control system, content control method, and content control program
WO2023029289A1 (en) Model evaluation method and apparatus, storage medium, and electronic device
KR102622709B1 (en) Method and Apparatus for generating 360 degree image including 3-dimensional virtual object based on 2-dimensional image
WO2022160867A1 (en) Remote reproduction method, system, and apparatus, device, medium, and program product
Arita et al. Non-verbal human communication using avatars in a virtual space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897576

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022528715

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227018465

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.10.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20897576

Country of ref document: EP

Kind code of ref document: A1