WO2021109376A1 - Method and device for producing multiple camera-angle effect, and related product - Google Patents
- Publication number
- WO2021109376A1 (PCT/CN2020/082545; CN2020082545W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dimensional virtual
- real
- model
- virtual
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/205—3D [Three Dimensional] animation driven by audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Definitions
- This application relates to the field of virtual technology, and in particular to a method, a device, and related products for realizing a multiple camera-angle (split-mirror) effect.
- Virtual characters on the network are generally generated using motion capture technology: real-person images obtained through image recognition are analyzed so that the actions and expressions of the real person are mapped onto the virtual character, enabling the virtual character to reproduce the real person's movements and expressions.
- the embodiments of the present application disclose a method, device and related products for realizing the split-mirror effect.
- An embodiment of the present application provides a method for implementing the split-mirror effect, including: obtaining a three-dimensional virtual model; and rendering the three-dimensional virtual model with at least two different lens angles to obtain virtual images respectively corresponding to the at least two different lens angles.
- The above method obtains a three-dimensional virtual model and renders it with at least two different lens angles to obtain a virtual image for each angle, so that the user can see images under different lens angles, providing a rich visual experience.
- the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model.
- The above method further includes: obtaining a real image, where the real image includes a real person image; performing feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person; and generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
- A 3D virtual model is generated so that the 3D virtual character model in it can reproduce the facial expressions and body movements of the real person; the audience watching the virtual image corresponding to the 3D virtual model can thus see those expressions and movements, and can interact more flexibly with the live anchor.
- Acquiring the real image includes: acquiring a video stream and obtaining at least two frames of real images from at least two frames of the video stream. Performing feature extraction on the real person images to obtain feature information includes: performing feature extraction on each frame of real person image separately to obtain the corresponding feature information.
- the three-dimensional virtual model can be changed in real time according to the multiple frames of real images collected, so that the user can see the dynamic change process of the three-dimensional virtual model under different lens perspectives.
- the real image further includes a real scene image
- The three-dimensional virtual model also includes a three-dimensional virtual scene model; before obtaining the three-dimensional virtual model, the above method further includes: constructing the three-dimensional virtual scene model based on the real scene image.
- The above method can also use real scene images to construct the three-dimensional virtual scene in the three-dimensional virtual model, which gives users more choice of three-dimensional virtual scenes than selecting only from a fixed set of preset scenes.
- acquiring at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
- each frame of real image corresponds to a lens angle
- Multiple frames of real images correspond to multiple lens angles. Therefore, at least two different lens angles can be obtained from at least two frames of real images and used to render the 3D virtual model from those angles, providing users with a rich visual experience.
- acquiring at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
- Determining the lens angle of view based on the action information of the real person in the real image makes it possible to magnify the corresponding action of the three-dimensional virtual character model in the image, so that the user can learn the real person's action by watching the virtual image, improving interactivity and entertainment.
- Acquiring at least two different camera angles includes: acquiring background music; determining a time set corresponding to the background music, where the time set includes at least two time periods; and acquiring the lens angle of view corresponding to each time period in the time set.
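The time-period-to-lens-angle scheme above can be sketched as a lookup keyed by playback time. This is an illustrative assumption: the segment boundaries, the angle names (V1, V2, V3), and the function name are not values from the application.

```python
from bisect import bisect_right

# Hypothetical segmentation of the background music: each (start_time, angle)
# pair marks when a new lens angle takes effect, in seconds.
SEGMENTS = [(0.0, "V1"), (8.0, "V2"), (16.0, "V3")]

def lens_angle_at(t: float) -> str:
    """Return the lens angle in effect at playback time t (seconds)."""
    starts = [start for start, _ in SEGMENTS]
    idx = bisect_right(starts, t) - 1   # last segment starting at or before t
    if idx < 0:
        idx = 0                          # clamp times before the first segment
    return SEGMENTS[idx][1]
```

A renderer would query this once per output frame to decide which virtual camera to use.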
- The at least two different lens angles include a first lens angle of view and a second lens angle of view. Rendering the three-dimensional virtual model with the at least two different lens angles to obtain the corresponding virtual images includes: rendering the three-dimensional virtual model with the first lens perspective to obtain a first virtual image; rendering the three-dimensional virtual model with the second lens perspective to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.
- Rendering the three-dimensional virtual model from the first lens perspective and the second lens perspective respectively allows the user to view the model under both perspectives, thereby providing users with a rich visual experience.
- Rendering the three-dimensional virtual model in the second lens perspective to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens perspective to obtain the three-dimensional virtual model under the second lens perspective, and then acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens perspective; that is, rendering the model under the second lens angle of view yields the second virtual image.
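The translate-or-rotate step can equivalently be viewed as moving the camera around the model. A minimal numpy sketch of one such transform, orbiting the camera about the vertical axis through the model while keeping it aimed at the model (the function name and the choice of axis are illustrative assumptions, not the application's specific transform):

```python
import numpy as np

def rotate_camera_y(cam_pos, target, angle_deg):
    """Rotate the camera position about the vertical (y) axis through
    `target`, yielding a second lens perspective from the first."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    target = np.asarray(target, dtype=float)
    a = np.radians(angle_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    # Rotate the camera's offset from the target, then move back.
    return rot @ (cam_pos - target) + target
```

For example, a camera 5 units in front of the model, rotated 90 degrees, ends up 5 units to its side at the same height.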
- Displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
- The a frames of virtual images are inserted between the first virtual image and the second virtual image so that the viewer sees the entire transition from the first virtual image to the second virtual image, instead of only the two discrete images (the first virtual image and the second virtual image), allowing the audience to adapt to the visual difference caused by switching from the first virtual image to the second.
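The insertion of a intermediate frames can be sketched as interpolating a intermediate camera poses between the two lens perspectives and rendering one frame per pose. This is a simplified sketch under stated assumptions (poses reduced to 3-vectors, linear interpolation); a production system would typically slerp rotations rather than lerp positions.

```python
import numpy as np

def interpolate_poses(pose_a, pose_b, a):
    """Return a + 2 camera poses: pose_a, `a` intermediates, and pose_b.
    Each pose is a 3-vector here for simplicity."""
    pose_a = np.asarray(pose_a, dtype=float)
    pose_b = np.asarray(pose_b, dtype=float)
    ts = np.linspace(0.0, 1.0, a + 2)        # includes both endpoints
    return [(1 - t) * pose_a + t * pose_b for t in ts]
```

Rendering the model once per interpolated pose produces the smooth a-frame transition described above.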
- The method further includes: performing beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat set to the 3D virtual model.
- stage effects are added to the virtual scene where the virtual character model is located, thereby presenting different stage effects to the audience and enhancing the audience's viewing experience.
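The beat-to-effect pipeline can be sketched as follows. The peak-picking detector and the effect names are illustrative assumptions only; real systems use proper onset detection and tempo estimation (e.g. librosa's beat tracker) rather than this naive envelope thresholding.

```python
def detect_beats(energy, threshold):
    """Naive beat detection sketch: mark a beat at each local maximum of an
    audio energy envelope that exceeds `threshold`."""
    beats = []
    for i in range(1, len(energy) - 1):
        if energy[i] > threshold and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]:
            beats.append(i)
    return beats

# Hypothetical cyclic mapping from beat index to a stage special effect.
STAGE_EFFECTS = ["spotlight", "confetti", "strobe"]

def effect_for_beat(beat_no):
    """Pick the stage special effect for the beat_no-th detected beat."""
    return STAGE_EFFECTS[beat_no % len(STAGE_EFFECTS)]
```

Each detected beat would then trigger its mapped effect in the 3D virtual scene at the corresponding playback time.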
- An embodiment of the present application also provides a device for realizing the split-mirror effect, including: an acquiring unit configured to acquire a three-dimensional virtual model; and a split-mirror unit configured to render the three-dimensional virtual model from at least two different lens angles to obtain at least two virtual images respectively corresponding to the different lens angles.
- the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model
- The device further includes a feature extraction unit and a three-dimensional virtual model generation unit. The acquisition unit is also configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes an image of a real person; the feature extraction unit is configured to perform feature extraction on the real person image to obtain feature information, where the feature information includes the action information of the real person; and the three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
- The obtaining unit is configured to obtain a video stream and obtain at least two frames of real images from at least two frames of the video stream; the feature extraction unit is configured to perform feature extraction on each frame of real person image to obtain the corresponding feature information.
- the real image further includes a real scene image
- the three-dimensional virtual model also includes a three-dimensional virtual scene model
- The device further includes a three-dimensional virtual scene construction unit configured to construct a three-dimensional virtual scene model according to the real scene image before the acquisition unit acquires the three-dimensional virtual model.
- the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to at least two frames of real images.
- the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to the action information corresponding to the at least two frames of real images, respectively.
- The device further includes a lens angle acquisition unit configured to: acquire background music; determine a time set corresponding to the background music, where the time set includes at least two time periods; and acquire the lens angle of view corresponding to each time period in the time set.
- At least two different lens angles include a first lens angle of view and a second lens angle of view
- The split-mirror unit is configured to: render the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image; render the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image; and display the image sequence formed from the first virtual image and the second virtual image.
- The split-mirror unit is configured to translate or rotate the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view, and to obtain the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
- The split-mirror unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
- The device further includes a beat detection unit and a stage special effect generation unit. The beat detection unit is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; the stage special effect generation unit is configured to add the target stage special effects corresponding to the beat set to the three-dimensional virtual model.
- An embodiment of the present application provides an electronic device including a processor, a communication interface, and a memory. The memory is used to store instructions, the processor is used to execute the instructions, and the communication interface is used to communicate with other devices under the control of the processor. When the processor executes the instructions, the electronic device implements any one of the methods in the first aspect described above.
- an embodiment of the present application provides a computer-readable storage medium that stores a computer program, and the computer program is executed by hardware to implement any one of the methods in the first aspect.
- The embodiments of the present application provide a computer program product; when the computer program product is read and executed by a computer, any one of the methods in the above-mentioned first aspect is executed.
- Fig. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a possible three-dimensional virtual model provided by an embodiment of the present application.
- FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of an interpolation curve provided by an embodiment of the present application.
- FIG. 5 is a schematic flowchart of a specific embodiment provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a splitting rule provided by an embodiment of the present application.
- FIG. 7A is an effect diagram of a possible virtual image provided by an embodiment of the present application.
- FIG. 7B is an effect diagram of a possible virtual image provided by an embodiment of the present application.
- FIG. 7C is an effect diagram of a possible virtual image provided by an embodiment of the present application.
- FIG. 7D is an effect diagram of a possible virtual image provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a device for implementing a split-mirror effect provided by an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- The method, device, and related products for realizing the split-mirror effect provided by the embodiments of the present application can be applied in many fields such as social interaction, entertainment, and education. For example, they can be used for virtual live broadcasts, social interaction in virtual communities, virtual concerts, classroom teaching, and so on.
- the following takes virtual live broadcast as an example to describe the specific application scenarios of the embodiments of the present application in detail.
- Virtual live broadcast is a way to use virtual characters instead of live anchors to conduct live broadcasts on a live broadcast platform. Because virtual characters have rich expressive power and are more in line with the communication environment of social networks, the virtual live broadcast industry is developing rapidly.
- Computer technologies such as facial expression capture, motion capture, and sound processing are usually used to apply the facial expressions and actions of the live anchor to the virtual character model, so as to realize interaction between the audience and the virtual anchor on a video website or social networking website.
- FIG. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
- The server 120 transmits the generated virtual image to the user terminals 130 through the network, so that different viewers can watch the entire live broadcast process through their corresponding user terminals 130.
- The posture of the generated virtual anchor is related to the relative position between the camera device 110 and the live anchor. That is to say, the audience can only see the virtual character from one specific angle of view, which depends on that relative position, so the presented live broadcast effect is unsatisfactory: the virtual anchor's movements appear stiff, shot switching is not smooth, or the shots are monotonous and boring, causing visual fatigue and preventing the audience from having an immersive experience.
- In online teaching, the teacher conveys knowledge to students through video, but this teaching method is usually boring: the teacher in the video cannot know the students' state in real time, and the students can only see the teacher or the teaching handouts from a single perspective, which easily causes fatigue and greatly reduces the teaching effect.
- A singer can hold a virtual concert in a recording studio to simulate the scene of a real concert, but achieving a realistic concert effect usually requires setting up multiple cameras to shoot the singer, which makes this kind of virtual concert complicated to operate and costly. Moreover, shooting with multiple cameras yields pictures under multiple lenses, and the resulting lens switching may not be smooth, so users cannot adapt to the visual difference caused by switching between different lenses.
- An embodiment of the present application provides a method for realizing the split-mirror effect.
- The method generates a three-dimensional virtual model based on the collected real images, obtains multiple different lens perspectives according to the background music or the actions of the real person, and then renders the three-dimensional virtual model with those lens perspectives to obtain the corresponding virtual images, thereby simulating multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewer's viewing experience.
- the method also analyzes the beats of the background music and adds corresponding stage effects to the three-dimensional virtual model according to the beat information to present different stage effects to the audience, which further enhances the audience's viewing experience.
- the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene.
- Figure 2 shows a schematic diagram of a possible three-dimensional virtual model.
- the hands of the three-dimensional virtual character model are raised to the chest.
- The upper left corner of Figure 2 also shows the real image collected by the split-mirror effect realization device, in which the real person is likewise raising both hands to the chest; the action of the three-dimensional virtual character model is consistent with that of the real person. It can be understood that Figure 2 is only an example.
- the real image collected by the device for implementing the split-mirror effect can be a three-dimensional image or a two-dimensional image.
- The number of characters in the real image can be one or more.
- the action of the real character can be raising both hands to the chest, raising the left foot or other actions, etc.
- The number of 3D virtual character models in the 3D virtual model generated from the real character image can be one or more.
- the action of the three-dimensional virtual character model can be raising both hands to the chest, raising the left foot or other actions, etc., which are not specifically limited here.
- In this embodiment of the application, the split-mirror effect realization device shoots a real person to obtain multiple frames of real images I1, I2, ..., In, and performs feature extraction on the real images I1, I2, ..., In in chronological order to obtain the corresponding three-dimensional virtual models M1, M2, ..., Mn, where n is a positive integer.
- The real images I1, I2, ..., In correspond one-to-one with the three-dimensional virtual models M1, M2, ..., Mn; that is, one frame of real image is used to generate one three-dimensional virtual model.
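The one-frame-to-one-model correspondence above can be sketched as a simple per-frame pipeline. The feature-extraction and model-building functions here are hypothetical placeholders standing in for the motion-capture and rendering machinery described in the application.

```python
def extract_features(frame):
    """Placeholder for feature extraction on one real-image frame."""
    return {"pose": frame.get("pose")}

def build_virtual_model(features):
    """Placeholder for generating one 3D virtual model from features."""
    return {"model_pose": features["pose"]}

def frames_to_models(frames):
    """Map frames I1..In one-to-one onto virtual models M1..Mn, in order."""
    return [build_virtual_model(extract_features(f)) for f in frames]
```

Processing frames in chronological order preserves the pairing of Ii with Mi.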
- a three-dimensional virtual model can be obtained as follows:
- Step 1: the device for achieving the split-mirror effect obtains the real image Ii.
- The real image Ii includes a real person image, where i is a positive integer and 1 ≤ i ≤ n.
- Step 2: the device for implementing the split-mirror effect performs feature extraction on the real person image in the real image Ii to obtain feature information.
- the feature information includes action information of real characters.
- Obtaining a real image includes: obtaining a video stream and obtaining at least two frames of real images from at least two frames of the video stream. Correspondingly, performing feature extraction on the real person image to obtain feature information includes: performing feature extraction on each frame of real person image separately to obtain the corresponding feature information.
- the feature information is used to control the posture of the three-dimensional virtual character model.
- The action information in the feature information includes facial expression features and body action features. Facial expression features describe the character's emotional states, such as happiness, sadness, surprise, fear, anger, or disgust; body action features describe the real person's movement state, for example, raising the left hand, raising the right foot, or jumping.
- the feature information can also include character information, where the character information includes multiple key points of the human body of the real person and their corresponding position information.
- The key points of the human body include facial key points and human skeleton key points, and the position information includes the position coordinates of the real person's human body key points.
- The split-mirror effect realization device obtains the real person image by performing image segmentation on the real image Ii, and then performs key point detection on the extracted real person image to obtain the aforementioned multiple human body key points and their position information, where the human body key points include facial key points and human skeleton key points.
- The key points of the human body may be located in the head region, neck region, shoulder region, spine region, waist region, and other regions of the human body.
- The device for realizing the split-mirror effect inputs the real image Ii into a neural network for feature extraction, and after computation through multiple convolutional layers, the multiple human body key point information is extracted.
- The neural network is obtained through training on a large amount of data.
- The neural network can be a Convolutional Neural Network (CNN), a Back Propagation Neural Network (BPNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), etc., which is not specifically limited here.
- the device for implementing the split-mirror effect can use CNN to extract key points of a human face to obtain facial expression features; it can also use BPNN to extract key points of human bones to obtain human bone features and limb movement features, which are not specifically limited here.
- the above example of the feature information used to drive the three-dimensional virtual character model is only used as an example, and other feature information may also be included in practical applications, which is not specifically limited here.
- Step 3: the split-mirror effect realization device generates the three-dimensional virtual character model in the three-dimensional virtual model Mi according to the feature information, so that the three-dimensional virtual character model in Mi corresponds to the action information of the real person in the real image Ii.
- The split-mirror effect realization device establishes a mapping relationship between the key points of the real person's human body and the key points of the virtual character model's human body through the above-mentioned feature information, and then controls the expression and posture of the virtual character model according to the mapping relationship, so that the facial expressions and body movements of the virtual character model are consistent with those of the real person.
- The split-mirror effect realization device labels the key points of the real person's human body with serial numbers to obtain annotation information for those key points, where the key points of the human body correspond one-to-one with the annotation information.
- The annotation information of the key points is used to mark the corresponding key points of the virtual character model's human body. For example, if the annotation information of the real person's left wrist is No. 1, the annotation information of the three-dimensional virtual character model's left wrist is also No. 1; if the annotation information of the real person's left arm is No. 2, the annotation information of the three-dimensional virtual character model's left arm is also No. 2.
- The key point annotation information of the real person's human body is matched with the key point annotation information of the three-dimensional virtual character model's human body, and the position information of the real person's human body key points is mapped onto the corresponding key points of the three-dimensional virtual character model, so that the three-dimensional virtual character model can reproduce the facial expressions and body movements of the real person.
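The label-matching scheme above can be sketched as a dictionary join on the serial numbers. The joint names and the data layout are illustrative assumptions; the application does not specify a concrete data structure.

```python
def map_keypoints(real_kps, virtual_skeleton_labels):
    """Map each labeled real-person keypoint position onto the virtual
    character joint carrying the same label number.

    real_kps: {label_no: (x, y, z)} positions detected on the real person.
    virtual_skeleton_labels: {label_no: joint_name} on the virtual model.
    Returns {joint_name: (x, y, z)} driving the virtual character's pose.
    """
    return {virtual_skeleton_labels[lbl]: pos
            for lbl, pos in real_kps.items()
            if lbl in virtual_skeleton_labels}
```

Keypoints without a matching label on the virtual skeleton are simply skipped, so a partial detection still drives the joints it did find.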
- The real image Ii also includes a real scene image.
- The three-dimensional virtual model Mi also includes a three-dimensional virtual scene model.
- The above-mentioned method for generating a three-dimensional virtual model Mi based on the real image Ii further includes: constructing the three-dimensional virtual scene model of Mi according to the real scene image in Ii.
- The device for realizing the split-mirror effect first performs image segmentation on the real image Ii to obtain the real scene image in Ii; then extracts the scene features in the real scene image, for example, the position, shape, and size features of the objects in the real scene; and finally constructs the three-dimensional virtual scene model in the three-dimensional virtual model Mi according to the scene features, so that the three-dimensional virtual scene model can highly restore the real scene image in Ii.
- The above only illustrates the process of generating a three-dimensional virtual model Mi from a real image Ii; the generation processes of the three-dimensional virtual models M1, M2, ..., Mi-1, Mi+1, ..., Mn are similar and will not be described further here.
- The 3D virtual scene model in the 3D virtual model can be constructed based on the real scene image in the real image, or it can be a user-defined 3D virtual scene model; similarly, the facial features of the 3D virtual character model can come from the real person image in the real image or be user-defined, which is not specifically limited here.
- FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
- The method for realizing the split-mirror effect of this embodiment includes but is not limited to the following steps:
- the device for achieving split-mirror effect obtains a three-dimensional virtual model.
- the three-dimensional virtual model is used to simulate real characters and real scenes.
- the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the three-dimensional virtual model is generated based on a real image.
- the three-dimensional virtual character model is generated based on the real character image included in the real image
- the three-dimensional virtual character model in the three-dimensional virtual model is used to simulate the real character in the real image
- The actions of the three-dimensional virtual character model correspond to the actions of the real character.
- the three-dimensional virtual scene model may be constructed based on the real scene image included in the real image, or may be a preset three-dimensional virtual scene model. When the three-dimensional virtual scene model is constructed from the real scene image, the three-dimensional virtual scene model can be used to simulate the real scene in the real image.
- the device for achieving a split-mirror effect obtains at least two different lens angles of view.
- the angle of view of the lens is used to indicate the position of the camera relative to the object when the camera is shooting the object.
- for example, when the camera shoots the object from directly above, it obtains a top view of the object; the corresponding lens angle of view is V, and the image captured by the camera shows the object under the lens angle of view V, that is, the top view of the object.
- obtaining at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
- the real image can be taken by a real camera
- the real camera may be at multiple positions relative to the real person, and the multiple real images taken by multiple real cameras at different positions show the real person under multiple different lens angles of view.
- obtaining at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
- the motion information includes the body motions and facial expressions of real characters in real images.
- the body movements include many kinds.
- the body movements can be one or more of raising the right hand, raising the left foot, jumping, etc.
- the facial expressions also include many kinds.
- the facial expressions can be, for example, one or more of smiling, crying, anger, etc. Examples of body movements and facial expressions in this embodiment are not limited to the above description.
- one action or a combination of multiple actions corresponds to one lens angle of view.
- for example, a single action may correspond to the lens angle of view V 1 ; similarly, a combination of actions may correspond to the lens angle of view V 1 , the lens angle of view V 2 , or the lens angle of view V 3 , and so on.
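The correspondence described above (one action or one combination of actions mapping to one lens angle of view) amounts to a lookup table. The sketch below is illustrative only; the action names and angle labels are assumptions, not part of the patent.

```python
# Map a set of detected actions to a lens angle of view.  frozenset keys make
# the lookup order-independent, since a combination of actions is unordered.
ACTION_TO_VIEW = {
    frozenset({"raise_right_hand"}): "V1",
    frozenset({"raise_left_foot"}): "V3",
    frozenset({"raise_right_hand", "jump"}): "V2",
}

def lens_angle_for_actions(actions):
    """Return the lens angle for a detected action combination, or None."""
    return ACTION_TO_VIEW.get(frozenset(actions))

print(lens_angle_for_actions(["raise_right_hand"]))          # V1
print(lens_angle_for_actions(["jump", "raise_right_hand"]))  # V2
print(lens_angle_for_actions(["stand"]))                     # None
```

Returning `None` for an unknown action models the "action not in the action information database" branch discussed later in the document.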
- obtaining at least two different lens angles of view includes: obtaining background music; determining a time collection corresponding to the background music, where the time collection includes at least two time periods; and obtaining the lens angle of view corresponding to each time period in the time collection.
- the real image may be one or more frames in a video stream.
- the video stream includes image information and background music information, where one frame of image corresponds to one frame of music.
- the background music information includes background music and a corresponding time collection.
- the time collection includes at least two time periods, and each time period corresponds to a lens angle.
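The "time collection" described above is a set of time periods, each carrying one lens angle of view. A minimal sketch of the lookup follows; the period boundaries and angle labels are illustrative assumptions.

```python
# Each entry: (start_seconds, end_seconds, lens_angle).  The background music
# is divided into at least two periods, each mapped to a lens angle of view.
TIME_COLLECTION = [
    (0.0, 60.0, "V1"),
    (60.0, 120.0, "V2"),
    (120.0, 180.0, "V3"),
]

def lens_angle_at(t_seconds):
    """Return the lens angle for the period containing t_seconds, or None."""
    for start, end, view in TIME_COLLECTION:
        if start <= t_seconds < end:
            return view
    return None

print(lens_angle_at(75.0))   # V2
```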
- the device for implementing the split-mirror effect renders the three-dimensional virtual model with at least two different lens angles to obtain virtual images corresponding to at least two different lens angles respectively.
- the aforementioned at least two different lens angles include a first lens angle of view and a second lens angle of view
- rendering the three-dimensional virtual model with at least two different lens angles of view to obtain the virtual images respectively corresponding to the at least two different lens angles of view includes: S1031, rendering the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image; S1032, rendering the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image.
- rendering the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
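The rotation step above can be sketched for a single vertex: a real renderer would apply the same transform to every vertex of the model (or, equivalently, move the virtual camera). This is a minimal sketch assuming rotation about the vertical axis; it is not the patent's implementation.

```python
import math

def rotate_yaw(vertex, degrees):
    """Rotate an (x, y, z) vertex about the y (vertical) axis."""
    x, y, z = vertex
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

# Rotating a front-facing vertex by 90 degrees yields its left-view position.
front = (1.0, 0.0, 0.0)
left_view = rotate_yaw(front, 90.0)
print([round(c, 6) for c in left_view])  # [0.0, 0.0, -1.0]
```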
- the first lens angle of view can be obtained based on the real image, based on the action information corresponding to the real image, or based on the time collection corresponding to the background music; similarly, the second lens angle of view can be obtained based on the real image, the action information corresponding to the real image, or the time collection corresponding to the background music, which is not specifically limited in the embodiments of the present application.
- the above displaying of the image sequence formed according to the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image smoothly switches to the second virtual image, where a is a positive integer.
- specifically, a frames of virtual images P 1 , P 2 , ..., P a are inserted between the first virtual image and the second virtual image, so that the first virtual image smoothly switches to the second virtual image; the time points at which the virtual images P 1 , P 2 , ..., P a are inserted are b 1 , b 2 , ..., b a , and at the time points b 1 , b 2 , ..., b a the interpolation curve has a slope that first monotonically decreases and then monotonically increases, where a is a positive integer.
- FIG. 4 shows a schematic diagram of an interpolation curve.
- for example, the device for realizing the split-mirror effect obtains the first virtual image at the first minute and the second virtual image at the second minute, where the first virtual image presents the front view of the three-dimensional virtual model, and the second virtual image presents the left view of the three-dimensional virtual model.
- the split-lens effect realization device inserts multiple time points between the first minute and the second minute, and inserts a virtual image at each time point, for example, inserting the virtual image P 1 at 1.4 minutes, the virtual image P 2 at 1.65 minutes, the virtual image P 3 at 1.8 minutes, and the virtual image P 4 at 1.85 minutes, where the virtual image P 1 presents the effect of the three-dimensional virtual model rotated to the left by a certain angle,
- the virtual image P 2 presents the effect of rotating the three-dimensional virtual model to the left by 50 degrees,
- the virtual images P 3 and P 4 both present the effect of rotating the three-dimensional virtual model to the left by 90 degrees. This allows the audience to see the entire process of the three-dimensional virtual model gradually changing from the front view to the left view, instead of only two images (the front view and the left view of the three-dimensional virtual model), so that the audience can adapt to the visual change when the view switches.
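One reading of the curve described for FIG. 4 (slope first monotonically decreasing, then monotonically increasing) can be sketched as a blend of a linear ramp and a cubic. This is a hypothetical easing function consistent with that literal description, not the patent's actual curve; the weight `w` and frame count are assumptions.

```python
def interp(t, w=0.8):
    """Map normalized time t in [0, 1] to normalized rotation progress.

    The derivative is (1 - w) + 3 * w * (2t - 1)**2, which monotonically
    decreases on [0, 0.5] and monotonically increases on [0.5, 1], matching
    the slope behavior described for the interpolation curve.
    """
    return (1 - w) * t + w * (((2 * t - 1) ** 3 + 1) / 2)

frames = 5
progress = [round(interp(i / (frames - 1)), 3) for i in range(frames)]
print(progress)  # [0.0, 0.4, 0.5, 0.6, 1.0]
```

Sampling the curve at evenly spaced times gives rotation progress that changes quickly at first, slowly through the middle, and quickly again at the end, so intermediate frames are visually dense where the slope is small.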
- the use of the stage special effects mentioned in the embodiments of this application to render the three-dimensional virtual model so as to present different stage effects to the audience is described in detail below, which specifically includes the following steps:
- Step 1 The device for realizing the split-mirror effect detects the beats of the background music, and obtains a collection of beats of the background music.
- the beat collection includes multiple beats, and each beat of the multiple beats corresponds to a stage special effect.
- the split-mirror effect realization device can use shaders and particle special effects to respectively render the 3D virtual model.
- the shader can be used to realize the spotlight rotation effect at the back of the virtual stage and the sound wave effect of the virtual stage itself, and the particle special effects can be used to add visual effects such as sparks, falling leaves, and meteors to the three-dimensional virtual model.
- Step 2 The split-mirror effect realization device adds the target stage special effects corresponding to the beat collection to the three-dimensional virtual model.
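Steps 1 and 2 above can be sketched end to end: detect beats in a toy energy envelope of the background music, then map each beat to a stage special effect. The envelope values, the peak rule, and the effect names are illustrative assumptions; a production system would use a real onset/beat-tracking algorithm.

```python
def detect_beats(envelope, threshold=0.5):
    """Return indices of local energy peaks above the threshold (toy beat detector)."""
    beats = []
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold
                and envelope[i] >= envelope[i - 1]
                and envelope[i] > envelope[i + 1]):
            beats.append(i)
    return beats

# Hypothetical stage special effects, cycled over the detected beats.
STAGE_EFFECTS = ["spotlight_rotation", "sound_wave", "sparks", "falling_leaves"]

envelope = [0.1, 0.9, 0.2, 0.1, 0.8, 0.3, 0.1, 0.7, 0.2]
beat_set = detect_beats(envelope)
effects = {b: STAGE_EFFECTS[k % len(STAGE_EFFECTS)] for k, b in enumerate(beat_set)}
print(beat_set)    # [1, 4, 7]
print(effects[1])  # spotlight_rotation
```

Each detected beat carries one stage special effect, mirroring "each beat of the multiple beats corresponds to a stage special effect" in Step 1.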
- the above method generates a three-dimensional virtual model based on the collected real images, and switches the corresponding lens angle of view according to the collected real images, the background music, and the actions of the real characters, thereby simulating the effect of multiple virtual cameras shooting the virtual model in the virtual scene, which improves the viewer's viewing experience.
- the method also analyzes the beats of the background music and adds corresponding stage special effects to the virtual image according to the beat information to present different stage effects to the audience, which further enhances the audience's viewing experience.
- FIG. 5 shows a schematic flowchart of a specific embodiment.
- the device for achieving the split-mirror effect obtains a real image and background music, and obtains a first lens angle of view according to the real image, where, while the background music plays, the real person moves along with the background music, and the real camera shoots the real person to obtain the real image.
- the device for realizing split-mirror effect generates a three-dimensional virtual model according to the real image. Among them, the three-dimensional virtual model is obtained at the first moment by the device for realizing the split-mirror effect.
- the split-mirror effect realization device detects the beat of the background music to obtain the beat collection of the background music, and adds the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
- the device for implementing split-mirror effect renders the three-dimensional virtual model with the first lens angle of view to obtain a first virtual image corresponding to the first lens angle of view.
- the device for realizing the split-mirror effect determines the time collection corresponding to the background music.
- the time collection includes multiple time periods, and each of the multiple time periods corresponds to a lens angle.
- the split-mirror effect realization device judges whether the action information database contains the action information; if the action information database does not contain the action information, it executes S207-S209, and if the action information database contains the action information, it executes S210-S212.
- the action information is the action information of the real person in the real image
- the action information database includes a plurality of action information, and each action information in the multiple action information corresponds to a lens angle of view.
- the device for realizing the splitting effect determines the second lens angle corresponding to the time period at the first moment according to the time collection.
- the device for implementing the split-mirror effect renders the three-dimensional virtual model with the second lens angle of view to obtain a second virtual image corresponding to the second lens angle of view.
- the device for realizing split-mirror effect displays an image sequence formed according to the first virtual image and the second virtual image.
- the device for achieving a split-mirror effect determines a third lens angle of view corresponding to the action information according to the action information.
- the device for implementing the split-mirror effect renders the three-dimensional virtual model with the third lens angle of view to obtain a third virtual image corresponding to the third lens angle of view.
- the device for realizing split-mirror effect displays an image sequence formed according to the first virtual image and the third virtual image.
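The branch in S206-S212 above reduces to a simple rule: if the detected action is in the action information database, its action lens angle is used; otherwise the lens angle for the current time period is used. A minimal sketch follows; the database contents are assumptions taken from the FIG. 7 example later in the document.

```python
# Hypothetical action information database: action -> action lens angle.
ACTION_DB = {"raise_left_foot": "V3", "stand": "V4"}

def pick_lens_angle(action, time_lens_angle):
    """Prefer the action lens angle; fall back to the time lens angle."""
    return ACTION_DB.get(action, time_lens_angle)

print(pick_lens_angle("raise_hands_to_chest", "V2"))  # V2 (action not in database)
print(pick_lens_angle("raise_left_foot", "V2"))       # V3 (action lens angle wins)
```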
- an embodiment of the present application provides a schematic diagram of a splitting rule as shown in FIG. 6; performing split-mirror processing and stage special effect processing on a virtual image according to the splitting rule shown in FIG. 6 yields the effect diagrams of the four virtual images shown in FIGS. 7A-7D.
- the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 1 (as shown in the upper left corner of Fig. 7A), and then according to the real image I 1 Obtain a three-dimensional virtual model M 1 .
- the split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the first minute is B 1 , obtains the stage special effect W 1 for the first minute according to the beat B 1 , and then adds the stage special effect W 1 to the three-dimensional virtual model M 1 ; the split-mirror effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the first minute (referred to as the time lens angle of view) is V 1 ; the split-mirror effect realization device detects that the action of the real person in the first minute is raising both hands to the chest, and this action is not in the action information database, that is, there is no lens angle of view corresponding to the action (referred to as the action lens angle of view), so the device displays the virtual image shown in FIG. 7A; the virtual image shown in FIG. 7A and the real image I 1 have the same lens angle of view.
- the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 2 (as shown in the upper left corner of Fig. 7B), and then according to the real image I 2 Obtain a three-dimensional virtual model M 2 .
- the split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the second minute is B 2 , obtains the stage special effect W 2 for the second minute according to the beat B 2 , and then adds the stage special effect W 2 to the three-dimensional virtual model M 2 ;
- the split-mirror effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the second minute (referred to as the time lens angle of view) is V 2 ; the split-mirror effect realization device detects that the real person's action in the second minute is raising both hands, and this action is not in the action information database, that is, there is no lens angle of view corresponding to the action (referred to as the action lens angle of view), so the split-mirror effect realization device rotates the three-dimensional virtual model M 2 to the upper left to obtain the virtual image corresponding to the lens angle of view V 2 . With the stage special effect W 2 added to the three-dimensional virtual model M 2 , the virtual image shown in FIG. 7B is obtained.
- the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 3 (as shown in the upper left corner of Fig. 7C), and then according to the real image I 3 Obtain a three-dimensional virtual model M 3 .
- the split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the third minute is B 3 , obtains the stage special effect W 3 for the third minute according to the beat B 3 , and then adds the stage special effect W 3 to the three-dimensional virtual model M 3 ;
- the split-mirror effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the third minute (referred to as the time lens angle of view) is V 2 ; the split-mirror effect realization device detects that the real person's action in the third minute is raising the left foot, and the lens angle of view corresponding to raising the left foot (referred to as the action lens angle of view) is V 3 , so the split-mirror effect realization device rotates the three-dimensional virtual model M 3 to the left to obtain the virtual image corresponding to the lens angle of view V 3 .
- the split-lens effect realization device shoots a real person under the lens angle of view V 1 to obtain a real image I 4 (as shown in the upper left corner of Fig. 7D), and then according to the real image I 4 Obtain a three-dimensional virtual model M 4 .
- the split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the fourth minute is B 4 , obtains the stage special effect W 4 for the fourth minute according to the beat B 4 , and then adds the stage special effect W 4 to the three-dimensional virtual model M 4 ;
- the split-mirror effect realization device determines, according to the preset lens script, that the lens angle of view corresponding to the fourth minute (referred to as the time lens angle of view) is V 4 ; the split-mirror effect realization device detects that the real person's action in the fourth minute is standing, and the lens angle of view corresponding to standing (referred to as the action lens angle of view) is V 4 ; at this time, the split-mirror effect realization device rotates the three-dimensional virtual model M 4 to the right to obtain the virtual image corresponding to the lens angle of view V 4 .
- the stage special effect W 4 is added to the three-dimensional virtual model M 4 , so that the virtual image shown in FIG. 7D and the virtual image shown in FIG. 7C present different stage special effects.
- the splitting effect realization device provided in the embodiments of the present application may be a software device or a hardware device.
- when the split-mirror effect realization device is a software device, it can be deployed separately on a computing device in a cloud environment, or deployed separately on a terminal device.
- when the split-mirror effect realization device is a hardware device, its internal unit modules can also be divided in multiple ways; each module can be a software module, a hardware module, or partly a software module and partly a hardware module, and this application does not limit this.
- FIG. 8 shows an exemplary division. As shown in FIG. 8, a device 800 for realizing a split-mirror effect provided by an embodiment of the present application includes: an obtaining unit 810 configured to obtain a three-dimensional virtual model;
- the split-mirror unit 820 is configured to render the three-dimensional virtual model with at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view.
- the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model
- the above-mentioned apparatus further includes: a feature extraction unit 830 and a three-dimensional virtual model generation unit 840; wherein,
- the acquiring unit 810 is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real person image;
- the feature extraction unit 830 is configured to perform feature extraction on the real person image to obtain feature information, where the feature information includes the action information of the real person;
- the three-dimensional virtual model generating unit 840 is configured to generate a three-dimensional virtual model according to the characteristic information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
- the acquiring unit 810 is configured to acquire a video stream, and obtain at least two frames of real images according to at least two frames of images in the video stream;
- the feature extraction unit 830 is configured to perform feature extraction on each frame of the real person image separately to obtain corresponding feature information.
- the real image further includes a real scene image
- the three-dimensional virtual model also includes a three-dimensional virtual scene model
- the above-mentioned apparatus further includes: a three-dimensional virtual scene image construction unit 850 configured to construct a three-dimensional virtual scene image based on the real scene image before the acquiring unit acquires the three-dimensional virtual model.
- the above-mentioned device further includes a lens angle acquisition unit 860 configured to obtain at least two different lens angles.
- the lens angle of view acquisition unit 860 is configured to obtain at least two different lens angles according to at least two frames of real images.
- the lens angle of view acquisition unit 860 is configured to obtain at least two different lens angles of view according to the action information corresponding to the at least two frames of real images, respectively.
- the lens angle acquisition unit 860 is configured to acquire background music; determine the time collection corresponding to the background music, where the time collection includes at least two time periods; and obtain the lens angle of view corresponding to each time period in the time collection.
- At least two different lens angles include a first lens angle of view and a second lens angle of view
- the split-mirror unit 820 is configured to render the three-dimensional virtual model with the first lens angle of view to obtain the first virtual image; render the three-dimensional virtual model with the second lens angle of view to obtain the second virtual image; and display the image sequence formed according to the first virtual image and the second virtual image.
- the split-mirror unit 820 is configured to translate or rotate the three-dimensional virtual model under the first lens angle of view to obtain the three-dimensional virtual model under the second lens angle of view, and obtain the second virtual image corresponding to the three-dimensional virtual model under the second lens angle of view.
- the split-mirror unit 820 is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image smoothly switches to the second virtual image, where a is a positive integer.
- the above-mentioned device further includes: a beat detection unit 870 configured to perform beat detection on the background music to obtain a beat collection of the background music, wherein the beat collection includes multiple beats, Each beat corresponds to a stage special effect; the stage special effect generation unit 880 is configured to add the target stage special effect corresponding to the beat collection to the three-dimensional virtual model.
- the above-mentioned split-mirror effect realization device generates a three-dimensional virtual model according to the collected real image, obtains multiple lens angles of view according to the collected real image, the background music, and the actions of the real characters, and uses the multiple lens angles of view to render the three-dimensional virtual model and switch between the corresponding lens angles of view, simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene, so that the user can see the three-dimensional virtual model under different lens angles of view, which improves the viewer's viewing experience.
- the device also analyzes the beats of the background music and adds corresponding stage effects to the three-dimensional virtual model according to the beat information to present different stage effects to the audience, which further enhances the audience's live viewing experience.
- an embodiment of the present application provides a schematic structural diagram of an electronic device 900, and the foregoing device for implementing the split-mirror effect is applied to the electronic device 900.
- the electronic device 900 includes a processor 910, a communication interface 920, and a memory 930, where the processor 910, the communication interface 920, and the memory 930 can be coupled through a bus 940. among them,
- the processor 910 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices (Programmable Logic Device, PLD), transistor logic devices, hardware components, or any combination thereof.
- the processor 910 may implement or execute various exemplary methods described in conjunction with the disclosure of the present application. Specifically, the processor 910 reads the program code stored in the memory 930, and cooperates with the communication interface 920 to execute part or all of the steps of the method executed by the device for implementing the split-mirror effect in the foregoing embodiment of the present application.
- the communication interface 920 can be a wired interface or a wireless interface for communicating with other modules or devices.
- the wired interface can be an Ethernet interface, a controller area network interface, a local interconnect network (Local Interconnect Network, LIN) interface, or a FlexRay interface; the wireless interface can be a cellular network interface or a wireless local area network interface.
- the aforementioned communication interface 920 may be connected to an input/output device 950, and the input/output device 950 may include other terminal devices such as a mouse, a keyboard, and a microphone.
- the memory 930 may include a volatile memory, such as a random access memory (Random Access Memory, RAM); the memory 930 may also include a non-volatile memory (Non-Volatile Memory), such as a read-only memory (Read-Only Memory, ROM), a flash memory, a hard disk drive (Hard Disk Drive, HDD), or a solid-state drive (Solid-State Drive, SSD); the memory 930 may also include a combination of the foregoing types of memory.
- the memory 930 may store program codes and program data.
- the program code is composed of the codes of some or all of the units in the above-mentioned split-mirror effect realization device 800, for example, the code of the acquisition unit 810, the code of the split-mirror unit 820, the code of the feature extraction unit 830, the code of the three-dimensional virtual model generation unit 840, and so on.
- the program data is data generated during the operation of the split-mirror effect realization device 800, such as real image data, three-dimensional virtual model data, lens angle data, background music data, virtual image data, and so on.
- the bus 940 may be a Controller Area Network (CAN) bus or another internal bus that implements interconnection between the various systems or components within the device.
- the bus 940 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
- the electronic device 900 may include more or fewer components than those shown in FIG. 9, or may have different component configurations.
- the embodiment of the present application also provides a computer-readable storage medium.
- the above-mentioned computer-readable storage medium stores a computer program, and the computer program is executed by hardware (such as a processor) to implement part or all of the steps of the foregoing method for realizing the split-mirror effect.
- the embodiment of the present application also provides a computer program product.
- when the computer program product runs on the above-mentioned device for realizing the split-mirror effect or the electronic device, it executes part or all of the steps of the above-mentioned method for realizing the split-mirror effect.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a storage disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).
- the disclosed device may also be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection displayed or discussed may be realized through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
- the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the integrated unit may be implemented in the form of hardware or software functional unit.
- the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
- the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product, and the computer software product is stored in a storage medium.
- the software product includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
Claims (25)
- 1. A method for realizing a split-mirror effect, comprising: obtaining a three-dimensional virtual model; and rendering the three-dimensional virtual model with at least two different lens angles of view to obtain virtual images respectively corresponding to the at least two different lens angles of view.
- 根据权利要求1所述的方法,其中,所述三维虚拟模型包括处于三维虚拟场景模型中的三维虚拟人物模型,在所述获取三维虚拟模型之前,所述方法还包括:The method according to claim 1, wherein the three-dimensional virtual model comprises a three-dimensional virtual character model in a three-dimensional virtual scene model, and before the obtaining the three-dimensional virtual model, the method further comprises:获取真实图像,其中,所述真实图像包括真实人物图像;Acquiring a real image, where the real image includes an image of a real person;对所述真实人物图像进行特征提取得到特征信息,其中,所述特征信息包括所述真实人物的动作信息;Performing feature extraction on the real person image to obtain feature information, where the feature information includes action information of the real person;根据所述特征信息生成所述三维虚拟模型,以使得所述三维虚拟模型中的所述三维虚拟人物模型的动作信息与所述真实人物的动作信息对应。The three-dimensional virtual model is generated according to the characteristic information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
- 根据权利要求2所述的方法,其中,所述获取真实图像包括:The method according to claim 2, wherein said obtaining a real image comprises:获取视频流,根据所述视频流中的至少两帧图像得到至少两帧所述真实图像;Obtaining a video stream, and obtaining at least two frames of the real image according to at least two frames of images in the video stream;所述对所述真实人物图像进行特征提取得到特征信息,包括:The performing feature extraction on the real person image to obtain feature information includes:分别对每一帧所述真实人物图像进行特征提取得到对应的特征信息。Perform feature extraction on each frame of the real person image to obtain corresponding feature information.
- 根据权利要求3所述的方法,其中,所述真实图像还包括真实场景图像,所述三维虚拟模型还包括所述三维虚拟场景模型;在所述获取三维虚拟模型之前,所述方法还包括:The method according to claim 3, wherein the real image further comprises a real scene image, and the three-dimensional virtual model further comprises the three-dimensional virtual scene model; before the obtaining the three-dimensional virtual model, the method further comprises:根据所述真实场景图像,构建所述三维虚拟场景模型。According to the real scene image, the three-dimensional virtual scene model is constructed.
- 根据权利要求3或4所述的方法,其中,获取所述至少两个不同的镜头视角,包括:The method according to claim 3 or 4, wherein acquiring the at least two different lens angles includes:根据所述至少两帧所述真实图像,得到所述至少两个不同的镜头视角。According to the at least two frames of the real image, the at least two different lens angles of view are obtained.
- 根据权利要求3或4所述的方法,其中,获取所述至少两个不同的镜头视角,包括:The method according to claim 3 or 4, wherein acquiring the at least two different lens angles includes:根据所述至少两帧所述真实图像分别对应的动作信息,得到所述至少两个不同的镜头视角。The at least two different lens angles of view are obtained according to the action information corresponding to the at least two frames of the real images respectively.
- 根据权利要求3或4所述的方法,其中,获取所述至少两个不同的镜头视角,包括:The method according to claim 3 or 4, wherein acquiring the at least two different lens angles includes:获取背景音乐;Get background music;确定所述背景音乐对应的时间合集,其中,所述时间合集包括至少两个时间段;Determining a time collection corresponding to the background music, wherein the time collection includes at least two time periods;获取所述时间合集中每一个时间段对应的镜头视角。Obtain the lens angle of view corresponding to each time period in the time collection.
- 根据权利要求1所述的方法,其中,所述至少两个不同的镜头视角包括第一镜头视角和第二镜头视角;所述以至少两个不同的镜头视角对所述三维虚拟模型进行渲 染,得到至少两个不同的镜头视角分别对应的虚拟图像,包括:The method according to claim 1, wherein the at least two different lens angles include a first lens angle of view and a second lens angle of view; the rendering of the three-dimensional virtual model with at least two different lens angles, Obtain at least two virtual images corresponding to different lens angles, including:以所述第一镜头视角对所述三维虚拟模型进行渲染,得到第一虚拟图像;Rendering the three-dimensional virtual model with the first lens perspective to obtain a first virtual image;以所述第二镜头视角对所述三维虚拟模型进行渲染,得到第二虚拟图像;Rendering the three-dimensional virtual model with the second lens perspective to obtain a second virtual image;展示根据所述第一虚拟图像和所述第二虚拟图像形成的图像序列。The image sequence formed according to the first virtual image and the second virtual image is displayed.
- 根据权利要求8所述的方法,其中,所述以所述第二镜头视角对所述三维虚拟模型进行渲染,得到第二虚拟图像,包括:The method according to claim 8, wherein said rendering said three-dimensional virtual model with said second lens angle of view to obtain a second virtual image comprises:将所述第一镜头视角下的所述三维虚拟模型进行平移或者旋转,得到所述第二镜头视角下的所述三维虚拟模型;Translate or rotate the three-dimensional virtual model in the first lens angle of view to obtain the three-dimensional virtual model in the second lens angle of view;获取所述第二镜头视角下的所述三维虚拟模型对应的所述第二虚拟图像。Acquiring the second virtual image corresponding to the three-dimensional virtual model in the second lens angle of view.
- 根据权利要求9所述的方法,其中,所述展示根据所述第一图像和所述第二虚拟图像形成的图像序列,包括:The method according to claim 9, wherein the presenting the image sequence formed according to the first image and the second virtual image comprises:在所述第一虚拟图像和所述第二虚拟图像之间插入a帧虚拟图像,使得所述第一虚拟图像平缓切换至所述第二虚拟图像,其中,a是正整数。A frame of virtual image is inserted between the first virtual image and the second virtual image, so that the first virtual image is gently switched to the second virtual image, where a is a positive integer.
- 根据权利要求7至10任一项权利要求所述的方法,其中,所述方法还包括:The method according to any one of claims 7 to 10, wherein the method further comprises:对所述背景音乐进行节拍检测,得到所述背景音乐的节拍合集,其中,所述节拍合集包括多个节拍,所述多个节拍中的每一个节拍对应一个舞台特效;Performing beat detection on the background music to obtain a beat collection of the background music, wherein the beat collection includes multiple beats, and each beat in the multiple beats corresponds to a stage special effect;将所述节拍合集对应的目标舞台特效添加到所述三维虚拟模型中。The target stage special effect corresponding to the beat collection is added to the three-dimensional virtual model.
- A device for producing a multiple camera-angle (split-mirror) effect, comprising: an obtaining unit, configured to obtain a three-dimensional virtual model; and a camera-angle splitting unit, configured to render the three-dimensional virtual model from at least two different camera angles to obtain virtual images respectively corresponding to the at least two different camera angles.
- The device according to claim 12, wherein the three-dimensional virtual model comprises a three-dimensional virtual character model in a three-dimensional virtual scene model, and the device further comprises a feature extraction unit and a three-dimensional virtual model generation unit; the obtaining unit is further configured to acquire a real image before obtaining the three-dimensional virtual model, wherein the real image comprises an image of a real person; the feature extraction unit is configured to perform feature extraction on the real-person image to obtain feature information, wherein the feature information comprises action information of the real person; and the three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that action information of the three-dimensional virtual character model corresponds to the action information of the real person.
- The device according to claim 13, wherein the obtaining unit is configured to acquire a video stream and obtain at least two frames of the real image from at least two frames of the video stream; and the feature extraction unit is configured to perform feature extraction on the real-person image in each frame to obtain corresponding feature information.
- The device according to claim 14, wherein the real image further comprises a real scene image, and the three-dimensional virtual model further comprises the three-dimensional virtual scene model; the device further comprises a three-dimensional virtual scene construction unit, configured to construct the three-dimensional virtual scene model according to the real scene image before the obtaining unit obtains the three-dimensional virtual model.
- The device according to claim 14 or 15, further comprising a camera-angle acquisition unit, configured to obtain the at least two different camera angles according to the at least two frames of the real image.
- The device according to claim 14 or 15, further comprising a camera-angle acquisition unit, configured to obtain the at least two different camera angles according to the action information respectively corresponding to the at least two frames of the real image.
- The device according to claim 14 or 15, further comprising a camera-angle acquisition unit, configured to: acquire background music; determine a time set corresponding to the background music, wherein the time set comprises at least two time periods; and obtain the camera angle corresponding to each time period in the time set.
- The device according to claim 12, wherein the at least two different camera angles comprise a first camera angle and a second camera angle, and the camera-angle splitting unit is configured to: render the three-dimensional virtual model from the first camera angle to obtain a first virtual image; render the three-dimensional virtual model from the second camera angle to obtain a second virtual image; and display an image sequence formed from the first virtual image and the second virtual image.
- The device according to claim 19, wherein the camera-angle splitting unit is configured to translate or rotate the three-dimensional virtual model under the first camera angle to obtain the three-dimensional virtual model under the second camera angle, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second camera angle.
- The device according to claim 20, wherein the camera-angle splitting unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image transitions smoothly to the second virtual image, where a is a positive integer.
- The device according to any one of claims 18 to 21, further comprising a beat detection unit and a stage special-effect generation unit; the beat detection unit is configured to perform beat detection on the background music to obtain a beat set of the background music, wherein the beat set comprises a plurality of beats, and each beat of the plurality of beats corresponds to a stage special effect; and the stage special-effect generation unit is configured to add the target stage special effect corresponding to the beat set to the three-dimensional virtual model.
- An electronic device, comprising a processor, a communication interface, and a memory; the memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to communicate with other devices under control of the processor, wherein the processor, when executing the instructions, implements the method according to any one of claims 1 to 11.
- A computer-readable storage medium storing a computer program, wherein the computer program is executed by hardware to implement the method according to any one of claims 1 to 11.
- A computer program product which, when read and executed by a computer, implements the method according to any one of claims 1 to 11.
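The core pipeline claimed above (obtain a 3D model, render it from at least two camera angles, and insert a frames of in-between views so the first image switches smoothly to the second, as in claims 1 and 8-10) can be sketched as follows. This is an illustrative sketch only: the claims prescribe no particular projection model or library, so the pinhole projection, the yaw-orbit camera, and all function names here are assumptions introduced for demonstration.

```python
# Illustrative sketch only -- the patent does not specify this math or these names.
# Renders a tiny 3D "model" (8 cube vertices) from two camera angles and
# inserts `a` interpolated transition frames between them (cf. claims 1, 8-10).
import numpy as np

def rotation_y(angle_rad):
    """Rotation matrix about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def render(vertices, yaw_rad, distance=5.0, focal=1.0):
    """Project 3D vertices to 2D from a camera orbiting the model at `yaw_rad`."""
    cam = vertices @ rotation_y(yaw_rad).T   # rotate model into the camera frame
    z = cam[:, 2] + distance                 # push the model in front of the camera
    return np.stack([focal * cam[:, 0] / z,  # pinhole projection to image plane
                     focal * cam[:, 1] / z], axis=1)

def multi_angle_sequence(vertices, yaw_first, yaw_second, a=3):
    """First view, `a` interpolated transition frames, then the second view."""
    yaws = np.linspace(yaw_first, yaw_second, a + 2)  # a+2 poses incl. endpoints
    return [render(vertices, y) for y in yaws]

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)
seq = multi_angle_sequence(cube, 0.0, np.pi / 2, a=3)
print(len(seq), seq[0].shape)  # 5 frames in total, each with 8 projected points
```

Interpolating the camera pose rather than the images themselves is one plausible reading of claim 9's "translate or rotate the model under the first camera angle"; a real implementation would feed each pose to a renderer instead of this toy projector.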
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020227018465A KR20220093342A (en) | 2019-12-03 | 2020-03-31 | Method, device and related products for implementing split mirror effect |
JP2022528715A JP7457806B2 (en) | 2019-12-03 | 2020-03-31 | Lens division realization method, device and related products |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911225211.4A CN111080759B (en) | 2019-12-03 | 2019-12-03 | Method and device for realizing split mirror effect and related product |
CN201911225211.4 | 2019-12-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021109376A1 true WO2021109376A1 (en) | 2021-06-10 |
Family
ID=70312713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/082545 WO2021109376A1 (en) | 2019-12-03 | 2020-03-31 | Method and device for producing multiple camera-angle effect, and related product |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP7457806B2 (en) |
KR (1) | KR20220093342A (en) |
CN (1) | CN111080759B (en) |
TW (1) | TWI752502B (en) |
WO (1) | WO2021109376A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113630646A (en) * | 2021-07-29 | 2021-11-09 | 北京沃东天骏信息技术有限公司 | Data processing method and device, equipment and storage medium |
CN114900743A (en) * | 2022-04-28 | 2022-08-12 | 中德(珠海)人工智能研究院有限公司 | Scene rendering transition method and system based on video plug flow |
CN115883814A (en) * | 2023-02-23 | 2023-03-31 | 阿里巴巴(中国)有限公司 | Method, device and equipment for playing real-time video stream |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI762375B (en) * | 2021-07-09 | 2022-04-21 | 國立臺灣大學 | Semantic segmentation failure detection system |
CN114157879A (en) * | 2021-11-25 | 2022-03-08 | 广州林电智能科技有限公司 | Full scene virtual live broadcast processing equipment |
CN114630173A (en) * | 2022-03-03 | 2022-06-14 | 北京字跳网络技术有限公司 | Virtual object driving method and device, electronic equipment and readable storage medium |
CN114745598B (en) * | 2022-04-12 | 2024-03-19 | 北京字跳网络技术有限公司 | Video data display method and device, electronic equipment and storage medium |
CN117014651A (en) * | 2022-04-29 | 2023-11-07 | 北京字跳网络技术有限公司 | Video generation method and device |
CN115442542B (en) * | 2022-11-09 | 2023-04-07 | 北京天图万境科技有限公司 | Method and device for splitting mirror |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106157359A (en) * | 2015-04-23 | 2016-11-23 | 中国科学院宁波材料技术与工程研究所 | A kind of method for designing of virtual scene experiencing system |
CN106295955A (en) * | 2016-07-27 | 2017-01-04 | 邓耀华 | A kind of client based on augmented reality is to the footwear custom-built system of factory and implementation method |
US10068376B2 (en) * | 2016-01-11 | 2018-09-04 | Microsoft Technology Licensing, Llc | Updating mixed reality thumbnails |
CN108604121A (en) * | 2016-05-10 | 2018-09-28 | 谷歌有限责任公司 | Both hands object manipulation in virtual reality |
CN108830894A (en) * | 2018-06-19 | 2018-11-16 | 亮风台(上海)信息科技有限公司 | Remote guide method, apparatus, terminal and storage medium based on augmented reality |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201333882A (en) * | 2012-02-14 | 2013-08-16 | Univ Nat Taiwan | Augmented reality apparatus and method thereof |
US20150049078A1 (en) * | 2013-08-15 | 2015-02-19 | Mep Tech, Inc. | Multiple perspective interactive image projection |
CN106385576B (en) * | 2016-09-07 | 2017-12-08 | 深圳超多维科技有限公司 | Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment |
CN107103645B (en) * | 2017-04-27 | 2018-07-20 | 腾讯科技(深圳)有限公司 | virtual reality media file generation method and device |
CN107194979A (en) * | 2017-05-11 | 2017-09-22 | 上海微漫网络科技有限公司 | The Scene Composition methods and system of a kind of virtual role |
US10278001B2 (en) * | 2017-05-12 | 2019-04-30 | Microsoft Technology Licensing, Llc | Multiple listener cloud render with enhanced instant replay |
JP6469279B1 (en) | 2018-04-12 | 2019-02-13 | 株式会社バーチャルキャスト | Content distribution server, content distribution system, content distribution method and program |
CN108538095A (en) * | 2018-04-25 | 2018-09-14 | 惠州卫生职业技术学院 | Medical teaching system and method based on virtual reality technology |
JP6595043B1 (en) | 2018-05-29 | 2019-10-23 | 株式会社コロプラ | GAME PROGRAM, METHOD, AND INFORMATION PROCESSING DEVICE |
CN108961376A (en) * | 2018-06-21 | 2018-12-07 | 珠海金山网络游戏科技有限公司 | The method and system of real-time rendering three-dimensional scenic in virtual idol live streaming |
CN108833740B (en) * | 2018-06-21 | 2021-03-30 | 珠海金山网络游戏科技有限公司 | Real-time prompter method and device based on three-dimensional animation live broadcast |
CN108877838B (en) * | 2018-07-17 | 2021-04-02 | 黑盒子科技(北京)有限公司 | Music special effect matching method and device |
JP6538942B1 (en) * | 2018-07-26 | 2019-07-03 | 株式会社Cygames | INFORMATION PROCESSING PROGRAM, SERVER, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING APPARATUS |
CN110139115B (en) * | 2019-04-30 | 2020-06-09 | 广州虎牙信息科技有限公司 | Method and device for controlling virtual image posture based on key points and electronic equipment |
CN110335334A (en) * | 2019-07-04 | 2019-10-15 | 北京字节跳动网络技术有限公司 | Avatars drive display methods, device, electronic equipment and storage medium |
CN110427110B (en) * | 2019-08-01 | 2023-04-18 | 广州方硅信息技术有限公司 | Live broadcast method and device and live broadcast server |
- 2019
- 2019-12-03 CN CN201911225211.4A patent/CN111080759B/en active Active
- 2020
- 2020-03-31 KR KR1020227018465A patent/KR20220093342A/en active Search and Examination
- 2020-03-31 JP JP2022528715A patent/JP7457806B2/en active Active
- 2020-03-31 WO PCT/CN2020/082545 patent/WO2021109376A1/en active Application Filing
- 2020-05-20 TW TW109116665A patent/TWI752502B/en active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106157359A (en) * | 2015-04-23 | 2016-11-23 | 中国科学院宁波材料技术与工程研究所 | A kind of method for designing of virtual scene experiencing system |
US10068376B2 (en) * | 2016-01-11 | 2018-09-04 | Microsoft Technology Licensing, Llc | Updating mixed reality thumbnails |
CN108604121A (en) * | 2016-05-10 | 2018-09-28 | 谷歌有限责任公司 | Both hands object manipulation in virtual reality |
CN106295955A (en) * | 2016-07-27 | 2017-01-04 | 邓耀华 | A kind of client based on augmented reality is to the footwear custom-built system of factory and implementation method |
CN108830894A (en) * | 2018-06-19 | 2018-11-16 | 亮风台(上海)信息科技有限公司 | Remote guide method, apparatus, terminal and storage medium based on augmented reality |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113630646A (en) * | 2021-07-29 | 2021-11-09 | 北京沃东天骏信息技术有限公司 | Data processing method and device, equipment and storage medium |
CN114900743A (en) * | 2022-04-28 | 2022-08-12 | 中德(珠海)人工智能研究院有限公司 | Scene rendering transition method and system based on video plug flow |
CN115883814A (en) * | 2023-02-23 | 2023-03-31 | 阿里巴巴(中国)有限公司 | Method, device and equipment for playing real-time video stream |
Also Published As
Publication number | Publication date |
---|---|
CN111080759B (en) | 2022-12-27 |
JP2023501832A (en) | 2023-01-19 |
JP7457806B2 (en) | 2024-03-28 |
KR20220093342A (en) | 2022-07-05 |
TWI752502B (en) | 2022-01-11 |
TW202123178A (en) | 2021-06-16 |
CN111080759A (en) | 2020-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021109376A1 (en) | Method and device for producing multiple camera-angle effect, and related product | |
CN111970535B (en) | Virtual live broadcast method, device, system and storage medium | |
KR102503413B1 (en) | Animation interaction method, device, equipment and storage medium | |
WO2022001593A1 (en) | Video generation method and apparatus, storage medium and computer device | |
US9654734B1 (en) | Virtual conference room | |
CN113240782B (en) | Streaming media generation method and device based on virtual roles | |
CN110968736B (en) | Video generation method and device, electronic equipment and storage medium | |
US20160110922A1 (en) | Method and system for enhancing communication by using augmented reality | |
KR102491140B1 (en) | Method and apparatus for generating virtual avatar | |
JP6683864B1 (en) | Content control system, content control method, and content control program | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
CN113840049A (en) | Image processing method, video flow scene switching method, device, equipment and medium | |
US20230368461A1 (en) | Method and apparatus for processing action of virtual object, and storage medium | |
CN114363689B (en) | Live broadcast control method and device, storage medium and electronic equipment | |
US20240163528A1 (en) | Video data generation method and apparatus, electronic device, and readable storage medium | |
US10955911B2 (en) | Gazed virtual object identification module, a system for implementing gaze translucency, and a related method | |
KR102200239B1 (en) | Real-time computer graphics video broadcasting service system | |
CN108320331A (en) | A kind of method and apparatus for the augmented reality video information generating user's scene | |
JP2001051579A (en) | Method and device for displaying video and recording medium recording video display program | |
JP2021009351A (en) | Content control system, content control method, and content control program | |
JP2021006886A (en) | Content control system, content control method, and content control program | |
WO2023029289A1 (en) | Model evaluation method and apparatus, storage medium, and electronic device | |
KR102622709B1 (en) | Method and Apparatus for generating 360 degree image including 3-dimensional virtual object based on 2-dimensional image | |
WO2022160867A1 (en) | Remote reproduction method, system, and apparatus, device, medium, and program product | |
Arita et al. | Non-verbal human communication using avatars in a virtual space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20897576; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022528715; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 20227018465; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.10.2022) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20897576; Country of ref document: EP; Kind code of ref document: A1 |