CN111080759B - Method and device for realizing split mirror effect and related product - Google Patents

Method and device for realizing split mirror effect and related product

Info

Publication number
CN111080759B
CN111080759B (application CN201911225211.4A)
Authority
CN
China
Prior art keywords
dimensional virtual
image
real
model
virtual
Prior art date
Legal status
Active
Application number
CN201911225211.4A
Other languages
Chinese (zh)
Other versions
CN111080759A (en)
Inventor
刘文韬
郑佳宇
黄展鹏
李佳桦
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201911225211.4A
Priority to JP2022528715A (published as JP7457806B2)
Priority to KR1020227018465A (published as KR20220093342A)
Priority to PCT/CN2020/082545 (published as WO2021109376A1)
Publication of CN111080759A
Priority to TW109116665A (published as TWI752502B)
Application granted
Publication of CN111080759B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/205 3D [Three Dimensional] animation driven by audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method and a device for realizing a split mirror effect, and a related product. The method includes: acquiring a three-dimensional virtual model; and rendering the three-dimensional virtual model with at least two different lens visual angles to obtain virtual images respectively corresponding to the at least two different lens visual angles.

Description

Method and device for realizing split mirror effect and related product
Technical Field
The present application relates to the field of virtual technologies, and in particular, to a method and an apparatus for implementing a split mirror effect, and a related product.
Background
In recent years, "virtual characters" have frequently appeared in our lives, and for example, there are known applications of virtual idols such as "beginner future", "loving" and the like in the field of music, or applications of virtual moderators in live news, and the like. Since the virtual character can replace the real character to perform activities in the network world and the user can set the appearance, shape and the like of the virtual character according to the requirement, the virtual character gradually becomes a communication mode between people.
At present, virtual characters on the network are generally generated using motion capture technology: captured images of a real person are analyzed by image recognition methods, and the real person's actions and expressions are migrated onto the virtual character, so that the virtual character can reproduce the actions and expressions of the real person.
Disclosure of Invention
The embodiment of the application discloses a method and a device for realizing a lens splitting effect and a related product.
In a first aspect, the present application provides a method for implementing a mirror splitting effect, including:
acquiring a three-dimensional virtual model;
and rendering the three-dimensional virtual model with at least two different lens visual angles to obtain virtual images respectively corresponding to the at least two different lens visual angles.
According to the method, the three-dimensional virtual model is acquired and rendered with at least two different lens visual angles to obtain virtual images corresponding to the at least two different lens visual angles, so that the user can see the virtual images at different lens visual angles, which brings the user a rich visual experience.
In this embodiment, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and before acquiring the three-dimensional virtual model, the method further includes: acquiring a real image, wherein the real image comprises a real person image; extracting the features of the real person image to obtain feature information, wherein the feature information comprises action information of the real person; and generating a three-dimensional virtual model according to the characteristic information so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
The benefit of this is that, by extracting features from the acquired real person image to generate the three-dimensional virtual model, the three-dimensional virtual character model in the three-dimensional virtual model can reproduce the facial expressions and limb actions of the real person. The audience can thus learn the real person's facial expressions and limb actions by watching the virtual images corresponding to the three-dimensional virtual model, enabling more flexible interaction between the audience and the real-person anchor.
In the embodiment of the present application, acquiring the real image includes: acquiring a video stream, and obtaining at least two frames of real images from at least two images in the video stream. Extracting features from the real person image to obtain feature information includes: extracting features from each frame of real person image separately to obtain the corresponding feature information.
Therefore, the three-dimensional virtual model can change in real time according to the collected multiple frames of real images, so that a user can see the dynamic change process of the three-dimensional virtual model under different lens visual angles.
In this embodiment of the present application, the real images further include real scene images, the three-dimensional virtual model further includes a three-dimensional virtual scene model, and before obtaining the three-dimensional virtual model, the method further includes: constructing the three-dimensional virtual scene model according to the real scene image.
It can be seen that the method can also construct the three-dimensional virtual scene model in the three-dimensional virtual model from the real scene image, which offers more choice than being limited to selecting a specific preset three-dimensional virtual scene.
In an embodiment of the present application, the step of obtaining at least two different angles of view includes: and obtaining at least two different lens visual angles according to the at least two frames of real images.
It can be seen that each frame of real image corresponds to one lens visual angle, and multiple frames of real images correspond to multiple lens visual angles, so at least two different lens visual angles can be obtained from at least two frames of real images; these are used to render the three-dimensional virtual model at different lens visual angles and provide the user with a rich visual experience.
In an embodiment of the present application, the step of obtaining at least two different angles of view includes: and obtaining at least two different lens visual angles according to the action information respectively corresponding to the at least two frames of real images.
Therefore, the lens visual angle is determined according to the action information of the real person in the real image, so that the action of the corresponding three-dimensional virtual character model can be shown enlarged in the picture. This makes it convenient for the user to learn the real person's actions by watching the virtual image, and improves interactivity and interest.
In an embodiment of the present application, the step of obtaining at least two different lens visual angles includes: acquiring background music; determining a time set corresponding to the background music, wherein the time set includes at least two time periods; and acquiring a lens visual angle corresponding to each time period in the time set.
Therefore, by analyzing the background music and determining the time set corresponding to it so as to obtain a plurality of different lens visual angles, the method increases the diversity of lens visual angles and gives users a richer visual experience.
In this embodiment of the present application, the at least two different lens perspectives include a first lens perspective and a second lens perspective, and rendering the three-dimensional virtual model with the at least two different lens perspectives, and obtaining virtual images corresponding to the at least two different lens perspectives respectively includes: rendering the three-dimensional virtual model at a first lens visual angle to obtain a first virtual image; rendering the three-dimensional virtual model at a second lens visual angle to obtain a second virtual image; a sequence of images formed from the first virtual image and the second virtual image is presented.
It can be seen that the three-dimensional virtual model is rendered at the first lens view angle and the second lens view angle respectively, so that the user can view the three-dimensional virtual model at the first lens view angle and the three-dimensional virtual model at the second lens view angle, and rich visual experience is provided for the user.
In this embodiment of the present application, rendering the three-dimensional virtual model with the second lens perspective to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens visual angle to obtain a three-dimensional virtual model under a second lens visual angle; and acquiring a second virtual image corresponding to the three-dimensional virtual model under the second lens visual angle.
It can be seen that the three-dimensional virtual model under the second lens view angle, that is, the second virtual image, can be quickly and accurately obtained by translating or rotating the three-dimensional virtual model under the first lens view angle.
In an embodiment of the present application, presenting the sequence of images formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
It can be seen that inserting a frames of virtual images between the first virtual image and the second virtual image allows the viewer to see the entire process of change from the first virtual image to the second virtual image, rather than only the two images themselves (the first virtual image and the second virtual image), so that the viewer can adapt to the visual difference caused by switching from the first virtual image to the second virtual image.
In an embodiment of the present application, the method further comprises: performing beat detection on the background music to obtain a beat set of the background music, wherein the beat set comprises a plurality of beats, and each beat in the plurality of beats corresponds to a stage special effect; and adding the target stage special effect corresponding to the beat set into the three-dimensional virtual model.
Therefore, the corresponding stage special effect is added to the virtual scene where the virtual character model is located according to the beat information of the music, so that different stage effects are presented for audiences, and the watching experience of the audiences is enhanced.
In a second aspect, the present application provides an apparatus for implementing a split mirror effect, including:
an acquisition unit configured to acquire a three-dimensional virtual model;
and the lens splitting unit is used for rendering the three-dimensional virtual model by using at least two different lens visual angles to obtain virtual images respectively corresponding to the at least two different lens visual angles.
In the device for realizing the split mirror effect, the acquiring unit obtains the three-dimensional virtual model and sends it to the lens splitting unit, and the lens splitting unit renders the three-dimensional virtual model with at least two different lens visual angles to obtain virtual images corresponding to the at least two different lens visual angles, so that the user can see the virtual images at different lens visual angles, which brings the user a rich visual experience.
In this embodiment of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and before acquiring the three-dimensional virtual model, the apparatus further includes: the acquiring unit is further used for acquiring a real image, wherein the real image comprises a real person image; the characteristic extraction unit is used for extracting the characteristics of the real person image to obtain characteristic information, wherein the characteristic information comprises action information of the real person; and a three-dimensional virtual model generation unit for generating a three-dimensional virtual model according to the characteristic information so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
The feature extraction unit extracts features from the real person image acquired by the acquiring unit and sends the feature information to the three-dimensional virtual model generation unit, which generates the three-dimensional virtual model according to the feature information. The three-dimensional virtual character model in the three-dimensional virtual model can thus reproduce the facial expressions and limb actions of the real person, and the audience can learn them by watching the virtual images corresponding to the three-dimensional virtual model, enabling more flexible interaction between the audience and the real-person anchor.
In an embodiment of the present application, the obtaining unit is further configured to: acquiring a video stream, and obtaining at least two real images according to at least two images in the video stream; the feature extraction unit is further configured to: and respectively extracting the characteristics of each frame of real figure image to obtain corresponding characteristic information.
The acquiring unit may acquire the video stream and generate the corresponding three-dimensional virtual model according to the multiple frames of real images in the video stream, so that the generated three-dimensional virtual model may change in real time according to the multiple frames of real images acquired, and a user may view a dynamic change process of the three-dimensional virtual model at different lens viewing angles.
In this embodiment of the present application, the real images further include real scene images, the three-dimensional virtual model further includes a three-dimensional virtual scene model, and before obtaining the three-dimensional virtual model, the apparatus further includes: and the three-dimensional virtual scene image construction unit is used for constructing a three-dimensional virtual scene image according to the real scene image.
It can be seen that the three-dimensional virtual scene image construction unit can construct the three-dimensional virtual scene image in the three-dimensional virtual model from the real scene image, which offers more choice than being limited to selecting a specific preset three-dimensional virtual scene image.
In this embodiment of the present application, the apparatus further includes a lens angle obtaining unit, where the lens angle obtaining unit is configured to obtain at least two different lens angles, and the lens angle obtaining unit is further configured to: and obtaining at least two different lens visual angles according to the at least two frames of real images.
Therefore, the lens angle obtaining unit can obtain a plurality of different lens visual angles from the acquired multiple frames of real images, and the three-dimensional virtual model is rendered at these lens visual angles, providing a rich visual experience for the user.
In an embodiment of the present application, the lens angle obtaining unit is configured to: and obtaining at least two different lens visual angles according to the action information respectively corresponding to the at least two frames of real images.
Therefore, the lens angle obtaining unit determines the lens visual angle according to the action information of the real person in the real image, so that the action of the corresponding three-dimensional virtual character model can be shown enlarged in the picture. This makes it convenient for the user to learn the real person's actions by watching the virtual image, and improves interactivity and interest.
In an embodiment of the present application, the lens angle obtaining unit is configured to: acquire background music; determine a time set corresponding to the background music, wherein the time set includes at least two time periods; and acquire a lens visual angle corresponding to each time period in the time set.
It can be seen that the lens visual angle acquiring unit acquires a plurality of different lens visual angles by analyzing the background music and determining the time set corresponding to the background music.
In this embodiment of the present application, the at least two different lens views include a first lens view and a second lens view, and the lens splitting unit is specifically configured to: rendering the three-dimensional virtual model at a first lens visual angle to obtain a first virtual image; rendering the three-dimensional virtual model at a second lens visual angle to obtain a second virtual image; a sequence of images formed from the first virtual image and the second virtual image is shown.
Therefore, the lens splitting unit can respectively render the three-dimensional virtual model by utilizing the first lens visual angle and the second lens visual angle, so that the user can view the three-dimensional virtual model under the first lens visual angle and the three-dimensional virtual model under the second lens visual angle, and the viewing comfort of the user is improved.
In this embodiment of the present application, the mirror splitting unit is further configured to: translating or rotating the three-dimensional virtual model under the first lens visual angle to obtain a three-dimensional virtual model under a second lens visual angle; and acquiring a second virtual image corresponding to the three-dimensional virtual model under the second lens visual angle.
It can be seen that the lens splitting unit can rapidly and accurately obtain the three-dimensional virtual model under the second lens viewing angle, namely the second virtual image, by translating or rotating the three-dimensional virtual model under the first lens viewing angle.
In this embodiment of the present application, the mirror splitting unit is further configured to: inserting an a-frame virtual image between the first virtual image and the second virtual image such that the first virtual image is smoothly switched to the second virtual image, wherein a is a positive integer.
It can be seen that, by inserting a frames of virtual images between the first virtual image and the second virtual image, the mirror splitting unit allows the viewer to see the entire process of change from the first virtual image to the second virtual image, rather than only the two images themselves (the first virtual image and the second virtual image), so that the viewer can adapt to the visual difference caused by switching from the first virtual image to the second virtual image.
In an embodiment of the present application, the apparatus further includes: the system comprises a beat detection unit, a stage effect generation unit and a stage effect generation unit, wherein the beat detection unit is used for carrying out beat detection on background music to obtain a beat set of the background music, the beat set comprises a plurality of beats, and each beat in the plurality of beats corresponds to a stage effect; and the stage special effect generating unit is used for adding the target stage special effect corresponding to the beat set into the three-dimensional virtual model.
Therefore, the beat detection unit detects the beats of the background music, so that the stage special effect generation unit can perform different rendering processing on the three-dimensional virtual model at different beats, different stage effects are presented for audiences, and the watching experience of the audiences is enhanced.
In a third aspect, the present application provides an electronic device, comprising: a processor, a communication interface, and a memory; the memory is for storing instructions, the processor is for executing the instructions, and the communication interface is for communicating with other devices under control of the processor, wherein execution of the instructions by the processor causes the electronic device to carry out the method according to any one of the first aspect as described above.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program, the computer program being executed by hardware to implement any of the methods of the first aspect.
In a fifth aspect, the present application provides a computer program product, which when read and executed by a computer, performs the method of any one of the above first aspects.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the background art, the drawings required for describing the embodiments of the present application are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a specific application scenario provided herein;
FIG. 2 is a schematic diagram of one possible three-dimensional virtual model provided herein;
fig. 3 is a schematic flowchart of a method for implementing a split-mirror effect provided in the present application;
FIG. 4 is a schematic diagram of an interpolation curve provided herein;
FIG. 5 is a schematic flow chart diagram of one embodiment provided herein;
FIG. 6 is a schematic diagram of a split-mirror rule provided in the present application;
FIG. 7A is a diagram illustrating the effects of a possible virtual image provided by the present application;
FIG. 7B is a diagram illustrating the effect of a possible virtual image provided by the present application;
FIG. 7C is a diagram illustrating the effects of a possible virtual image provided by the present application;
FIG. 7D is a diagram illustrating the effects of one possible virtual image provided by the present application;
FIG. 8 is a schematic structural diagram of an apparatus for implementing a split mirror effect provided in the present application;
fig. 9 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application.
The method, device and related product for realizing the split mirror effect of the present application can be applied to many fields such as social networking, entertainment and education; for example, they can be used for social interaction in virtual live broadcasts and virtual communities, for holding virtual concerts, or for classroom teaching, and the like. For ease of understanding, a specific application scenario of the embodiment of the present application is described in detail below, taking virtual live broadcast as an example.
Virtual live broadcast is a live broadcast mode which replaces a live anchor with a virtual character on a live broadcast platform. Because the virtual character has rich expressive force and is more in line with the propagation environment of the social network, the virtual live broadcast industry develops rapidly. In the process of virtual live broadcast, the facial expression and the action of the live anchor are applied to the virtual character model by using computer technologies such as facial expression capture, action capture, sound processing and the like, so that the interaction between the audience and the virtual anchor in a video website or a social network site is realized.
In order to save live broadcast cost and post-production cost, a user usually directly uses a mobile phone, a tablet personal computer and other terminal equipment for live broadcast. Referring to fig. 1, fig. 1 is a schematic view of a specific application scenario provided in the present application, in a live broadcast process shown in fig. 1, an image capturing device 110 captures an actual human anchor, and transmits a captured actual human image to a server 120 through a network for processing, and then the server sends a generated virtual image to a user terminal 130, so that different viewers view the entire live broadcast process through corresponding user terminals 130.
It can be seen that although virtual live broadcast in this manner is low in cost, because only a single image pickup device 110 captures the live anchor, the generated virtual anchor is tied to the position of the image pickup device 110. That is, a viewer can only see the virtual character from a specific viewing angle, and that angle depends on the relative position between the image pickup device 110 and the live anchor, so the presented live broadcast effect is unsatisfactory. For example, problems such as stiff movement of the virtual anchor, unsmooth shot switching, or monotonous shot pictures often occur during virtual live broadcast, causing visual fatigue in viewers and failing to give them an immersive experience.
Similarly, problems arise in other application scenarios. For example, in online teaching, teachers impart knowledge to students through video, but this is often tedious: the teacher in the video cannot know in real time how well the students grasp the knowledge points, the students can only see the teacher or the lecture slides from a single viewing angle, and students tire easily, so the teaching effect of video teaching is greatly reduced compared with on-site teaching. For another example, when a concert cannot be held as planned because of weather, venue or other limitations, a singer can hold a virtual concert in a studio to simulate the scene of a real concert; to achieve this, multiple cameras generally have to be set up to shoot the singer.
In order to solve the problems that frequently appear in these application scenarios, such as single-viewing-angle pictures and unsmooth shot switching, the present application provides a method for realizing the split mirror effect: a three-dimensional virtual model is generated from the acquired real images, a plurality of different lens visual angles are obtained from the background music or the actions of the real person, and the three-dimensional virtual model is rendered at these different lens visual angles to obtain the corresponding virtual images, simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the audience's viewing experience. In addition, the method analyzes the beats of the background music and adds corresponding stage special effects to the three-dimensional virtual model according to the beat information, presenting different stage effects to the audience and further enhancing the viewing experience.
Next, a specific process of generating a three-dimensional virtual model from a real image in the embodiment of the present application is first explained.
In the embodiment of the application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene. Taking FIG. 2 as an example, FIG. 2 shows a schematic diagram of a possible three-dimensional virtual model. In the three-dimensional virtual model shown in FIG. 2, the three-dimensional virtual character model has both hands raised in front of the chest; to highlight the contrast, the upper left corner of FIG. 2 also shows the real image acquired by the device for realizing the split mirror effect, in which the real person also has both hands raised in front of the chest. In other words, the three-dimensional virtual character model is consistent with the actions of the real person. It can be understood that FIG. 2 is only an example. In practical applications, the real image acquired by the device may be a three-dimensional image or a two-dimensional image; the number of persons in the real image may be one or more; and the actions of the real person may be raising both hands to the chest, lifting the left foot, or other actions. Correspondingly, the number of three-dimensional virtual character models in the three-dimensional virtual model generated from the real person image may be one or more, and the actions of the three-dimensional virtual character model may be raising both hands to the chest, lifting the left foot, or other actions, which is not specifically limited here.
In the embodiment of the application, the device for realizing the split mirror effect shoots the real person to obtain multiple frames of real images I_1, I_2, ..., I_n, and performs feature extraction on the real images I_1, I_2, ..., I_n separately in time order to obtain a plurality of corresponding three-dimensional virtual models M_1, M_2, ..., M_n, where n is a positive integer and the real images I_1, I_2, ..., I_n correspond one-to-one to the three-dimensional virtual models M_1, M_2, ..., M_n; that is, one frame of real person image is used to generate one three-dimensional virtual model. Taking the generation of the three-dimensional virtual model M_i from the real image I_i as an example:
step one, a device for realizing the split mirror effect acquires a real image I i
Wherein, the real image I i The real person image is included, i is a positive integer, and i is more than or equal to 1 and less than or equal to n.
Step two: the device for realizing the split mirror effect extracts features from the real person image in the real image I_i to obtain feature information.
Wherein the feature information includes action information of the real person.
It is understood that the feature information is used for controlling the posture of the three-dimensional virtual character model, the action information in the feature information includes facial expression features and limb action features, the facial expression features are used for describing various emotional states of the character, such as happiness, sadness, surprise, fear, anger or disgust and the like, and the limb action features are used for describing the action state of the real character, such as lifting the left hand, lifting the right foot or jumping and the like. In addition, the characteristic information may further include character information, where the character information includes a plurality of human key points of the real character and corresponding position information thereof, the human key points include human face key points and human skeleton key points, and the position characteristics include position coordinates of the human key points of the real character.
Optionally, the device for realizing the split mirror effect performs image segmentation on the real image I_i to extract the real person image in I_i, and then performs key point detection on the extracted real person image to obtain the plurality of human body key points and their position information. The human body key points include human face key points and human skeleton key points, and may specifically be located in the head region, neck region, shoulder region, spine region, waist region, hip region, wrist region, arm region, knee region, leg region, ankle region, sole region, and the like of the human body. By analyzing the human face key points and their position information, the facial expression features of the real person in the real image I_i are obtained; by analyzing the human skeleton key points and their position information, the skeleton features of the real person in the real image I_i are obtained, and from these the limb action features of the real person.
Optionally, the device for realizing the split mirror effect may input the real image I_i into a neural network for feature extraction and extract the plurality of human body key points after computation through a plurality of convolutional layers. The neural network is obtained through extensive training and may be a convolutional neural network (CNN), a back-propagation neural network (BPNN), a generative adversarial network (GAN), a recurrent neural network (RNN), or the like, which is not specifically limited here. It should be noted that the above human body feature extraction processes may be performed in the same neural network or in different neural networks; for example, the device may use a CNN to extract human face key points to obtain facial expression features, and use a BPNN to extract human skeleton key points to obtain skeleton features and limb action features, which is not specifically limited here. In addition, the above examples of feature information used to drive the three-dimensional virtual character model are merely illustrative; other feature information may also be included in practical applications, which is not specifically limited here.
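As an illustration of this feature-extraction step, the following minimal sketch wraps a hypothetical pretrained keypoint network; the 68-point face layout, the model callable, and all names are assumptions for illustration rather than details from the patent:

```python
# Sketch of step two: extracting face and skeleton key points from one frame.
# 'model' is any callable image -> (K, 3) array of (x, y, confidence);
# the 68-point face split below is an illustrative assumption.
import numpy as np

def extract_features(model, frame: np.ndarray) -> dict:
    keypoints = model(frame)          # (K, 3): x, y, confidence per key point
    face = keypoints[:68]             # face key points drive facial expression
    skeleton = keypoints[68:]         # skeleton key points drive limb actions
    return {"face_keypoints": face, "skeleton_keypoints": skeleton}
```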
Step three: the device for realizing the split mirror effect generates a three-dimensional virtual model M_i according to the feature information, so that the action information of the three-dimensional virtual character model in M_i corresponds to the action information of the real person in the real image I_i.
Optionally, the device for realizing the split mirror effect establishes, through the feature information, a mapping relationship between the human body key points of the real person and the human body key points of the virtual character model, and then controls the expression and posture of the virtual character model according to the mapping relationship, so that the facial expressions and limb actions of the virtual character model are consistent with those of the real person.
Optionally, the device for realizing the split mirror effect labels the human body key points of the real person with serial numbers to obtain labeling information of the human body key points, where the human body key points correspond one-to-one to the labeling information. It then labels the human body key points of the virtual character model according to the labeling information of the real person's key points: for example, if the labeling information of the real person's left wrist is No. 1, the labeling information of the three-dimensional virtual character model's left wrist is No. 1; if the labeling information of the real person's left arm is No. 2, the labeling information of the three-dimensional virtual character model's left arm is No. 2; and so on. Finally, it matches the labeling information of the real person's key points with that of the three-dimensional virtual character model's key points and maps the position information of the real person's key points onto the corresponding key points of the three-dimensional virtual character model, so that the three-dimensional virtual character model can reproduce the facial expressions and limb actions of the real person.
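The labeling-and-mapping scheme just described can be sketched as follows; the label numbers, joint names, and the set_joint_position method are illustrative assumptions:

```python
# Sketch of mapping labeled real-person key points onto the virtual model.
# The numbering (1 = left wrist, 2 = left arm) follows the example above;
# set_joint_position is an assumed method of the virtual model object.
REAL_TO_VIRTUAL = {1: "left_wrist", 2: "left_arm"}

def drive_virtual_character(virtual_model, real_keypoints: dict) -> None:
    """real_keypoints maps label number -> (x, y, z) measured on the real person."""
    for label, position in real_keypoints.items():
        joint = REAL_TO_VIRTUAL.get(label)
        if joint is not None:
            # Copy the real person's key point position to the matching joint,
            # so the virtual character reproduces the pose.
            virtual_model.set_joint_position(joint, position)
```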
In the embodiment of the present application, the real image I_i further includes a real scene image, and the three-dimensional virtual model M_i further includes a three-dimensional virtual scene model. The method of generating the three-dimensional virtual model M_i from the real image I_i therefore further includes: constructing the three-dimensional virtual scene model of M_i from the real scene image in the real image I_i.
Optionally, the device for realizing the split mirror effect first performs image segmentation on the real image I_i to obtain the real scene image of I_i; it then extracts scene features from the real scene image, such as the position features, shape features and size features of objects in the real scene, and constructs the three-dimensional virtual scene model of M_i according to these scene features, so that the three-dimensional virtual scene model in M_i can closely restore the real scene image of I_i.
For simplicity, the above description only explains generating the three-dimensional virtual model M_i from the real image I_i. In fact, the generation processes of the three-dimensional virtual models M_1, M_2, ..., M_{i-1}, M_{i+1}, ..., M_n are similar to that of the three-dimensional virtual model M_i, and details are not repeated here.
It should be noted that the three-dimensional virtual scene model in the three-dimensional virtual model may be constructed from the real scene image in the real image, or may be a user-defined three-dimensional virtual scene model; the facial features of the three-dimensional virtual character model in the three-dimensional virtual model may be constructed from the facial features of the real person in the real image, or may be user-defined facial features, which is not specifically limited here.
Next, the rendering of each of the three-dimensional virtual models M_1, M_2, ..., M_n at a plurality of different lens visual angles according to the embodiment of the present application is described in detail, so that the audience can see virtual images of the same three-dimensional virtual model at different lens visual angles. Taking the three-dimensional virtual model M_i generated from the real image I_i as an example, M_i is rendered at k different lens visual angles to obtain k virtual images Q_i1, Q_i2, ..., Q_ik at different lens visual angles, where k ≥ 2, thereby realizing the split mirror switching effect. The specific process can be expressed as follows:
as shown in fig. 3, fig. 3 is a schematic flow chart of a method for implementing a mirror splitting effect provided by the present application. The implementation method of the split mirror effect of the embodiment includes, but is not limited to, the following steps:
s101, the device for realizing the split mirror effect obtains a three-dimensional virtual model.
In the embodiment of the present application, a three-dimensional virtual model is used for simulating a real character and a real scene, the three-dimensional virtual model includes a three-dimensional virtual character model in the three-dimensional virtual scene model, the three-dimensional virtual model is generated according to a real character image included in the real image, the three-dimensional virtual character model in the three-dimensional virtual model is used for simulating the real character in the real image, and the action of the three-dimensional virtual character model corresponds to the action of the real character. The three-dimensional virtual scene model may be constructed according to a real scene image included in the real image, or may be a preset three-dimensional virtual scene model. When the three-dimensional virtual scene model is constructed by the real scene image, the three-dimensional virtual scene model can be used for simulating the real scene in the real image.
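For concreteness, the pieces that make up one three-dimensional virtual model as described above can be gathered into a structure like the sketch below; all field names are illustrative assumptions:

```python
# Sketch of the data carried by one three-dimensional virtual model.
# Field names are illustrative assumptions, not the patent's definitions.
from dataclasses import dataclass, field

@dataclass
class VirtualModel:
    character_pose: dict        # joint name -> (x, y, z), mirrors the real person
    face_expression: dict       # facial expression features of the character
    scene: str = "preset_stage" # preset scene, or one built from the real scene
    stage_effects: list = field(default_factory=list)  # effects added per beat
```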
S102, the device for realizing the lens splitting effect obtains at least two different lens visual angles.
In the embodiment of the present application, the lens angle of view is used to indicate the position of the camera relative to the subject when the camera is shooting the subject. For example, when a camera shoots directly above an object, a top view of the object can be obtained, and if a lens angle of view corresponding to the camera located directly above the object is V, an image shot by the camera shows the object at the lens angle of view V, that is, the top view of the object.
In the embodiment of the present application, the real image may be captured by a real camera, the position of the real camera relative to the real person may be multiple, and multiple real images captured by multiple real cameras at different positions show the real person at multiple different lens viewing angles.
In an embodiment of the present application, acquiring at least two different angles of view includes: and obtaining at least two different lens visual angles according to the at least two frames of real images.
In the embodiment of the present application, the action information includes the limb actions and the facial expression of the real person in the real image. There are many types of limb actions, which may be one or more of lifting the right hand, lifting the left foot, jumping, and the like; there are likewise many types of facial expressions, which may be one or more of smiling, shedding tears, anger, and the like.
In the embodiment of the application, one action or a combination of multiple actions corresponds to one lens visual angle. For example, when the real person smiles and jumps at the same time, the corresponding lens visual angle is V_1; when the real person only jumps, the corresponding lens visual angle may be V_1 or V_2, and so on; similarly, when the real person only smiles, the corresponding lens visual angle may be V_1, V_2, or V_3, and so on.
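A minimal sketch of such an action-to-view table, echoing the V_1/V_2/V_3 example above; the action names and the particular view chosen per action are assumptions:

```python
# Sketch: mapping a detected action (or combination of actions) to a lens
# visual angle. V1/V2/V3 echo the example above; the mapping is illustrative.
VIEW_BY_ACTIONS = {
    frozenset({"smile", "jump"}): "V1",
    frozenset({"jump"}): "V2",
    frozenset({"smile"}): "V3",
}

def view_for_actions(actions: set, default: str = "V1") -> str:
    return VIEW_BY_ACTIONS.get(frozenset(actions), default)

assert view_for_actions({"smile", "jump"}) == "V1"
```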
In an embodiment of the present application, acquiring at least two different angles of view includes: and obtaining at least two different lens visual angles according to the action information respectively corresponding to the at least two frames of real images.
In the embodiment of the present application, the real image may be one or more frames in a video stream, where the video stream includes image information and background music information, and one frame of image corresponds to one frame of music.
In the embodiment of the present application, acquiring at least two different lens visual angles includes: acquiring background music; determining a time set corresponding to the background music, wherein the time set includes at least two time periods; and acquiring a lens visual angle corresponding to each time period in the time set.
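A sketch of the time-period lookup in the acquisition step above; the schedule values and view names are illustrative assumptions:

```python
# Sketch: choosing a lens visual angle per time period of the background music.
# The (start_second, view) schedule below is an illustrative assumption.
from bisect import bisect_right

VIEW_SCHEDULE = [(0.0, "V1_front"), (30.0, "V2_left"), (60.0, "V3_overhead")]

def view_for_time(t: float) -> str:
    """Return the lens visual angle for playback time t (in seconds)."""
    starts = [start for start, _ in VIEW_SCHEDULE]
    return VIEW_SCHEDULE[bisect_right(starts, t) - 1][1]
```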
S103, the device for realizing the split mirror effect renders the three-dimensional virtual model at the at least two different lens visual angles to obtain virtual images respectively corresponding to the at least two different lens visual angles.
In this embodiment of the application, the at least two different lens visual angles include a first lens visual angle and a second lens visual angle, and rendering the three-dimensional virtual model at the at least two different lens visual angles to obtain the virtual images respectively corresponding to them includes:
and S1031, rendering the three-dimensional virtual model according to the first lens visual angle to obtain a first virtual image.
S1032, rendering the three-dimensional virtual model with the second lens visual angle to obtain a second virtual image.
In this embodiment of the present application, rendering the three-dimensional virtual model with the second lens perspective to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens visual angle to obtain a three-dimensional virtual model under a second lens visual angle; and acquiring a second virtual image corresponding to the three-dimensional virtual model under the second lens visual angle.
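The translate-or-rotate derivation of the second lens visual angle can be sketched as a camera transform; the following is a plain numpy illustration under assumed coordinates, not the patent's implementation:

```python
# Sketch: deriving a second lens visual angle by rotating the first camera
# around the model's vertical axis. Coordinates are illustrative assumptions.
import numpy as np

def rotate_camera_y(cam_pos: np.ndarray, target: np.ndarray, deg: float) -> np.ndarray:
    """Rotate the camera position around the target's vertical (y) axis."""
    rad = np.deg2rad(deg)
    c, s = np.cos(rad), np.sin(rad)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return target + rot @ (cam_pos - target)

# First view: front of the model; second view: rotated 90 degrees, a left view.
second_cam = rotate_camera_y(np.array([0.0, 1.6, 3.0]), np.zeros(3), 90.0)
```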
It can be understood that the first lens view angle may be obtained according to a real image, or may be obtained according to action information corresponding to the real image, or may be obtained according to a time set corresponding to background music; similarly, the second lens angle may be obtained according to the real image, or may be obtained according to the action information corresponding to the real image, or may be obtained according to the time set corresponding to the background music, which is not specifically limited in this application.
S1033, showing a sequence of images formed from the first virtual image and the second virtual image.
In an embodiment of the present application, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
Optionally, a frames of virtual images P_1, P_2, ..., P_a are inserted between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image. The insertion time points of the virtual images P_1, P_2, ..., P_a are b_1, b_2, ..., b_a, where the slope of the curve formed by the time points b_1, b_2, ..., b_a first monotonically decreases and then monotonically increases, and a is a positive integer.
For example, FIG. 4 shows a schematic diagram of an interpolation curve. In FIG. 4, the device for realizing the split mirror effect obtains the first virtual image at the 1st minute and the second virtual image at the 2nd minute; the first virtual image presents a front view of the three-dimensional virtual model, and the second virtual image presents a left view of the three-dimensional virtual model. In order to let the viewer see a smooth shot cut, the device inserts a plurality of time points between the 1st minute and the 2nd minute and inserts one frame of virtual image at each time point, for example, virtual image P_1 at 1.4 minutes, virtual image P_2 at 1.65 minutes, virtual image P_3 at 1.8 minutes, and virtual image P_4 at 1.85 minutes, where virtual image P_1 presents the effect of rotating the three-dimensional virtual model 30 degrees to the left, virtual image P_2 presents the effect of rotating it 50 degrees, and virtual images P_3 and P_4 present the effect of rotating it 90 degrees. The viewer thus sees the entire process of the three-dimensional virtual model transforming from the front view to the left view, rather than only the two images (the front view and the left view of the three-dimensional virtual model), and can adapt to the visual difference caused by switching from the front view to the left view.
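One way to realize such smoothing is to blend the rotation angle across the a inserted frames with an ease-in/ease-out curve; the smoothstep function below is a stand-in assumption for the patent's interpolation curve, chosen so the switch starts and ends gently:

```python
# Sketch: easing the camera rotation over the a inserted virtual images.
# smoothstep is an assumed stand-in for the patent's interpolation curve.
import numpy as np

def smoothstep(u: np.ndarray) -> np.ndarray:
    # Derivative rises then falls, so the motion starts and ends gently.
    return 3 * u**2 - 2 * u**3

def inserted_angles(angle0: float, angle1: float, a: int) -> np.ndarray:
    """Rotation angle for each of the a inserted virtual images."""
    u = np.linspace(0.0, 1.0, a + 2)[1:-1]   # interior sample points only
    return angle0 + (angle1 - angle0) * smoothstep(u)

# Front view (0 deg) to left view (90 deg) with a = 4 inserted frames,
# echoing the FIG. 4 example (the exact values differ from the patent's).
print(inserted_angles(0.0, 90.0, 4))
```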
Finally, the method for rendering the three-dimensional virtual model by using the stage special effect in the embodiment of the application so as to show different stage effects for audiences is explained in detail, and the method specifically comprises the following steps:
step one, the device for realizing the split mirror effect detects the beats of the background music to obtain a beat set of the background music.
The beat set includes a plurality of beats, and each beat in the plurality of beats corresponds to a stage special effect. Optionally, the device for realizing the split mirror effect may render the three-dimensional virtual model using shaders and particle special effects respectively; for example, shaders may be used to realize a rotating spotlight effect behind the virtual stage and a sound wave effect on the virtual stage, while particle special effects are used to add visual effects such as sparks, falling leaves, and meteors to the three-dimensional virtual model.
Step two: the device for realizing the split mirror effect adds the target stage special effect corresponding to the beat set into the three-dimensional virtual model.
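As a sketch of this beat-to-effect pipeline, beat detection can be done with the open-source librosa library; the beat-to-effect table below is an illustrative assumption:

```python
# Sketch: detect beats in the background music and schedule one stage special
# effect per beat. librosa's beat tracker is real; the effect names are assumed.
import librosa

def beat_effects(audio_path: str) -> list:
    y, sr = librosa.load(audio_path)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    effects = ["spotlight_sweep", "particle_sparks", "sound_wave"]
    # Cycle through the stage special effects, one per detected beat.
    return [(t, effects[i % len(effects)]) for i, t in enumerate(beat_times)]
```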
According to the method, the three-dimensional virtual model is generated according to the acquired real image, and corresponding lens visual angle switching is carried out according to the acquired real image, the background music and the action of the real person, so that the effect of shooting the three-dimensional virtual model by a plurality of virtual cameras in a virtual scene is simulated, and the viewing experience of audiences is improved. In addition, the method analyzes the beat of the background music, and adds the corresponding stage special effect in the virtual image according to the beat information, so that different stage effects are presented to the audience, and the watching experience of the audience is further enhanced.
In order to facilitate understanding of the method for implementing the split mirror effect according to the above embodiments, the method for implementing the split mirror effect according to the embodiments of the present application is described in detail below by way of example.
Referring to fig. 5, as shown in fig. 5, fig. 5 shows a flowchart of an embodiment.
S201, the device for realizing the split mirror effect obtains a real image and background music, and obtains a first lens visual angle according to the real image.
When the background music plays, the real person performs along with the background music, and a real camera shoots the real person to obtain the real image.
S202, the mirror splitting effect implementation device generates a three-dimensional virtual model according to the real image.
The three-dimensional virtual model is obtained by the device for realizing the split mirror effect at the first moment.
S203, the device for realizing the split mirror effect performs beat detection on the background music to obtain a beat set of the background music, and adds the target stage special effect corresponding to the beat set into the three-dimensional virtual model.
S204, the lens splitting effect implementation device renders the three-dimensional virtual model according to the first lens visual angle to obtain a first virtual image corresponding to the first lens visual angle.
S205, the device for realizing the split-mirror effect determines a time set corresponding to the background music.
The time set includes a plurality of time periods, and each time period in the plurality of time periods corresponds to one lens visual angle.
S206, the device for realizing the split mirror effect judges whether the action information base contains the action information. If the action information base does not contain the action information, S207-S209 are executed; if it does, S210-S212 are executed. The action information here is the action information of the real person in the real image; the action information base includes a plurality of pieces of action information, and each piece of action information corresponds to one lens visual angle.
S207, the split-mirror effect implementation device determines, according to the time collection, a second lens visual angle corresponding to the time period in which the first moment falls.
S208, the split-mirror effect implementation device renders the three-dimensional virtual model according to the second lens visual angle to obtain a second virtual image corresponding to the second lens visual angle.
S209, the split-mirror effect implementation device displays an image sequence formed from the first virtual image and the second virtual image.
S210, the split-mirror effect implementation device determines a third lens visual angle corresponding to the action information.
S211, the split-mirror effect implementation device renders the three-dimensional virtual model according to the third lens visual angle to obtain a third virtual image corresponding to the third lens visual angle.
S212, the split-mirror effect implementation device displays an image sequence formed from the first virtual image and the third virtual image.
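To make the branching in S205-S212 concrete, the following minimal sketch selects a lens visual angle per frame. The action information base, the time script, and all names and values in it are hypothetical illustrations, not the patented implementation.

```python
from bisect import bisect_right

ACTION_CAMERAS = {                 # hypothetical action information base
    "raise_left_foot": "V3",
    "stand": "V4",
}

TIME_SCRIPT = [                    # hypothetical time collection:
    (0, "V1"),                     # (start second, lens visual angle)
    (60, "V2"),
    (180, "V4"),
]

def select_camera(action: str, t_seconds: float) -> str:
    # S206/S210: an action found in the base decides the lens visual angle.
    if action in ACTION_CAMERAS:
        return ACTION_CAMERAS[action]
    # S207: otherwise fall back to the time period containing this moment.
    starts = [start for start, _ in TIME_SCRIPT]
    idx = bisect_right(starts, t_seconds) - 1
    return TIME_SCRIPT[max(idx, 0)][1]

print(select_camera("raise_left_foot", 125.0))  # "V3": action base wins
print(select_camera("unknown_action", 125.0))   # "V2": time script decides
```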
Based on the method shown in fig. 5, the application provides the schematic splitting rule shown in fig. 6; applying the splitting processing and the stage special effect processing to the virtual image according to this rule yields the four virtual-image effect diagrams shown in figs. 7A to 7D.
As shown in fig. 7A, at the 1st minute, the split-mirror effect implementation device shoots the real person at lens visual angle V1 to obtain real image I1 (shown in the upper left corner of fig. 7A), and then obtains three-dimensional virtual model M1 from real image I1. The device performs beat detection on the background music, determines that the beat corresponding to the 1st minute is B1, obtains stage special effect W1 for the 1st minute according to beat B1, and adds stage special effect W1 to three-dimensional virtual model M1. The device determines from the time lens script that the lens visual angle corresponding to the 1st minute is V1. The device detects that the real person's action in the 1st minute is raising both hands to the chest; since this action is not in the action information base, the device displays the virtual image shown in fig. 7A, whose lens visual angle is the same as that of real image I1.
As shown in fig. 7B, at the 2nd minute, the split-mirror effect implementation device shoots the real person at lens visual angle V1 to obtain real image I2 (shown in the upper left corner of fig. 7B), and then obtains three-dimensional virtual model M2 from real image I2. The device performs beat detection on the background music, determines that the beat corresponding to the 2nd minute is B2, obtains stage special effect W2 for the 2nd minute according to beat B2, and adds stage special effect W2 to three-dimensional virtual model M2. The device determines from the preset lens script that the lens visual angle corresponding to the 2nd minute is V2. The device detects that the real person's action in the 2nd minute is lifting both hands; since this action is not in the action information base, the device rotates three-dimensional virtual model M2 to the left and upward to obtain the virtual image corresponding to lens visual angle V2. Because stage special effect W2 has been added to three-dimensional virtual model M2, the virtual image shown in fig. 7B adds a light effect compared with the virtual image shown in fig. 7A.
As shown in fig. 7C, at the 3rd minute, the split-mirror effect implementation device shoots the real person at lens visual angle V1 to obtain real image I3 (shown in the upper left corner of fig. 7C), and then obtains three-dimensional virtual model M3 from real image I3. The device performs beat detection on the background music, determines that the beat corresponding to the 3rd minute is B3, obtains stage special effect W3 for the 3rd minute according to beat B3, and adds stage special effect W3 to three-dimensional virtual model M3. The device determines from the preset lens script that the lens visual angle corresponding to the 3rd minute is V2; however, it detects that the real person's action in the 3rd minute is lifting the left foot, and the lens visual angle corresponding to this action is V3, so the device rotates three-dimensional virtual model M3 to the left to obtain the virtual image corresponding to lens visual angle V3. Because stage special effect W3 has been added to three-dimensional virtual model M3, the light effect in the virtual image shown in fig. 7C differs from that in fig. 7B, and a sound-wave effect appears in the virtual image shown in fig. 7C.
As shown in fig. 7D, at the 4th minute, the split-mirror effect implementation device shoots the real person at lens visual angle V1 to obtain real image I4 (shown in the upper left corner of fig. 7D), and then obtains three-dimensional virtual model M4 from real image I4. The device performs beat detection on the background music, determines that the beat corresponding to the 4th minute is B4, obtains stage special effect W4 for the 4th minute according to beat B4, and adds stage special effect W4 to three-dimensional virtual model M4. The device determines from the preset lens script that the lens visual angle corresponding to the 4th minute is V4; it also detects that the real person's action in the 4th minute is standing, and the lens visual angle corresponding to the standing action is V4, so the device rotates three-dimensional virtual model M4 to the right to obtain the virtual image corresponding to lens visual angle V4. Because stage special effect W4 has been added to three-dimensional virtual model M4, the stage special effect in the virtual image shown in fig. 7D is not the same as that in fig. 7C.
The split-mirror effect implementation device provided by the present application may be a software device or a hardware device. As a software device, it may be deployed alone on a computing device in a cloud environment, or alone on a terminal device. As a hardware device, its internal unit modules may be divided in a variety of ways; each module may be a software module, a hardware module, or partly software and partly hardware, which is not limited by the present application. Fig. 8 shows an exemplary division: an apparatus 800 for implementing the split-mirror effect provided by the present application, including:
an obtaining unit 810, configured to obtain a three-dimensional virtual model;
the lens splitting unit 820 is configured to render the three-dimensional virtual model with at least two different lens viewing angles, so as to obtain virtual images corresponding to the at least two different lens viewing angles.
In this embodiment of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and before obtaining the three-dimensional virtual model, the apparatus further includes: an obtaining unit 810, further configured to obtain a real image, where the real image includes a real person image; a feature extraction unit 830, configured to extract features of the real person image to obtain feature information, where the feature information includes motion information of the real person; a three-dimensional virtual model generation unit 840 for generating a three-dimensional virtual model based on the feature information so that the motion information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the motion information of the real character.
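As an illustration of how units 810-840 cooperate, the sketch below drives a virtual character from detected keypoints. The `detect_keypoints` stub, the 17-joint layout, and the retargeting step are hypothetical stand-ins for whatever pose estimator and character rig an implementation actually uses.

```python
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a human pose estimator returning 17 3D joints."""
    return np.zeros((17, 3))

class VirtualCharacterModel:
    def __init__(self) -> None:
        self.joints = np.zeros((17, 3))

    def apply_motion(self, keypoints: np.ndarray) -> None:
        # Retarget the real person's joints so the virtual character's
        # motion information corresponds to the real character's.
        self.joints = keypoints.copy()

def build_model(frame: np.ndarray) -> VirtualCharacterModel:
    feats = detect_keypoints(frame)     # feature extraction unit 830
    model = VirtualCharacterModel()     # model generation unit 840
    model.apply_motion(feats)
    return model
```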
In an embodiment of the present application, the obtaining unit 810 is further configured to: acquire a video stream and obtain at least two frames of real images from at least two images in the video stream; the feature extraction unit 830 is further configured to: extract the features of each frame of real person image respectively to obtain the corresponding feature information.
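For the video-stream case just described, per-frame acquisition and feature extraction might look like the following, using OpenCV for capture; `detect_keypoints` is the same hypothetical stub as in the previous sketch.

```python
import cv2
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    return np.zeros((17, 3))            # hypothetical pose-estimator stub

def features_from_stream(path: str) -> list:
    cap = cv2.VideoCapture(path)        # video file path or camera index
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(detect_keypoints(frame))  # one feature set per frame
    cap.release()
    return feats
```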
In this embodiment of the present application, the real image further includes a real scene image and the three-dimensional virtual model further includes a three-dimensional virtual scene model; before the three-dimensional virtual model is obtained, the apparatus further includes: a three-dimensional virtual scene construction unit 850, configured to construct the three-dimensional virtual scene model according to the real scene image.
In this embodiment of the present application, the apparatus further includes a lens angle acquiring unit 860, where the lens angle acquiring unit 860 is configured to acquire at least two different lens angles, and the lens angle acquiring unit 860 is further configured to: and obtaining at least two different lens visual angles according to the at least two frames of real images.
In the embodiment of the present application, the lens angle acquiring unit 860 is configured to: and obtaining at least two different lens visual angles according to the action information respectively corresponding to the at least two frames of real images.
In this embodiment, the lens angle acquiring unit 860 is configured to: acquiring background music; determining a time collection corresponding to the background music, wherein the time collection comprises at least two time periods; and acquiring a lens visual angle corresponding to each time period in the time set.
In this embodiment of the application, the at least two different lens visual angles include a first lens visual angle and a second lens visual angle, and the split-mirror unit 820 is specifically configured to: render the three-dimensional virtual model at the first lens visual angle to obtain a first virtual image; render the three-dimensional virtual model at the second lens visual angle to obtain a second virtual image; and display an image sequence formed from the first virtual image and the second virtual image.
In the embodiment of the present application, the split-mirror unit 820 is further configured to: translate or rotate the three-dimensional virtual model under the first lens visual angle to obtain the three-dimensional virtual model under the second lens visual angle, and acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens visual angle.
In the embodiment of the present application, the split-mirror unit 820 is further configured to: insert a frames of virtual images between the first virtual image and the second virtual image so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
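The view switch described by the two paragraphs above can be sketched as follows. This is a minimal illustration only: the yaw-only rotation, the frame count, and the `render` placeholder are assumptions, not the patented renderer.

```python
import numpy as np

def yaw_matrix(theta: float) -> np.ndarray:
    # Rotation about the vertical axis; a real implementation could
    # equally translate the model or move the virtual camera instead.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def render(vertices: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the actual renderer."""
    return np.zeros((720, 1280, 3), dtype=np.uint8)

def switch_views(vertices: np.ndarray, theta1: float, theta2: float,
                 a: int = 5) -> list:
    # Render the first view, `a` interpolated in-between views, and the
    # second view, so the cut from view 1 to view 2 appears smooth.
    frames = []
    for k in range(a + 2):
        theta = theta1 + (theta2 - theta1) * k / (a + 1)
        frames.append(render(vertices @ yaw_matrix(theta).T))
    return frames
```

Rendering a interpolated poses between the two end views is one straightforward way to realize the smooth switching of the claims; an implementation could equally interpolate camera extrinsics rather than the model pose.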
In an embodiment of the present application, the apparatus further includes: the beat detection unit 870 is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes a plurality of beats, and each beat in the plurality of beats corresponds to a stage special effect; and a stage special effect generating unit 880, configured to add the target stage special effect corresponding to the beat set to the three-dimensional virtual model.
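As one possible realization of the beat detection unit 870 and the stage special effect generation unit 880, the sketch below uses the open-source librosa library to extract beat times; the beat-to-effect mapping is a hypothetical illustration, since the text does not prescribe a particular detector.

```python
import librosa

def beat_set(audio_path: str) -> list:
    # Beat detection on the background music: returns beat times in seconds.
    y, sr = librosa.load(audio_path)
    _tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr).tolist()

STAGE_EFFECTS = ["light_beam", "sound_wave", "spotlight"]  # hypothetical names

def effect_at(t: float, beats: list) -> str:
    # Pick the stage special effect of the most recent beat
    # (the first effect is used before any beat has occurred).
    idx = sum(1 for b in beats if b <= t) - 1
    return STAGE_EFFECTS[max(idx, 0) % len(STAGE_EFFECTS)]
```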
The split-mirror effect implementation device generates a three-dimensional virtual model from the acquired real image, obtains a plurality of lens visual angles from the acquired real image, the background music, and the actions of the real person, and switches the lens visual angle of the three-dimensional virtual model among them. This simulates the effect of several virtual cameras shooting the three-dimensional virtual model in a virtual scene, so that users can see the three-dimensional virtual model from several different lens visual angles, improving the viewing experience of the audience. In addition, the device detects the beats of the background music and adds the corresponding stage special effects to the three-dimensional virtual model according to the beat information, presenting different stage effects to the audience and further enhancing the live viewing experience.
Referring to fig. 9, the present application provides a schematic structural diagram of an electronic device 900. The electronic device 900 may be the split-mirror effect implementation device described above, and includes: a processor 910, a communication interface 920, and a memory 930, which are coupled through a bus 940.
the processor 910 may be a Central Processing Unit (CPU), a general purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable Logic Device (PLD), a transistor logic device, a hardware component, or any combination thereof. The processor 910 may implement or perform various exemplary methods described in connection with the present disclosure. Specifically, the processor 910 reads the program code stored in the memory 930, and cooperates with the communication interface 920 to execute some or all of the steps of the method executed by the split-mirror effect implementing apparatus in the above-described embodiment of the present application.
The communication interface 920 may be a wired interface, such as an Ethernet interface, a controller area network interface, a local interconnect network (LIN) interface, or a FlexRay interface, or a wireless interface, such as a cellular network interface or a wireless LAN interface, for communicating with other modules or devices. Specifically, the communication interface 920 is connected to the input/output device 950, which may include devices such as a mouse, a keyboard, and a microphone.
The memory 930 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); it may also include a combination of the above types of memory. The memory 930 may store program code as well as program data. The program code is composed of the codes of some or all of the units in the split-mirror effect implementation apparatus 800 described above, for example, the code of the acquisition unit 810, the code of the split-mirror unit 820, the code of the feature extraction unit 830, the code of the three-dimensional virtual model generation unit 840, the code of the three-dimensional virtual scene construction unit 850, the code of the lens view angle acquisition unit 860, the code of the beat detection unit 870, and the code of the stage special effect generation unit 880. The program data is data generated by the split-mirror effect implementation apparatus 800 during operation, such as real image data, three-dimensional virtual model data, lens view angle data, background music data, and virtual image data.
The bus 940 may be a Controller Area Network (CAN) bus or another internal bus enabling interconnection between the various systems or devices. The bus 940 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or one type of bus.
It should be understood that electronic device 900 may contain more or fewer components than illustrated in fig. 9, or have a different arrangement of components.
The present application further provides a computer-readable storage medium, where a computer program is stored, where the computer program is executed by hardware (for example, a processor, etc.) to implement part or all of the steps in the foregoing method for implementing the split mirror effect.
The application also provides a computer program product, when the computer program product runs on the device for realizing the split mirror effect or the electronic equipment, part or all of the steps of the method for realizing the split mirror effect are executed.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)). The descriptions of the respective embodiments have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the elements may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium may include, for example: a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for realizing a split mirror effect is characterized by comprising the following steps:
acquiring a three-dimensional virtual model, wherein the three-dimensional virtual model comprises a three-dimensional virtual character model in a three-dimensional virtual scene model, and the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of a real character;
rendering the three-dimensional virtual model at least two different lens visual angles to obtain virtual images corresponding to the three-dimensional virtual model under the at least two different lens visual angles, wherein the virtual images are obtained by translating or rotating the three-dimensional virtual model to the corresponding lens visual angles, the corresponding lens visual angles are obtained according to a shooting visual angle of a real character, action information of the three-dimensional virtual character and a time set corresponding to background music, and the real character acts according to the background music;
and displaying an image sequence formed by the virtual images respectively corresponding to the at least two different lens visual angles.
2. The method of claim 1, wherein prior to said obtaining a three-dimensional virtual model, the method further comprises:
acquiring a real image, wherein the real image comprises a real person image;
extracting features of the real person image to obtain feature information, wherein the feature information comprises action information of the real person;
and generating the three-dimensional virtual model according to the characteristic information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real character.
3. The method of claim 2, wherein the acquiring the real image comprises:
acquiring a video stream, and obtaining at least two frames of real images according to at least two frames of images in the video stream;
the feature extraction of the real person image to obtain feature information includes:
and respectively extracting the features of each frame of the real character image to obtain corresponding feature information.
4. The method of claim 3, wherein the real image further comprises a real scene image, the three-dimensional virtual model further comprises the three-dimensional virtual scene model, and prior to said obtaining the three-dimensional virtual model, the method further comprises:
and constructing the three-dimensional virtual scene model according to the real scene image.
5. The method according to claim 3 or 4, wherein the step of acquiring the at least two different lens perspectives comprises:
and obtaining the at least two different lens visual angles according to the at least two frames of real images.
6. The method according to claim 3 or 4, wherein the step of acquiring the at least two different lens perspectives comprises:
and obtaining the at least two different lens visual angles according to the action information respectively corresponding to the at least two frames of real images.
7. The method according to claim 3 or 4, wherein the step of acquiring the at least two different lens perspectives comprises:
acquiring the background music;
determining a time collection corresponding to the background music, wherein the time collection comprises at least two time periods;
and acquiring a lens visual angle corresponding to each time period in the time set.
8. The method of claim 1, wherein the at least two different lens visual angles include a first lens visual angle and a second lens visual angle, and the rendering the three-dimensional virtual model with the at least two different lens visual angles to obtain the virtual images corresponding to the at least two different lens visual angles comprises:
rendering the three-dimensional virtual model according to the first lens visual angle to obtain a first virtual image;
rendering the three-dimensional virtual model at the second lens visual angle to obtain a second virtual image;
displaying a sequence of images formed from the first virtual image and the second virtual image.
9. The method of claim 8, wherein rendering the three-dimensional virtual model at the second lens perspective resulting in a second virtual image comprises:
translating or rotating the three-dimensional virtual model under the first lens visual angle to obtain the three-dimensional virtual model under a second lens visual angle;
and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens visual angle.
10. The method of claim 9, wherein said presenting a sequence of images formed from said first virtual image and said second virtual image comprises:
inserting a frames of virtual images between the first virtual image and the second virtual image such that the first virtual image is smoothly switched to the second virtual image, wherein a is a positive integer.
11. The method according to any one of claims 8 to 10, further comprising:
performing beat detection on the background music to obtain a beat set of the background music, wherein the beat set comprises a plurality of beats, and each beat in the plurality of beats corresponds to a stage special effect;
and adding the target stage special effect corresponding to the beat set into the three-dimensional virtual model.
12. An implementation device of a split mirror effect is characterized by comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a three-dimensional virtual model, the three-dimensional virtual model comprises a three-dimensional virtual character model in a three-dimensional virtual scene model, and the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of a real character;
the lens splitting unit is used for rendering the three-dimensional virtual model at least two different lens visual angles to obtain virtual images corresponding to the three-dimensional virtual model under the at least two different lens visual angles, wherein the virtual images are obtained by translating or rotating the three-dimensional virtual model to the corresponding lens visual angles, the corresponding lens visual angles are obtained according to a shooting visual angle of a real character, action information of the three-dimensional virtual character and a time set corresponding to background music, and the real character acts according to the background music; and displaying an image sequence formed by the virtual images respectively corresponding to the at least two different lens visual angles.
13. An electronic device, characterized in that the electronic device comprises: a processor, a communication interface, and a memory; the memory is configured to store instructions, the processor is configured to execute the instructions, and the communication interface is configured to communicate with other devices under the control of the processor, wherein the processor implements the method of any one of claims 1 to 11 when executing the instructions.
14. A computer-readable storage medium, in which a computer program is stored, the computer program being executable by hardware to implement the method of any one of claims 1 to 11.
CN201911225211.4A 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product Active CN111080759B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201911225211.4A CN111080759B (en) 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product
JP2022528715A JP7457806B2 (en) 2019-12-03 2020-03-31 Lens division realization method, device and related products
KR1020227018465A KR20220093342A (en) 2019-12-03 2020-03-31 Method, device and related products for implementing split mirror effect
PCT/CN2020/082545 WO2021109376A1 (en) 2019-12-03 2020-03-31 Method and device for producing multiple camera-angle effect, and related product
TW109116665A TWI752502B (en) 2019-12-03 2020-05-20 Method for realizing lens splitting effect, electronic equipment and computer readable storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911225211.4A CN111080759B (en) 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product

Publications (2)

Publication Number Publication Date
CN111080759A CN111080759A (en) 2020-04-28
CN111080759B true CN111080759B (en) 2022-12-27

Family

ID=70312713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911225211.4A Active CN111080759B (en) 2019-12-03 2019-12-03 Method and device for realizing split mirror effect and related product

Country Status (5)

Country Link
JP (1) JP7457806B2 (en)
KR (1) KR20220093342A (en)
CN (1) CN111080759B (en)
TW (1) TWI752502B (en)
WO (1) WO2021109376A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI762375B (en) * 2021-07-09 2022-04-21 國立臺灣大學 Semantic segmentation failure detection system
CN113630646A (en) * 2021-07-29 2021-11-09 北京沃东天骏信息技术有限公司 Data processing method and device, equipment and storage medium
CN114157879A (en) * 2021-11-25 2022-03-08 广州林电智能科技有限公司 Full scene virtual live broadcast processing equipment
CN114630173A (en) * 2022-03-03 2022-06-14 北京字跳网络技术有限公司 Virtual object driving method and device, electronic equipment and readable storage medium
CN114745598B (en) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN114900743A (en) * 2022-04-28 2022-08-12 中德(珠海)人工智能研究院有限公司 Scene rendering transition method and system based on video plug flow
CN117014651A (en) * 2022-04-29 2023-11-07 北京字跳网络技术有限公司 Video generation method and device
CN115442542B (en) * 2022-11-09 2023-04-07 北京天图万境科技有限公司 Method and device for splitting mirror
CN115883814A (en) * 2023-02-23 2023-03-31 阿里巴巴(中国)有限公司 Method, device and equipment for playing real-time video stream

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335334A (en) * 2019-07-04 2019-10-15 北京字节跳动网络技术有限公司 Avatars drive display methods, device, electronic equipment and storage medium

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201333882A (en) * 2012-02-14 2013-08-16 Univ Nat Taiwan Augmented reality apparatus and method thereof
US20150049078A1 (en) * 2013-08-15 2015-02-19 Mep Tech, Inc. Multiple perspective interactive image projection
CN106157359B (en) * 2015-04-23 2020-03-10 中国科学院宁波材料技术与工程研究所 Design method of virtual scene experience system
US10068376B2 (en) * 2016-01-11 2018-09-04 Microsoft Technology Licensing, Llc Updating mixed reality thumbnails
US10019131B2 (en) * 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality
CN106295955A (en) * 2016-07-27 2017-01-04 邓耀华 A kind of client based on augmented reality is to the footwear custom-built system of factory and implementation method
CN106385576B (en) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment
CN107103645B (en) * 2017-04-27 2018-07-20 腾讯科技(深圳)有限公司 virtual reality media file generation method and device
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
US10278001B2 (en) * 2017-05-12 2019-04-30 Microsoft Technology Licensing, Llc Multiple listener cloud render with enhanced instant replay
JP6469279B1 (en) 2018-04-12 2019-02-13 株式会社バーチャルキャスト Content distribution server, content distribution system, content distribution method and program
CN108538095A (en) * 2018-04-25 2018-09-14 惠州卫生职业技术学院 Medical teaching system and method based on virtual reality technology
JP6595043B1 (en) 2018-05-29 2019-10-23 株式会社コロプラ GAME PROGRAM, METHOD, AND INFORMATION PROCESSING DEVICE
CN108830894B (en) * 2018-06-19 2020-01-17 亮风台(上海)信息科技有限公司 Remote guidance method, device, terminal and storage medium based on augmented reality
CN108961376A (en) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 The method and system of real-time rendering three-dimensional scenic in virtual idol live streaming
CN108833740B (en) * 2018-06-21 2021-03-30 珠海金山网络游戏科技有限公司 Real-time prompter method and device based on three-dimensional animation live broadcast
CN108877838B (en) * 2018-07-17 2021-04-02 黑盒子科技(北京)有限公司 Music special effect matching method and device
JP6538942B1 (en) 2018-07-26 2019-07-03 株式会社Cygames INFORMATION PROCESSING PROGRAM, SERVER, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING APPARATUS
CN110139115B (en) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Method and device for controlling virtual image posture based on key points and electronic equipment
CN110427110B (en) * 2019-08-01 2023-04-18 广州方硅信息技术有限公司 Live broadcast method and device and live broadcast server

Also Published As

Publication number Publication date
TWI752502B (en) 2022-01-11
WO2021109376A1 (en) 2021-06-10
JP7457806B2 (en) 2024-03-28
KR20220093342A (en) 2022-07-05
TW202123178A (en) 2021-06-16
JP2023501832A (en) 2023-01-19
CN111080759A (en) 2020-04-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40018614)
GR01 Patent grant