CN115379195B - Video generation method, device, electronic equipment and readable storage medium


Info

Publication number
CN115379195B
Authority
CN
China
Prior art keywords
input
images
dimensional
scene
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211034118.7A
Other languages
Chinese (zh)
Other versions
CN115379195A (en)
Inventor
程鹏
刘文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211034118.7A priority Critical patent/CN115379195B/en
Publication of CN115379195A publication Critical patent/CN115379195A/en
Application granted granted Critical
Publication of CN115379195B publication Critical patent/CN115379195B/en
Legal status: Active (granted)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a video generation method and apparatus, an electronic device, and a readable storage medium, belonging to the field of video processing. The video generation method includes the following steps: displaying a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of a target scene; receiving a first input from a user, where the first input is used to set video mirror (camera-movement) parameters; and, in response to the first input, generating a target video corresponding to the target scene from the three-dimensional scene model.

Description

Video generation method, device, electronic equipment and readable storage medium
Technical Field
The application belongs to the field of video processing, and particularly relates to a video generation method and apparatus, an electronic device, and a readable storage medium.
Background
In the prior art, if a user needs to obtain a video corresponding to a scene, the user can do so by traveling to the scene and then shooting it.
However, in the above method, if the region where the user is located is far from the region where the scene to be shot is located, this approach cannot be carried out, or requires the user to travel to that region by other means, so that the process by which the electronic device generates the video is complex and difficult.
Disclosure of Invention
The embodiments of the application aim to provide a video generation method and apparatus, an electronic device, and a readable storage medium, which can solve the problems of complex processing and high difficulty when an electronic device generates a video.
In a first aspect, an embodiment of the present application provides a video generation method, including: displaying a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of a target scene; receiving a first input from a user, where the first input is used to set video mirror (camera-movement) parameters; and, in response to the first input, generating a target video corresponding to the target scene from the three-dimensional scene model.
In a second aspect, an embodiment of the present application provides a video generation apparatus, including a display module, a receiving module, and a processing module. The display module is used to display a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of a target scene. The receiving module is used to receive a first input from a user, the first input being used to set video mirror parameters. The processing module is used to generate, in response to the first input received by the receiving module, a target video corresponding to the target scene from the three-dimensional scene model.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the electronic device displays a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of a target scene; receives a first input from a user, where the first input is used to set video mirror parameters; and, in response to the first input, generates a target video corresponding to the target scene from the three-dimensional scene model. When the electronic device displays the three-dimensional scene model, the user can configure different video mirror parameters for it, and multiple images of the target scene under the different video mirror parameters are obtained accordingly, so that the electronic device can generate a target video of the target scene from multiple images of the same scene under different video mirror parameters. Therefore, the user only needs to provide input on the three-dimensional scene model to configure the different video mirror parameters and trigger the electronic device to generate, with the user-configured parameters, the video the user requires, which simplifies the processing procedure of the electronic device when generating a video.
Drawings
FIG. 1 is a flowchart of a video generation method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 5 is a fourth schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 6 is a fifth schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 7 is a sixth schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 8 is a seventh schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 9 is an eighth schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 10 is a ninth schematic diagram of a video generation method according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
FIG. 12 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 13 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. In addition, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In the prior art, if a user needs to shoot a certain object as a video, the object can be shot through the video recording function of a mobile phone; the user can move the phone's shooting position during recording to obtain footage from different shooting angles, that is, shoot the object with different camera moves (mirror operations). Alternatively, the user may edit a plurality of images to generate a video similar to an electronic album. However, if the user needs to integrate the contents of multiple images in the electronic device to generate a corresponding three-dimensional scene video, a professional video editor is usually required to edit the images as needed, which is a complex process; and if the user wants to apply camera moves to multiple images, no related technology exists to realize this.
Currently, however, electronic devices can extract depth information from an image and reconstruct a three-dimensional scene from two-dimensional images, i.e., reproduce the scene captured in the two-dimensional images in three-dimensional form. Three-dimensional reconstruction refers to the process of recovering three-dimensional information from single-view or multi-view images. Because single-view information is incomplete, single-view reconstruction requires expert prior knowledge, whereas multi-view three-dimensional reconstruction (similar to a person's binocular vision) is relatively easy: first the camera is calibrated, i.e., the relation between the camera's image coordinate system and the world coordinate system is computed, and then the three-dimensional information is reconstructed using the information in multiple two-dimensional images. Moreover, if the electronic device needs to generate a video from images, dynamic imagery can be captured as a series of still photos; according to the principle of persistence of vision, when successive image changes exceed 24 frames per second, the human eye perceives a smooth, continuous visual effect.
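As a concrete illustration of the calibration step mentioned above, the following is a minimal sketch using OpenCV and a chessboard calibration target; the pattern size and image file names are illustrative assumptions, not part of the patent:

```python
# Hedged sketch: classic chessboard calibration, relating the camera's
# image coordinate system to the world coordinate system.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner-corner count of the chessboard (assumed)

# World coordinates of the chessboard corners, on the Z = 0 plane
grid = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["view1.jpg", "view2.jpg", "view3.jpg"]:  # multi-view inputs (assumed)
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(grid)
        img_pts.append(corners)

# Intrinsic matrix K plus per-view extrinsics (rvecs, tvecs)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```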
The embodiment of the application provides a video generation method which, using the above techniques, can reconstruct and display a three-dimensional scene model corresponding to at least one image, display a virtual lens identifier in a preset orientation relative to the displayed model, and set different mirror (camera-movement) parameters for the virtual lens identifier, so that a video is obtained from the three-dimensional scene model under the mirror parameters set for the virtual lens identifier. For example, the user can select a preset mirror template, triggering the electronic device to configure a virtual camera with the mirror parameters corresponding to that template; the electronic device can then automatically control the virtual camera and perform the camera move according to the configured parameters to obtain multiple three-dimensional model images. Alternatively, the user can select a mirror mode, set its mirror parameters, and then manually control the virtual camera, which performs the camera move with the user-set parameters to obtain multiple three-dimensional model images. After obtaining the multiple three-dimensional model images, the electronic device can generate transition images between every two adjacent images according to the images and their parameter information, and then encode the images and the transition images using a video generation technique to obtain the video required by the user.
In this process, after the user generates the three-dimensional scene model from the selected images, different mirror parameters can be set for the virtual camera according to the user's needs, so that the electronic device, by controlling the virtual camera with those parameters, can obtain multiple images from the three-dimensional scene model and automatically generate videos with camera-move effects at different viewing angles, such as a video with a simulated aerial-photography effect. No complicated image operations by the electronic device are needed, and the user does not need special image-processing skills, which reduces the processing complexity and simplifies the processing procedure of the electronic device when generating a video from images.
The video generation method provided by the embodiments of the application is described in detail below through specific embodiments and application scenarios thereof, with reference to the accompanying drawings.
An embodiment of the application provides a video generation method, and fig. 1 shows a flowchart of the video generation method provided by the embodiment of the application, and the method can be applied to electronic equipment. As shown in fig. 1, the video generating method provided by the embodiment of the present application may include the following steps 201 to 203.
Step 201, the electronic device displays the three-dimensional scene model.
In the embodiment of the application, the three-dimensional scene model is a three-dimensional model of the target scene.
Optionally, in the embodiment of the present application, before the step 201, the video generating method provided in the embodiment of the present application further includes the following steps 401 and 402.
Step 401, the electronic device receives a third input from a user.
In the embodiment of the application, the third input is used for selecting at least two images corresponding to the target scene.
Alternatively, in the embodiment of the present application, the third input may be a user selection input of at least two images.
Optionally, in the embodiment of the present application, the third input may be a click input, a long-press input, a slide input, or the like, by the user; it may be determined according to the actual use situation, and the embodiment of the present application is not limited in any way herein.
Optionally, in the embodiment of the present application, the at least two images may be images taken by a user in a target scene; or, the at least two images may be images corresponding to the target scene selected by the user from the images stored in the electronic device; alternatively, the at least two images may be images corresponding to the target scene acquired by the user from a channel such as the internet.
Optionally, in the embodiment of the present application, when the electronic device displays a plurality of images, a user may perform selection input on at least two images displayed in the electronic device, so that the electronic device obtains at least two images corresponding to a target scene that the user needs to process; or under the condition that the electronic equipment displays a plurality of images, the electronic equipment can automatically acquire at least two images with earlier storage time according to the sequence of the storage time of the images, so that the electronic equipment is triggered to acquire at least two images which are required to be processed by a user and correspond to a target scene.
Optionally, in the embodiment of the present application, at least two images are images with the same scene and different angles in the electronic device.
The scene may be a shooting scene or an image background.
Illustratively, as shown in fig. 2, the electronic device displays an interface 10, where the interface 10 includes a plurality of images, such as image 1 to image 12, and a user may perform click input on 3 images, such as image 1, image 2, and image 3, in the interface 10, so as to trigger the electronic device to acquire the images of image 1, image 2, and image 3 required by the user.
Step 402, the electronic device responds to the third input and generates a three-dimensional scene model according to the image information of at least two images.
In the embodiment of the application, after receiving the third input of the user, the electronic device can acquire the at least two images corresponding to the target scene required by the user, and can then perform three-dimensional reconstruction on the at least two images according to their depth information, so as to generate a three-dimensional scene model corresponding to the at least two images.
Optionally, in the embodiment of the present application, the electronic device may input the at least two images into a convolutional neural network, which computes image information of the at least two images to obtain their depth information, so that the three-dimensional scene model is generated from the depth information of the at least two images.
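The patent does not name a specific network; as a hedged sketch, a publicly available monocular-depth CNN such as MiDaS could supply the per-image depth maps (the model choice and input file name are assumptions):

```python
# Hedged sketch: per-image depth estimation with a pretrained CNN (MiDaS).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

img = cv2.cvtColor(cv2.imread("image1.jpg"), cv2.COLOR_BGR2RGB)  # assumed input
with torch.no_grad():
    pred = midas(transform(img))          # relative inverse-depth map
depth = torch.nn.functional.interpolate(  # resize back to the input size
    pred.unsqueeze(1), size=img.shape[:2],
    mode="bicubic", align_corners=False).squeeze()
# A `depth` map per input image would feed the three-dimensional reconstruction.
```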
It will be appreciated that the convolutional neural network described above may be embodied with reference to the related art, and embodiments of the present application are not limited thereto.
Optionally, in the embodiment of the present application, after generating the three-dimensional scene model, the electronic device initializes the spatial position and the perspective information of the three-dimensional scene model to obtain the three-dimensional scene model displayed in the initialized spatial position and perspective.
The initialization refers to setting the spatial position and the perspective information of the three-dimensional scene model to default values.
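A minimal sketch of what such an initialization could look like in code; the concrete default values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ScenePose:
    position: tuple = (0.0, 0.0, 0.0)  # default spatial position (assumed)
    rotation: tuple = (0.0, 0.0, 0.0)  # default yaw/pitch/roll in degrees (assumed)
    fov_deg: float = 60.0              # default perspective field of view (assumed)

pose = ScenePose()  # "initialization": the model is first shown with these defaults
```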
Illustratively, in conjunction with fig. 2, as shown in fig. 3, the electronic device generates a three-dimensional scene model according to the depth information of the image 1, the image 2, and the image 3 selected by the user, and then the electronic device may initialize the spatial position and the perspective information of the three-dimensional scene model and display the three-dimensional scene model displayed in the initialized spatial position and perspective in the interface 11.
Optionally, in the embodiment of the present application, after the electronic device generates the three-dimensional scene model, the electronic device may perform input on the three-dimensional scene model, for example, drag input, movement input, or sliding input on the three-dimensional scene model, so as to trigger the electronic device to change the spatial position and the viewing angle of the three-dimensional scene model, so as to display the three-dimensional scene model displayed in other spatial positions and viewing angles.
For example, after the electronic device generates the three-dimensional scene model, the user may drag the three-dimensional scene model to move upward to trigger the electronic device to display the three-dimensional scene model with a view angle and a spatial position of the bottom view; alternatively, the user may drag the three-dimensional scene model to move downward to trigger the electronic device to display the three-dimensional scene model in a top-down view and spatial position.
Illustratively, in connection with FIG. 3, as shown in FIG. 4, a user may slide the three-dimensional scene model to the right in interface 11 to trigger the electronic device to alter the spatial position and perspective of the three-dimensional scene model.
In the embodiment of the application, the electronic equipment can generate the three-dimensional scene model corresponding to the target scene according to at least two images selected by the user, so that the electronic equipment can obtain the video of the three-dimensional scene model corresponding to the image according to the generated three-dimensional scene model, and the difficulty in subsequent video processing of the electronic equipment is reduced.
Optionally, in the embodiment of the present application, before the step 201, the video generating method provided in the embodiment of the present application further includes the following steps 501 to 503.
Step 501, the electronic device displays at least one scene identifier.
In the embodiment of the application, the electronic equipment can display a plurality of scene identifiers, and one scene identifier is used for indicating one scene, so that a user can select the identifier corresponding to the required scene from the plurality of scene identifiers.
Step 502, the electronic device receives a fourth input of a target scene identifier in at least one scene identifier by a user.
In the embodiment of the application, the target scene identifier is used for indicating the target scene.
Optionally, in the embodiment of the present application, after the electronic device displays the plurality of scene identifiers, the user may perform selection input on any one of the plurality of scene identifiers, and then the electronic device receives a fourth input of the user on the target scene identifier in the at least one scene identifier, so as to trigger the electronic device to obtain a three-dimensional scene model corresponding to the target scene according to the target scene identifier selected by the user; or triggering the electronic equipment to acquire at least two images according to the target scene identification selected by the user, so that the electronic equipment can generate a three-dimensional scene model according to the image information of the at least two images.
Optionally, the fourth input may be a click input, a long press input, or the like of the user on a target scene identifier in the at least one scene identifier; the specific determination may be determined according to the actual use situation, and the embodiment of the present application is not limited in any way herein.
In step 503, the electronic device obtains a three-dimensional scene model of the target scene in response to the fourth input.
In the embodiment of the application, after receiving the fourth input of the user, the electronic device can determine the target scene corresponding to the target scene identifier according to the target scene identifier, thereby acquiring the three-dimensional scene model of the target scene.
Illustratively, the electronic device displays a variety of scene identifications, such as: park identification, school identification, mall identification, etc., a user may input the park identification, thereby triggering the electronic device to obtain a three-dimensional scene model of the park scene corresponding to the park identification.
In the embodiment of the application, the electronic equipment can acquire the three-dimensional scene model corresponding to the scene required by the user according to any one of a plurality of identifiers by the user, so that the electronic equipment can acquire the video corresponding to the three-dimensional scene model according to the generated three-dimensional scene model, and the difficulty in subsequent video processing of the electronic equipment is reduced.
Step 202, the electronic device receives a first input from a user.
In the embodiment of the application, the first input is used for setting the parameters of the video mirror.
Optionally, in the embodiment of the present application, after receiving the first input from the user, the electronic device displays the video mirror parameter setting interface, so that the user may adjust the video mirror parameter on the video mirror parameter setting interface.
Optionally, in the embodiment of the present application, the electronic device may display a plurality of parameter setting identifiers on the video mirror parameter interface, and the user may input any one of the parameter setting identifiers, so as to trigger the electronic device to adjust the video mirror parameter according to the input of the user to the parameter setting identifier.
The video mirror parameters are the parameters corresponding to the camera-movement (mirror) techniques used in video shooting.
In step 203, the electronic device responds to the first input, and generates a target video corresponding to the target scene according to the three-dimensional scene model.
The embodiment of the application provides a video generation method in which the electronic device displays a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of a target scene; receives a first input from a user, where the first input is used to set video mirror parameters; and, in response to the first input, generates a target video corresponding to the target scene from the three-dimensional scene model. When the electronic device displays the three-dimensional scene model, the user can configure different video mirror parameters for it, and multiple images of the target scene under the different video mirror parameters are obtained accordingly, so that the electronic device can generate a target video of the target scene from multiple images of the same scene under different video mirror parameters. Therefore, the user only needs to provide input on the three-dimensional scene model to configure the different video mirror parameters and trigger the electronic device to generate, with the user-configured parameters, the video the user requires, which simplifies the processing procedure of the electronic device when generating a video.
Optionally, in the embodiment of the present application, the first input includes a first sub-input and a second sub-input, and before the step 202, the video generating method provided in the embodiment of the present application further includes the following step 301; the above steps 202 and 203 may be specifically implemented by steps 302 to 306 described below.
Step 301, the electronic device displays at least one mirror mode identifier.
In the embodiment of the application, the electronic equipment can display at least one mirror mode identifier before receiving the first input for setting the video mirror parameters, so that a user can select the at least one mirror mode identifier.
Step 302, the electronic device receives a first sub-input of a target mirror mode identifier in at least one mirror mode identifier from a user.
In the embodiment of the application, after the electronic equipment displays at least one mirror mode identifier, a user can input a target mirror mode identifier in the at least one mirror mode identifier, so that the electronic equipment can receive a first sub-input of the target mirror mode identifier in the at least one mirror mode identifier by the user, and determine a mirror mode corresponding to the mirror mode identifier required by the user according to the first sub-input of the user.
Optionally, in an embodiment of the present application, any one of the at least one lens mode identifier corresponds to one lens mode.
Optionally, in an embodiment of the present application, the first sub-input may be a selection input of a target mirror mode identifier by a user.
Optionally, in the embodiment of the present application, the first sub-input may be a click input, a long press input, or the like; the specific determination may be determined according to the actual use situation, and the embodiment of the present application is not limited in any way herein.
Optionally, in an embodiment of the present application, the target mirror mode is identified as the mirror mode selected through the first sub-input.
Optionally, in the embodiment of the present application, the first sub-input is used to determine a target mirror mode identifier from multiple mirror mode identifiers, and trigger the electronic device to display a mirror parameter setting interface corresponding to the target mirror mode identifier, so that a user may set a mirror parameter on the mirror parameter setting interface.
Optionally, in the embodiment of the present application, the user may input a target mirror mode identifier among the identifiers of at least two mirror modes, so as to trigger the electronic device to display the mirror parameter setting interface corresponding to the target mirror mode identifier, so that the user may set the mirror parameters corresponding to that identifier.
Optionally, in the embodiment of the present application, after the virtual camera is displayed, the user may provide input on the virtual camera to trigger the electronic device to display the identifiers of at least two mirror modes, and then input the target mirror mode identifier among them as required, so that the electronic device displays the mirror parameter setting interface of the target mirror mode identifier selected by the user.
Optionally, in the embodiment of the present application, the mirror parameters may include a mirror light-shadow parameter, a mirror position parameter, and the like, where the light-shadow parameter is used to set the light-and-shadow values during the camera move, and the position parameter is used to set the shooting position of the virtual camera and its shooting track during the camera move.
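A hedged sketch of how these two parameter groups could be represented; the type and field names are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MirrorParams:
    # Light-shadow parameter: light-and-shadow values along the camera move
    shadow_values: List[float]
    # Position parameter: virtual-camera positions forming the shooting track
    track: List[Tuple[float, float, float]]
```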
Step 303, the electronic device responds to the first sub-input and displays the virtual lens identifier corresponding to the first mirror mode.
In the embodiment of the application, the target mirror mode identifier is used for indicating the first mirror mode.
Optionally, in the embodiment of the present application, the electronic device may display a virtual lens identifier corresponding to the first lens mode at a preset position.
Optionally, in the embodiment of the present application, the preset azimuth may be a default azimuth of the electronic device, or a preset azimuth of the user, or an azimuth edited last time by the user, and specifically may be determined according to an actual situation, which is not limited in the embodiment of the present application.
Optionally, in the embodiment of the present application, after the electronic device displays the virtual lens identifier in the preset azimuth of the three-dimensional scene model, the electronic device may obtain multiple images according to the three-dimensional scene model by controlling the virtual lens identifier to have a view angle corresponding to the preset azimuth.
Optionally, in the embodiment of the present application, after the electronic device displays the virtual lens identifier in the preset azimuth of the three-dimensional scene model, the user may input the virtual lens identifier, for example, drag input or move input, so as to trigger the electronic device to change the display position of the virtual lens identifier, that is, change the display azimuth of the virtual lens identifier relative to the three-dimensional scene model, so that the electronic device may obtain multiple images according to the three-dimensional scene model by controlling the virtual lens identifier to a view angle corresponding to the changed display azimuth.
Illustratively, in connection with fig. 3, as shown in fig. 5, the interface 11 of the electronic device includes a first control identifier 12, "insert lens"; the user inputs the first control identifier 12 to trigger the electronic device to display the virtual lens identifier 13 in a preset orientation of the three-dimensional scene model in the interface 11.
Step 304, the electronic device receives a second sub-input of the virtual lens identifier from the user.
In the embodiment of the application, after the electronic equipment displays the virtual lens identifier, the user can input the virtual lens identifier, and then the electronic equipment can receive the second sub-input of the virtual lens identifier by the user, so that the electronic equipment controls the virtual lens identifier according to the second sub-input of the virtual lens identifier by the user, and a plurality of images are obtained according to the three-dimensional scene model.
Illustratively, in connection with fig. 5, as shown in (a) of fig. 6, the electronic device displays, in the interface 11, identifiers of multiple mirror modes, such as the push mirror 14, the pull mirror 15, and the pan mirror 16. The user can perform a selection input on the push mirror 14 in the interface 11 to trigger the electronic device to display the mirror parameter setting interface 17 corresponding to the push mirror 14. As shown in (b) of fig. 6, the phone displays the mirror parameter setting interface 17, in which a mirror light-shadow control identifier 18 and a mirror position control identifier 19 are displayed. The user can input the mirror position control identifier 19 to trigger the phone to control the virtual lens identifier 13 to move slowly and continuously forward toward the three-dimensional scene model, so that the proportion of the three-dimensional scene model displayed on the screen gradually increases; the user can then click the save control identifier 20 to trigger the electronic device to save the current image.
As shown in fig. 7, the user may perform a selection input on the pan mirror 16 in the interface 11 to trigger the electronic device to display the mirror parameter setting interface 21 corresponding to the pan mirror 16, in which a mirror light-shadow control identifier 22 and a mirror position control identifier 23 are displayed. The user can input the mirror position control identifier 23 to trigger the electronic device to rotate the view around the central axis of the three-dimensional scene model, so that the displayed viewing angle of the three-dimensional scene model changes continuously; the amplitude of the rotation can also be controlled by the user's input. The user can then click the save control identifier 20 to trigger the electronic device to save the current image.
Optionally, in the embodiment of the present application, the electronic device, in response to the user's second sub-input, controls the virtual camera to perform the camera move with the set mirror parameters, photographs the three-dimensional scene model M times under different mirror parameters, and stores the captured images to obtain M images.
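A minimal sketch of this M-shot loop; `render` stands in for the virtual camera, which the patent does not tie to any concrete API:

```python
def shoot_sequence(render, track, shadow_values):
    """Shoot the scene M times under varying mirror parameters.

    `render(position, brightness)` is a hypothetical virtual-camera call;
    the patent does not name a concrete rendering function.
    """
    images = []
    for pos, brightness in zip(track, shadow_values):  # len(track) == M
        images.append(render(pos, brightness))
    return images  # M images, one per mirror-parameter setting
```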
In step 305, the electronic device responds to the second sub-input, updates and displays the three-dimensional scene model according to the first mirror mode, and generates at least two three-dimensional model images.
In the embodiment of the application, after receiving the second sub-input of the user, the electronic equipment performs mirror operation according to the first mirror operation mode by using the mirror operation parameters corresponding to the first mirror operation mode, updates and displays the three-dimensional scene model, and generates at least two three-dimensional model images.
Optionally, in the embodiment of the present application, the second sub-input is used for generating at least two three-dimensional model images according to the first mirror mode and the mirror parameters corresponding to the first mirror mode and the three-dimensional scene model.
Optionally, in the embodiment of the present application, when the electronic device obtains a plurality of images according to the three-dimensional scene model with different mirror parameters, if the user selects to save the generated images, the image and the parameter information corresponding to the image are saved at the same time.
Step 306, the electronic device generates a target video corresponding to the target scene according to at least two three-dimensional model images.
Wherein the at least two three-dimensional model images are associated with the updated displayed three-dimensional scene model.
In the embodiment of the application, after at least two three-dimensional model images are obtained, the electronic equipment can generate the target video corresponding to the target scene according to the image information of the at least two three-dimensional model images.
According to the embodiment of the application, the electronic device can display the virtual lens identifier in a preset orientation of the three-dimensional scene model according to the user's input, and can adjust the display position of the virtual lens identifier, so that the electronic device can obtain multiple images from the three-dimensional scene model based on the identifier's orientation relative to the model. Therefore, the user can set the shooting angle and shooting direction independently, which reduces the difficulty of the user's operation and improves human-computer interaction.
Alternatively, in the embodiment of the present application, the "generating the target video corresponding to the target scene according to the three-dimensional scene model" in the above step 203 may be specifically implemented by the following steps 203a and 203 b.
Step 203a, the electronic device generates at least two three-dimensional model images according to the three-dimensional scene model.
Optionally, in the embodiment of the present application, the at least two three-dimensional model images are images with the same scene and different mirror parameters.
Step 203b, the electronic device generates a target video corresponding to the target scene according to at least two three-dimensional model images.
Optionally, in the embodiment of the present application, the electronic device obtains a transition image between every two adjacent images in at least two three-dimensional model images according to at least two three-dimensional model images and parameter information of at least two three-dimensional model images, so as to obtain L transition images.
Optionally, in an embodiment of the present application, the parameter information includes perspective coordinate information of at least two three-dimensional model images and mirror mode parameter information of at least two three-dimensional model images.
In the embodiment of the application, the electronic equipment acquires the transition image between every two adjacent images according to the two adjacent images in at least two three-dimensional model images and the parameter information of the two adjacent images so as to acquire L transition images.
For example, among the M images, the electronic device may obtain the transition images between the K-th image and the (K+1)-th image according to the K-th image, the (K+1)-th image, and the parameter information of these two images; then obtain the transition images between the (K+1)-th image and the (K+2)-th image according to those two images and their parameter information; and so on, until the transition images between the (M-1)-th image and the M-th image are obtained according to those two images and their parameter information, so that the set of transition images between every two adjacent images yields the L transition images.
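A minimal sketch of this adjacent-pair loop; `make_transition` is a hypothetical interpolator, since the patent does not specify the interpolation technique:

```python
def build_transitions(frames, params, make_transition):
    # frames: the M three-dimensional model images; params: their parameter
    # information (view coordinates, mirror mode). `make_transition` returns
    # the in-between images for one adjacent pair (hypothetical helper).
    transitions = []
    for k in range(len(frames) - 1):
        transitions.extend(
            make_transition(frames[k], frames[k + 1], params[k], params[k + 1]))
    return transitions  # the L transition images, in playback order
```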
Alternatively, in the embodiment of the present application, the image information of the at least two three-dimensional model images may include image feature information, image color value information, image pixel information and the like in the at least two three-dimensional model images.
Optionally, in an embodiment of the present application, the L transition images are an image set of transition images between every two adjacent images in at least two three-dimensional model images.
For example, in the case where M is 4, the electronic device obtains a transition images between the first image and the second image, b transition images between the second image and the third image, and c transition images between the third image and the fourth image according to the M images and their parameter information, so that the electronic device may obtain the L transition images from the a, b, and c transition images, where L = a + b + c.
Optionally, in the embodiment of the present application, after obtaining the L transition images between every two adjacent images of the at least two three-dimensional model images, the electronic device may encode the at least two three-dimensional model images together with the L transition images to synthesize the target video.
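As a hedged sketch of this encoding step, OpenCV's VideoWriter could interleave the keyframes and transitions into a file; the codec and frame rate are assumptions, with fps chosen above the 24-frame persistence-of-vision threshold noted earlier:

```python
import cv2

def encode_video(keyframes, transitions_between, path="target.mp4", fps=30):
    # Interleave each keyframe with the transition images that follow it;
    # transitions_between[k] holds the in-betweens for pair (k, k + 1).
    frames = []
    for k, img in enumerate(keyframes):
        frames.append(img)
        if k < len(transitions_between):
            frames.extend(transitions_between[k])
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(f)  # frames assumed to be uint8 BGR arrays
    writer.release()
```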
Optionally, in the embodiment of the present application, after obtaining at least two three-dimensional model images according to the three-dimensional scene model, the electronic device may generate, according to the at least two three-dimensional model images, a target video corresponding to the target scene.
For example, the electronic device may generate, from at least two three-dimensional model images, a target video having the same scene, different viewing angles, and different image parameters as the target scene.
In the embodiment of the application, the electronic equipment can generate the target video according to the three-dimensional scene model after obtaining at least two three-dimensional model images, so that the efficiency of the electronic equipment for processing the video is improved.
Alternatively, in the embodiment of the present application, the above step 203a may be specifically implemented by the following steps 203a1 and 203a2.
Step 203a1, the electronic device receives a second input of the user to the three-dimensional scene model.
In the embodiment of the application, the second input is used for performing shadow rendering on the three-dimensional scene model.
Optionally, in the embodiment of the present application, the user may provide input on the three-dimensional scene model to trigger the electronic device to set the mirror light-shadow parameters of the three-dimensional scene model according to the user's input.
Optionally, in an embodiment of the present application, the mirror light and shadow parameter is used to set a light and shadow value of the three-dimensional scene model.
Step 203a2, the electronic device generates at least two three-dimensional model images with a shadow rendering effect in response to the second input.
In the embodiment of the application, after receiving the second input of the user, the electronic device adjusts the brightness of the three-dimensional scene model to generate at least two three-dimensional model images with the shadow rendering effect corresponding to the target scene.
As shown in fig. 8, the electronic device displays the mirror parameter setting interface 17, in which the mirror light-shadow control identifier 18 and the mirror position control identifier 19 are displayed. If the user inputs the mirror light-shadow control identifier 18, the electronic device is triggered to control the virtual lens identifier 13 to adjust the brightness values during shooting based on the brightness information at the starting position of the push-in, for example by applying an interpolated gradual change to the brightness values, so as to form a change of light and shadow during shooting; the user can also adjust the shooting brightness information manually, thereby obtaining an effect that focuses on and highlights the photographed three-dimensional scene model.
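A minimal sketch of such an interpolated brightness ramp over the captured frames; the gain range is an illustrative assumption:

```python
import numpy as np

def brightness_ramp(frames, start_gain=1.0, end_gain=1.4):
    # Interpolate a brightness gain across the push-in to create the
    # gradual light-and-shadow change described above.
    gains = np.linspace(start_gain, end_gain, num=len(frames))
    return [np.clip(f.astype(np.float32) * g, 0, 255).astype(np.uint8)
            for f, g in zip(frames, gains)]
```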
In the embodiment of the application, when a user generates a plurality of images according to the three-dimensional scene model, the shadow rendering effect can be further increased, so that the interest of generating videos by the electronic equipment is improved.
Optionally, in the embodiment of the present application, the first input is an input for selecting a preset mirror parameter corresponding to the second mirror mode; the above step 203 may be specifically implemented by the following steps 203c and 203d.
Step 203c, the electronic device generates at least two three-dimensional model images according to the three-dimensional scene model and preset mirror parameters.
Optionally, in the embodiment of the present application, a user may select a mirror template required to be used from the identifiers of at least two mirror templates by inputting the mirror template, so as to trigger the electronic device to generate at least two three-dimensional model images according to the three-dimensional scene model according to preset mirror parameters corresponding to the mirror template selected by the user.
Optionally, in the embodiment of the present application, one of the lens-transporting template identifiers corresponds to one of the lens-transporting templates, one of the lens-transporting templates corresponds to one of the lens-transporting modes and a lens-transporting track under the lens-transporting mode, one of the lens-transporting tracks corresponds to one of the lens-transporting parameters, and one of the lens-transporting parameters may correspond to one of the image generating effects.
Optionally, in the embodiment of the present application, multiple mirror templates, for example a straight mirror, a curved mirror, a rotating mirror, a lifting mirror, a shaking mirror, a follow mirror, a sweeping mirror, a pushing mirror, and the like, are stored in the electronic device in advance, and the user can select among them as needed, so that the electronic device controls the virtual camera with the mirror parameters corresponding to the selected template to generate at least two three-dimensional model images from the three-dimensional scene model under different mirror parameters.
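A hedged sketch of how preset templates could map to shooting tracks; the template names and trajectories are illustrative assumptions, not the patent's definitions:

```python
import math

# Each template yields n virtual-camera positions along its track.
MIRROR_TEMPLATES = {
    # straight mirror: advance along a line toward the model
    "straight": lambda n: [(0.0, 0.0, 5.0 - 4.0 * i / max(1, n - 1))
                           for i in range(n)],
    # rotating mirror: orbit around the model's central axis
    "rotate": lambda n: [(3.0 * math.cos(2 * math.pi * i / n), 0.0,
                          3.0 * math.sin(2 * math.pi * i / n))
                         for i in range(n)],
}

track = MIRROR_TEMPLATES["rotate"](60)  # 60 positions along one orbit
```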
Illustratively, in conjunction with fig. 5, as shown in fig. 9, the electronic device displays a plurality of mirror template identifiers in the interface 11, such as a straight mirror identifier 24, a curved mirror identifier 25, and a rotating mirror identifier 26. The user may input the curved mirror identifier 25 to trigger the electronic device to determine the mirror parameters corresponding to the selected curved mirror and generate at least two three-dimensional model images from the three-dimensional scene model. If none of the templates corresponding to these three identifiers meets the user's needs, the user can slide the template identifiers to trigger the electronic device to display other mirror templates; the user can then click the save control identifier 20 to trigger the electronic device to save the current image.
Step 203d, the electronic device generates a target video corresponding to the target scene according to at least two three-dimensional model images.
In the embodiment of the application, the electronic equipment can generate the target video corresponding to the target scene according to at least two three-dimensional model images after acquiring the at least two three-dimensional model images.
Illustratively, in connection with fig. 9, as shown in fig. 10, by clicking on the save control identifier 20, the user triggers the electronic device to save at least two images and generate a transition image between each adjacent two of the at least two images, and then the electronic device may encode the at least two images and the transition image to generate the target video.
In the embodiment of the application, the electronic equipment can provide a plurality of mirror transporting templates for the user, and the user can select the mirror transporting templates, so that the electronic equipment is triggered to generate at least two three-dimensional model images according to the three-dimensional scene model and the preset mirror transporting parameters, and a target video is generated according to the at least two three-dimensional model images, the difficulty of the electronic equipment in generating the video is reduced, and the efficiency of the electronic equipment in generating the video is improved.
Fig. 11 shows a schematic diagram of a possible configuration of a video generating apparatus according to an embodiment of the present application. As shown in fig. 11, the video generating apparatus 60 may include: a display module 61, a receiving module 62 and a processing module 63.
The display module 61 is configured to display a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of the target scene. The receiving module 62 is configured to receive a first input from a user, where the first input is used to set a video mirror parameter. The processing module 63 generates a target video corresponding to the target scene from the three-dimensional scene model in response to the first input received by the receiving module 62.
The embodiment of the application provides a video generating apparatus. When the electronic device displays the three-dimensional scene model, the user can configure different video mirror parameters for it, and multiple images of the target scene under the different video mirror parameters are obtained accordingly, so that the electronic device can generate a target video of the target scene from multiple images of the same scene under different video mirror parameters. Therefore, the user only needs to provide input on the three-dimensional scene model to configure the different video mirror parameters and trigger the electronic device to generate, with the user-configured parameters, the video the user requires, which simplifies the processing procedure of the electronic device when generating a video.
In one possible implementation manner, the processing module 63 is specifically configured to generate at least two three-dimensional model images according to the three-dimensional scene model; and generating a target video corresponding to the target scene according to at least two three-dimensional model images.
In a possible implementation, the first input includes a first sub-input and a second sub-input, and the display module 61 is further configured to display at least one mirror mode identification before the receiving module 62 receives the first input of the user. The receiving module 62 is specifically configured to receive a first sub-input of the target mirror mode identifier from the at least one mirror mode identifier by the user. The display module 61 is specifically configured to display, in response to the first sub-input, a virtual lens identifier corresponding to the first mirror mode, where the target mirror mode identifier is used to indicate the first mirror mode. The receiving module 62 is specifically configured to receive a second sub-input of the virtual lens identifier from the user. The processing module 63 is specifically configured to respond to the second sub-input, update and display the three-dimensional scene model according to the first mirror mode, and generate at least two three-dimensional model images; and generating a target video corresponding to the target scene according to at least two three-dimensional model images. Wherein the at least two three-dimensional model images are associated with the updated displayed three-dimensional scene model.
In one possible implementation manner, the first input is an input for selecting a preset mirror parameter corresponding to the second mirror mode; the processing module 63 is specifically configured to generate at least two three-dimensional model images according to the three-dimensional scene model and preset mirror parameters; and generating a target video corresponding to the target scene according to at least two three-dimensional model images.
In one possible implementation, the processing module 63 is specifically configured to receive a second input of the three-dimensional scene model from a user, where the second input is used to perform light shadow rendering on the three-dimensional scene model, and generate at least two three-dimensional model images with a light shadow rendering effect in response to the second input.
In a possible implementation, the receiving module 62 is further configured to receive, before the display module 61 displays the three-dimensional scene model, a third input of a user, where the third input is used to select at least two images corresponding to the target scene. The processing module 63 is further configured to generate a three-dimensional scene model according to the image information of the at least two images in response to the third input.
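One way to realize generating a three-dimensional scene model from the image information of at least two images (the classification also points at depth recovery from multiple images) is to estimate a depth map from the inputs and back-project it into a point cloud. The depth estimator is left abstract here, and the pinhole intrinsics fx, fy, cx, cy are assumed known from calibration; this is a sketch, not the patent's reconstruction method.

    import numpy as np

    def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                             cx: float, cy: float) -> np.ndarray:
        # Back-project an H x W depth map into an (H*W) x 3 point cloud using
        # the pinhole camera model -- a minimal stand-in for a scene model.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)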
In a possible implementation, the display module 61 is further configured to display at least one scene identifier before displaying the three-dimensional scene model. The receiving module 62 is further configured to receive a fourth input from the user of a target scene identifier of the at least one scene identifier, where the target scene identifier is used to indicate the target scene. The processing module 63 is further configured to obtain a three-dimensional scene model of the target scene in response to the fourth input.
The video generating device in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA); or it may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (Television, TV), teller machine, self-service machine, or the like. The embodiment of the present application is not specifically limited in this respect.
The video generating apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The video generating apparatus provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 10, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 12, the embodiment of the present application further provides an electronic device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or an instruction that can be executed on the processor 701. When executed by the processor 701, the program or instruction implements each step of the video generating method embodiment and achieves the same technical effects; to avoid repetition, the details are not repeated here.
The electronic device in the embodiment of the application includes both mobile electronic devices and non-mobile electronic devices.
Fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The display unit 106 is configured to display a three-dimensional scene model, where the three-dimensional scene model is a three-dimensional model of the target scene. A user input unit 107 for receiving a first input from a user, the first input being used to set the video mirror parameters. The processor 110 generates a target video corresponding to the target scene from the three-dimensional scene model in response to the first input.
The embodiment of the application provides an electronic device. When the electronic device displays a three-dimensional scene model, a user can configure different video mirror parameters for the three-dimensional scene model, and multiple images of the target scene under the different video mirror parameters are obtained accordingly, so the electronic device can generate a target video of the target scene from multiple images of the same scene taken under different video mirror parameters. The user therefore only needs to provide an input on the three-dimensional scene model to configure the video mirror parameters, which triggers the electronic device to generate the video the user requires according to those parameters, simplifying the process by which the electronic device generates the video.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to generate at least two three-dimensional model images according to the three-dimensional scene model; and generating a target video corresponding to the target scene according to at least two three-dimensional model images.
Optionally, in an embodiment of the present application, the first input includes a first sub-input and a second sub-input, and the display unit 106 is further configured to display at least one mirror mode identifier before receiving the first input of the user. The user input unit 107 is specifically configured to receive a first sub-input of a target mirror mode identifier from the at least one mirror mode identifier by a user. The display unit 106 is specifically configured to display, in response to the first sub-input, a virtual lens identifier corresponding to the first mirror mode, where the target mirror mode identifier is used to indicate the first mirror mode. The user input unit 107 is specifically configured to receive a second sub-input of the virtual lens identifier by the user. The processor 110 is specifically configured to respond to the second sub-input, update and display the three-dimensional scene model according to the first mirror mode, and generate at least two three-dimensional model images; and generating a target video corresponding to the target scene according to at least two three-dimensional model images. Wherein the at least two three-dimensional model images are associated with the updated displayed three-dimensional scene model.
Optionally, in the embodiment of the present application, the first input is an input for selecting a preset mirror parameter corresponding to the second mirror mode; the processor 110 is specifically configured to generate at least two three-dimensional model images according to the three-dimensional scene model and preset mirror parameters; and generating a target video corresponding to the target scene according to at least two three-dimensional model images.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to receive a second input of the three-dimensional scene model from a user, where the second input is used to perform light shadow rendering on the three-dimensional scene model, and generate at least two three-dimensional model images with a light shadow rendering effect in response to the second input.
Optionally, in the embodiment of the present application, the user input unit 107 is further configured to receive, before displaying the three-dimensional scene model, a third input of a user, where the third input is used to select at least two images corresponding to the target scene. The processor 110 is further configured to generate a three-dimensional scene model from image information of the at least two images in response to the third input.
Optionally, the display unit 106 is further configured to display at least one scene identifier before displaying the three-dimensional scene model. The user input unit 107 is further configured to receive a fourth input from the user of a target scene identifier of the at least one scene identifier, where the target scene identifier is used to indicate the target scene. The processor 110 is further configured to obtain a three-dimensional scene model of the target scene in response to the fourth input.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 109 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the embodiment of the video generating method, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the video generation method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the video generation method embodiment described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (8)

1. A method of video generation, the method comprising:
displaying a three-dimensional scene model, wherein the three-dimensional scene model is a three-dimensional model of a target scene;
receiving a first input of a user;
responding to the first input, and generating at least two three-dimensional model images according to the three-dimensional scene model based on video mirror parameters corresponding to a first mirror mode; the first input is used for setting the video mirror parameters; the at least two three-dimensional model images are images of the same scene with different video mirror parameters, and the video mirror parameters comprise at least one of the following: a mirror shadow parameter and a mirror position parameter;
generating a target video corresponding to the target scene based on the at least two three-dimensional model images;
the generating a target video corresponding to the target scene based on the at least two three-dimensional model images includes:
acquiring transition images between every two adjacent images in the at least two three-dimensional model images according to the at least two three-dimensional model images and the parameter information of the at least two three-dimensional model images to obtain L transition images;
and carrying out coding processing on the at least two three-dimensional model images and the L transition images to generate a target video corresponding to the target scene.
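Read literally, claim 1 interleaves L transition images between adjacent model frames and then encodes the whole sequence. The sketch below uses a simple cross-fade as the transition (the claim does not specify the interpolation method) and OpenCV's VideoWriter for the encoding step; both choices, and the even split of the L transition images across the gaps, are assumptions.

    import cv2

    def encode_target_video(frames, path="target_video.mp4",
                            fps=30, per_gap=5):
        # frames: list of H x W x 3 uint8 model images of identical size.
        # Insert per_gap cross-faded transition images between each adjacent
        # pair, then encode everything into one video file.
        sequence = []
        for a, b in zip(frames, frames[1:]):
            sequence.append(a)
            for i in range(1, per_gap + 1):
                t = i / (per_gap + 1)
                sequence.append(cv2.addWeighted(a, 1 - t, b, t, 0))
        sequence.append(frames[-1])

        h, w = sequence[0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, h))
        for f in sequence:
            writer.write(f)
        writer.release()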
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input, the method further comprising, prior to the receiving the first input by the user:
displaying at least one mirror mode identifier;
the receiving a first input of a user, responding to the first input, generating at least two three-dimensional model images according to the three-dimensional scene model based on video mirror parameters corresponding to a first mirror mode, and the method comprises the following steps:
receiving the first sub-input of a target mirror mode identifier in the at least one mirror mode identifier by a user;
responding to the first sub-input, displaying a virtual lens identifier corresponding to the first mirror mode, wherein the target mirror mode identifier is used for indicating the first mirror mode;
receiving the second sub-input of the virtual lens identifier by a user;
responding to the second sub-input, updating and displaying the three-dimensional scene model according to the first mirror mode, and generating at least two three-dimensional model images;
wherein the at least two three-dimensional model images are associated with the updated displayed three-dimensional scene model.
3. The method of claim 1, wherein generating at least two three-dimensional model images from the three-dimensional scene model comprises:
receiving a second input of a user to the three-dimensional scene model, wherein the second input is used for performing light and shadow rendering on the three-dimensional scene model;
and generating, in response to the second input, at least two three-dimensional model images with the light and shadow rendering effect.
4. The method of claim 1, wherein prior to the displaying the three-dimensional scene model, the method further comprises:
receiving a third input of a user, wherein the third input is used for selecting at least two images corresponding to a target scene;
and responding to the third input, and generating a three-dimensional scene model according to the image information of the at least two images.
5. The method of claim 1, wherein prior to the displaying the three-dimensional scene model, the method further comprises:
displaying at least one scene identifier;
receiving a fourth input of a user to a target scene identifier in the at least one scene identifier, wherein the target scene identifier is used for indicating a target scene;
and responding to the fourth input, and acquiring a three-dimensional scene model of the target scene.
6. A video generating apparatus, the apparatus comprising: a display module, a receiving module, and a processing module;
The display module is used for displaying a three-dimensional scene model, wherein the three-dimensional scene model is a three-dimensional model of a target scene;
the receiving module is used for receiving a first input of a user;
the processing module is used for responding to the first input received by the receiving module and generating at least two three-dimensional model images based on video mirror parameters corresponding to a first mirror mode; the first input is used for setting the video mirror parameters; the at least two three-dimensional model images are images of the same scene with different video mirror parameters, and the video mirror parameters comprise at least one of the following: a mirror shadow parameter and a mirror position parameter;
generating a target video corresponding to the target scene based on the at least two three-dimensional model images;
the processing module is specifically configured to obtain, according to the at least two three-dimensional model images and parameter information of the at least two three-dimensional model images, a transition image between each two adjacent images in the at least two three-dimensional model images, so as to obtain L transition images; and carrying out coding processing on the at least two three-dimensional model images and the L transition images to generate a target video corresponding to the target scene.
7. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video generation method of any one of claims 1 to 5.
8. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the video generation method according to any of claims 1 to 5.
CN202211034118.7A 2022-08-26 2022-08-26 Video generation method, device, electronic equipment and readable storage medium Active CN115379195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211034118.7A CN115379195B (en) 2022-08-26 2022-08-26 Video generation method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN115379195A CN115379195A (en) 2022-11-22
CN115379195B true CN115379195B (en) 2023-10-03

Family

ID=84068582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211034118.7A Active CN115379195B (en) 2022-08-26 2022-08-26 Video generation method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115379195B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115866224A (en) * 2022-11-25 2023-03-28 中国联合网络通信集团有限公司 Scene switching method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276840A (en) * 2019-06-21 2019-09-24 腾讯科技(深圳)有限公司 Control method, device, equipment and the storage medium of more virtual roles
CN112887584A (en) * 2019-11-29 2021-06-01 华为技术有限公司 Video shooting method and electronic equipment
CN112927271A (en) * 2021-03-31 2021-06-08 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN113384880A (en) * 2020-11-06 2021-09-14 腾讯科技(深圳)有限公司 Virtual scene display method and device, computer equipment and storage medium
CN113473207A (en) * 2021-07-02 2021-10-01 广州博冠信息科技有限公司 Live broadcast method and device, storage medium and electronic equipment
CN114245000A (en) * 2020-09-09 2022-03-25 北京小米移动软件有限公司 Shooting method and device, electronic equipment and storage medium
WO2022068537A1 (en) * 2020-09-29 2022-04-07 华为技术有限公司 Image processing method and related apparatus
CN114387376A (en) * 2022-01-18 2022-04-22 北京有竹居网络技术有限公司 Rendering method and device of three-dimensional scene, electronic equipment and readable storage medium
CN114546227A (en) * 2022-02-18 2022-05-27 北京达佳互联信息技术有限公司 Virtual lens control method, device, computer equipment and medium

Also Published As

Publication number Publication date
CN115379195A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
US20180276882A1 (en) Systems and methods for augmented reality art creation
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
US11836866B2 (en) Deforming real-world object using an external mesh
CN115379195B (en) Video generation method, device, electronic equipment and readable storage medium
CN110096144B (en) Interactive holographic projection method and system based on three-dimensional reconstruction
US11328436B2 (en) Using camera effect in the generation of custom synthetic data for use in training an artificial intelligence model to produce an image depth map
CN104601945A (en) Method for continuously displaying picture parts
CN114170472A (en) Image processing method, readable storage medium and computer terminal
CN112784081A (en) Image display method and device and electronic equipment
KR102337020B1 (en) Augmented reality video production system and method using 3d scan data
CN112511815B (en) Image or video generation method and device
CN112612463A (en) Graphical programming control method, system and device
CN106445282B (en) A kind of exchange method based on augmented reality
CN107666572A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112887601A (en) Shooting method and device and electronic equipment
CN113906731A (en) Video processing method and device
CN112887603B (en) Shooting preview method and device and electronic equipment
CN114025237A (en) Video generation method and device and electronic equipment
CN114222069B (en) Shooting method, shooting device and electronic equipment
CN104349065A (en) Picture shooting method, picture shooting device and intelligent terminal
CN114339029B (en) Shooting method and device and electronic equipment
CN114285988B (en) Display method, display device, electronic equipment and storage medium
CN112634339B (en) Commodity object information display method and device and electronic equipment
CN112672059B (en) Shooting method and shooting device
CN116012564B (en) Equipment and method for intelligent fusion of three-dimensional model and live-action photo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant