WO2023174009A1 - Virtual reality-based shooting processing method and apparatus, and electronic device - Google Patents

Virtual reality-based shooting processing method and apparatus, and electronic device

Info

Publication number
WO2023174009A1
WO2023174009A1 (PCT/CN2023/077240)
Authority
WO
WIPO (PCT)
Prior art keywords
information
shooting
camera model
virtual reality
shooting range
Prior art date
Application number
PCT/CN2023/077240
Other languages
English (en)
French (fr)
Inventor
赵文珲
吴雨涵
黄翔宇
陈憬夫
吴培培
李笑林
冀利悦
王璨
贺翔
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023174009A1

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • the present disclosure relates to the field of virtual reality technology, and in particular to a shooting processing method, device and electronic equipment based on virtual reality.
  • at present, based on virtual reality (VR) technology, users can watch video content such as virtual live broadcasts. For example, after putting on a VR device and entering a virtual concert venue, users can watch the performance as if they were at the scene.
  • the present disclosure provides a shooting processing method, device and electronic equipment based on virtual reality.
  • the main purpose is to address the problem that existing technology cannot meet users' shooting needs while they watch VR videos, which degrades the user's VR experience.
  • the present disclosure provides a shooting processing method based on virtual reality, including:
  • the camera model is displayed in the virtual reality space, and viewfinder picture information is displayed in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from the virtual reality scene information;
  • the captured image information is obtained by recording the real-time viewfinder picture information in the viewfinder frame area.
  • the present disclosure provides a virtual reality-based shooting processing device, including:
  • the display module is configured to display the camera model in the virtual reality space, and to display viewfinder picture information in the viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from the virtual reality scene information;
  • the recording module is configured to obtain captured image information, in response to an instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the virtual reality-based shooting processing method described in the first aspect is implemented.
  • the present disclosure provides an electronic device, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor.
  • when the processor executes the computer program, the virtual reality-based shooting processing method described in the first aspect is implemented.
  • the present disclosure provides a virtual reality-based shooting processing method, device and electronic equipment.
  • the present disclosure can provide users with shooting services while watching VR videos.
  • upon receiving an instruction to invoke the shooting function, the VR device can display the camera model in the virtual reality space and display viewfinder picture information in the viewfinder frame area preset on the camera model, where the viewfinder picture information is obtained from the virtual reality scene information.
  • upon receiving an instruction confirming shooting, the VR device can obtain captured image information by recording the real-time viewfinder picture information in the viewfinder frame area.
  • Figure 1 shows a schematic flowchart of a virtual reality-based shooting processing method provided by an embodiment of the present disclosure
  • Figure 2 shows a schematic flowchart of another virtual reality-based shooting processing method provided by an embodiment of the present disclosure
  • Figure 3 shows a schematic diagram of an example display effect of interactive component models in the form of floating balls provided by an embodiment of the present disclosure
  • Figure 4 shows a schematic diagram showing an example effect of a camera model provided by an embodiment of the present disclosure
  • Figure 5 shows a schematic diagram showing an example effect of taking photos and saving them according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic structural diagram of a virtual reality-based shooting processing device provided by an embodiment of the present disclosure.
  • This embodiment provides a virtual reality-based shooting processing method, as shown in Figure 1, which can be applied on the device side of VR equipment.
  • the method includes:
  • Step 101 In response to the call instruction of the shooting function, obtain the camera model.
  • the camera model may be a preset model related to the shooting equipment, such as a smartphone model, a selfie stick camera model, etc.
  • Step 102 Display the camera model in the virtual reality space, and display viewfinder picture information in the viewfinder frame area preset on the camera model.
  • the viewing screen information is obtained based on the virtual reality scene information.
  • displaying the viewfinder picture information in the preset viewfinder frame area of the camera model may specifically include: first obtaining the shooting range of the camera model; then selecting, in the virtual reality image, the scene information corresponding to the shooting range and rendering it to a texture; and finally placing the rendered texture map in the preset viewfinder frame area of the camera model.
  • the shooting range of the camera refers to the range of the virtual reality scene that the user wants to shoot when watching VR videos.
  • the parameters controlling the shooting range of the camera, such as the field of view (FOV), can be preset.
  • the shooting range can be adjusted according to the user's needs to capture the required photos or videos.
  • Scene information may include virtual scene content that can be seen within the shooting range.
  • Unity's Camera tool can be used to select, in the virtual reality image, the scene information corresponding to the shooting range of the camera model and render it to a texture (Render To Texture, RTT).
  • the rendered texture map is then placed in the preset viewfinder area of the camera model, thereby displaying the viewfinder information in the preset viewfinder area of the camera model.
  • the viewfinder frame area can be preset according to actual needs.
  • the purpose is to let users preview the effect of the selected scene-information map before confirming the shot.
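  • The render-to-texture viewfinder described above is not specific to Unity; any engine with off-screen render targets can express it. The following is a minimal, hypothetical sketch in TypeScript with Three.js (the patent's embodiment uses Unity's Camera tool; the scene, renderer, and cameraModel names here are illustrative assumptions, not from the source):

```ts
import * as THREE from 'three';

// Off-screen target that receives the viewfinder picture (render-to-texture).
const rtt = new THREE.WebGLRenderTarget(512, 512);

// A second camera whose FOV stands in for the camera model's shooting range.
const viewfinderCam = new THREE.PerspectiveCamera(45, 1, 0.1, 100);

// Stand-in for the loaded selfie-stick camera model.
const cameraModel = new THREE.Group();
// The preset viewfinder frame area: a small plane textured with the RTT output.
const viewfinderPlane = new THREE.Mesh(
  new THREE.PlaneGeometry(0.16, 0.16),
  new THREE.MeshBasicMaterial({ map: rtt.texture })
);
cameraModel.add(viewfinderPlane);
cameraModel.add(viewfinderCam); // the shooting range follows the model

function renderFrame(renderer: THREE.WebGLRenderer,
                     scene: THREE.Scene,
                     headsetCam: THREE.Camera): void {
  cameraModel.visible = false;      // keep the prop out of its own picture
  renderer.setRenderTarget(rtt);    // render the selected scene range to texture
  renderer.render(scene, viewfinderCam);
  renderer.setRenderTarget(null);
  cameraModel.visible = true;
  renderer.render(scene, headsetCam); // normal headset view, prop included
}
```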
  • the three-dimensional spatial position of the camera model is bound in advance to the three-dimensional spatial position of the user's own avatar; then, based on the real-time three-dimensional spatial position of the user's avatar, the currently displayed three-dimensional spatial position of the camera model is determined, and the camera model is displayed at that position, presenting the effect of the user operating the camera, such as the user's own avatar holding a selfie-stick camera.
  • the viewfinder frame can be the display screen position of the selfie stick camera, and the rendered texture map is placed in the viewfinder frame area, thereby simulating a viewfinder screen preview effect similar to that of a real camera before shooting.
  • the virtual shooting approach of this embodiment renders the VR scene information within the selected range to a texture in real time and then pastes it into the viewfinder frame area, without relying on the sensors of a physical camera module, thus ensuring the picture quality of the captured images.
  • the VR scene content within the dynamic moving shooting range can be presented in the preset viewfinder area in real time. The scene display effect will not be affected by factors such as camera swing, and can well simulate the user's real shooting experience, thereby improving the user's VR experience.
  • Step 103 In response to the instruction to confirm the shooting, obtain the captured image information by recording the real-time viewing screen information in the viewing frame area.
  • the captured image information may specifically include: captured photo information (ie, picture information) or video recording information (ie, recorded video information).
  • photography service or video recording service can be selected according to the actual needs of the user.
  • step 103 may specifically include: obtaining captured image information by recording real-time mapping information in the viewing frame area.
  • if the user selects the photo service, the VR device can, upon receiving the user's instruction confirming shooting, take the real-time single texture map in the viewfinder frame area as the photo information captured by the user.
  • if the user selects the video-recording service, the VR device can, upon receiving the user's instruction confirming shooting, record the real-time texture-map information in the viewfinder frame area as video-frame data, and stop recording when the user confirms that shooting is complete; the recorded video information is generated from the video-frame data recorded during this period.
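  • As a rough illustration of this photo/video distinction, continuing the hypothetical Three.js sketch above: a photo is a single readback of the render target, while a recording accumulates one readback per frame until shooting is confirmed complete (readRenderTargetPixels is a real Three.js call; the surrounding structure is an assumption):

```ts
import * as THREE from 'three';

const W = 512, H = 512;

// Photo service: capture the single texture currently in the viewfinder area.
function takePhoto(renderer: THREE.WebGLRenderer,
                   rtt: THREE.WebGLRenderTarget): Uint8Array {
  const pixels = new Uint8Array(W * H * 4); // RGBA
  renderer.readRenderTargetPixels(rtt, 0, 0, W, H, pixels);
  return pixels; // encoding/saving as an image file happens elsewhere
}

// Video-recording service: collect per-frame textures until shooting completes.
const frames: Uint8Array[] = [];
let recording = false;

function onConfirmShooting(): void { recording = true; }

function onShootingComplete(): Uint8Array[] {
  recording = false;
  return frames.splice(0); // video is assembled from these frames elsewhere
}

// Called once per rendered frame, after the RTT pass.
function afterRenderTick(renderer: THREE.WebGLRenderer,
                         rtt: THREE.WebGLRenderTarget): void {
  if (recording) frames.push(takePhoto(renderer, rtt));
}
```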
  • the virtual reality-based shooting processing method provided by this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, which improves the user's VR experience.
  • as a refinement and extension of the above embodiment, this embodiment provides the specific method shown in Figure 2, which includes:
  • Step 201 In response to the calling instruction of the shooting function, obtain the camera model and obtain the shooting range of the camera model.
  • the calling command of the shooting function can be used to turn on the shooting function, similar to turning on the camera function.
  • the user can trigger input of the invocation instruction for the shooting function through a preset button on a control device (such as a handle controller), thereby invoking the shooting function and experiencing the shooting service.
  • the image information of the user captured by a camera can be monitored; then, based on the user's hand or handheld device (such as a handle controller) in the image information, it is determined whether the preset conditions for displaying interactive component models (component models used for interaction, each pre-bound with an interactive function event) are met.
  • if the preset conditions for displaying interactive component models are met, at least one interactive component model is displayed in the virtual reality space; finally, by recognizing the motion information of the user's hand or handheld device, the interactive function event pre-bound to the interactive component model selected by the user is executed.
  • a camera can be used to capture images of the user's hand or handheld device, and image-recognition technology can be used to judge the hand gesture or the change in position of the handheld device in the image. If it is determined that the user's hand or handheld device has been raised by a certain amplitude, such that the virtual hand or virtual handheld device mapped into the virtual reality space enters the user's current field of view, the interactive component models can be called up and displayed in the virtual reality space.
  • the user can lift the handheld device to call up an interactive component model in the form of a floating ball.
  • Each floating ball represents a control function, and the user can interact based on the floating ball function.
  • these floating balls 1, 2, 3, 4, and 5 may correspond to interactive component models such as "leave the room", "shoot", "post emojis", "post bullet comments", and "2D live broadcast".
  • the position of the user's hand or handheld device is recognized and mapped into the virtual reality space to determine the spatial position of a corresponding click marker.
  • if the spatial position of the click marker matches the spatial position of a target interactive component model among the displayed interactive component models, the target model is determined to be the one selected by the user; finally, the interactive function event pre-bound to the target interactive component model is executed.
  • the user can raise the left-hand handle to call up the interactive component models displayed in the form of floating balls, and then select and click one of them by moving the right-hand handle.
  • on the VR device side, the position of the right-hand handle is recognized from the handle image and mapped into the virtual reality space to determine the spatial position of the corresponding click marker. If that spatial position matches the spatial position of the "shoot" interactive component model, the user has clicked the "shoot" function; finally, the interactive function event pre-bound to the "shoot" interactive component model is executed, which triggers invocation of the shooting function.
  • specifically, the process shown in steps 201 to 203 can then be executed.
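  • The click-marker matching just described can be reduced to a distance test between the mapped hand position and each floating ball. The sketch below is an assumed, minimal version in TypeScript with Three.js (the real system derives positions from camera images; here the mapped position is taken as given, and all names are illustrative):

```ts
import * as THREE from 'three';

interface ComponentModel {
  name: string;            // e.g. 'shoot', 'leave the room'
  object: THREE.Object3D;  // the floating-ball mesh in the VR space
  onSelect: () => void;    // the pre-bound interactive function event
}

// clickMarker: the hand/handle position already mapped into VR space.
function hitTest(clickMarker: THREE.Vector3,
                 components: ComponentModel[],
                 radius = 0.06): ComponentModel | null {
  const worldPos = new THREE.Vector3();
  for (const c of components) {
    c.object.getWorldPosition(worldPos);
    if (worldPos.distanceTo(clickMarker) <= radius) return c; // spatial match
  }
  return null;
}

// Executing the pre-bound event when, say, the 'shoot' ball is matched:
declare const rightHandMarker: THREE.Vector3;   // assumed input
declare const floatingBalls: ComponentModel[];  // assumed models on display
hitTest(rightHandMarker, floatingBalls)?.onSelect();
```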
  • Step 202 In the virtual reality image, select scene information corresponding to the shooting range of the camera model and render it into a texture.
  • Step 203 Display the camera model in the virtual reality space, and place the rendered texture map in the preset viewfinder area of the camera model.
  • the corresponding shooting function panel can be displayed; a camera model in the form of a selfie-stick camera is then displayed in the virtual reality space, and the viewfinder picture is displayed in the viewfinder frame.
  • the user can dynamically adjust the shooting range of the camera model by inputting an adjustment instruction for the shooting range.
  • as one optional way, the shooting-range adjustment instruction can be input through user gestures.
  • correspondingly, on the VR device side, the image information of the user captured by the camera can first be recognized to obtain user gesture information; the user gesture information is then matched against preset gesture information, where each piece of preset gesture information has its own corresponding preset adjustment instruction (used to adjust the shooting range of the camera); the preset adjustment instruction corresponding to the matched preset gesture information can then be taken as the adjustment instruction for the shooting range.
  • moving the user's hand left, right, up, down, upper-left, lower-left, etc. can trigger the camera model, together with its shooting range, to follow that movement; moving the user's hand forward or backward can trigger adjustment of the camera tool's focal length; rotating the user's hand can trigger the camera model and its shooting range to rotate accordingly.
  • this way, users can conveniently control shooting and improve shooting efficiency; a sketch of such a mapping follows.
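  • The hand-movement mapping could look like the following assumed sketch: lateral hand deltas translate the camera model (and with it the shooting range), forward/backward deltas change the focal parameter, and wrist rotation is applied as rotation. The field names and scaling factors are illustrative, not from the patent:

```ts
import * as THREE from 'three';

interface HandSample {
  position: THREE.Vector3; // recognized hand position mapped into VR space
  yaw: number;             // recognized hand rotation about the vertical axis
}

function applyGesture(prev: HandSample, curr: HandSample,
                      cameraModel: THREE.Object3D,
                      viewfinderCam: THREE.PerspectiveCamera): void {
  const delta = curr.position.clone().sub(prev.position);

  // Left/right/up/down hand movement: camera model and shooting range follow.
  cameraModel.position.x += delta.x;
  cameraModel.position.y += delta.y;

  // Forward/backward hand movement: adjust the camera tool's focal parameter.
  viewfinderCam.zoom = THREE.MathUtils.clamp(
    viewfinderCam.zoom - delta.z * 2.0, 0.5, 4.0);
  viewfinderCam.updateProjectionMatrix();

  // Hand rotation: camera model and shooting range rotate accordingly.
  cameraModel.rotation.y += curr.yaw - prev.yaw;
}
```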
  • as another optional way, the shooting-range adjustment instruction can be input through interactive component models.
  • correspondingly, on the VR device side, at least one interactive component model can first be displayed in the virtual reality space, each corresponding to a preset instruction for adjusting the shooting range, such as models representing movement in the four directions up, down, left, and right, and models representing camera rotation and focal-length adjustment; then, by recognizing the image information of the user captured by the camera, the position of the user's hand or handheld device is obtained and mapped into the virtual reality space to determine the spatial position of the click marker of the user's hand or handheld device.
  • if the spatial position of the click marker matches the spatial position of a target interactive component model among these shooting-range-adjustment models, the preset adjustment instruction corresponding to the target interactive component model is taken as the adjustment instruction for the shooting range of the camera.
  • if the spatial position of the click marker of the user's hand or handheld device matches the spatial position of the "move left" interactive component model, the camera model and its shooting range can be triggered to move left; if it matches the spatial position of the "turn left" interactive component model, the camera model and its shooting range can be triggered to rotate left.
  • as yet another optional way, the shooting-range adjustment instruction can be input through a control device.
  • correspondingly, on the VR device side, the shooting-range adjustment instruction sent by the control device can be received; and/or the spatial position change of the control device can be determined by recognizing the image information of the control device captured by the camera, and the adjustment instruction for the shooting range of the camera determined from that spatial position change.
  • the control device can be a handle controller held by the user; the shooting range of the camera viewfinder is bound to the handle, and the user moves/rotates the handle to frame the shot; pushing the joystick forward and backward adjusts the focal length of the viewfinder picture, and so on.
  • physical buttons for up, down, left, right, and rotation control can also be preset on the handle device, through which users can directly initiate adjustments to the shooting range of the camera.
  • Step 204 In response to the adjustment instruction of the shooting range, dynamically adjust the shooting range of the camera model.
  • step 204 may specifically include: dynamically adjusting the shooting range of the camera model by adjusting the spatial position of the camera model (such as up/down/left/right translation and left/right rotation) and/or the focal length of the camera tool (such as Unity's Camera tool).
  • the method of this embodiment may also include: outputting guidance information on the adjustment method of the shooting range.
  • guidance information can be prompted to assist the user's shooting operations, such as "push the joystick forward/backward to adjust focus", "press the B button to exit shooting", or "click the trigger button to take a photo", which improves the efficiency of adjusting the shooting range of the camera model and of other shooting-related operations.
  • Step 205 In the virtual reality image, select in real time the scene information corresponding to the adjusted shooting range and render it to a texture.
  • Step 206 Place the texture map obtained by real-time rendering in the preset viewfinder area of the camera model.
  • step 206 may specifically include: displaying the camera model in motion based on the dynamically adjusted spatial position of the camera model, and placing the texture map obtained by real-time rendering in the preset viewfinder area of the camera model.
  • the VR scene content within the dynamically moving shooting range can be presented in the preset viewfinder frame area in real time; the viewfinder display is not affected by factors such as camera swing, which closely simulates the user's real shooting experience and thus improves the user's VR experience.
  • Step 207 In response to the instruction to confirm the shooting, obtain the captured image information by recording the real-time map information in the viewfinder area.
  • step 207 may specifically include: outputting recording-related prompt information, or displaying a picture producing a photo-flash effect in the viewfinder frame area; after the captured image information is confirmed to have been obtained, prompt information indicating a successful shot can be output.
  • for the video-recording service, text or icon information indicating that recording is in progress can be displayed during recording, and voice prompts can also be output.
  • for the photo service, when the user clicks to take a photo, a blank transition picture can be briefly displayed in the viewfinder frame area and then quickly switched back to the texture-map information, producing a photo-flash effect that brings the user's shooting experience closer to reality.
  • as shown in Figure 5, after a photo is taken successfully, the user can be notified that the photo has been saved, and the photo's storage directory can be displayed.
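  • The flash effect can be produced by briefly swapping the viewfinder material's texture for a blank white frame and then restoring the live render-target texture. A minimal assumed sketch, continuing the Three.js example (the duration and material handling are illustrative):

```ts
import * as THREE from 'three';

// Briefly blank the viewfinder plane, then switch back to the live texture.
function flashViewfinder(viewfinderMat: THREE.MeshBasicMaterial,
                         liveTexture: THREE.Texture,
                         durationMs = 120): void {
  viewfinderMat.map = null;          // no map: the plain material color shows
  viewfinderMat.color.set(0xffffff); // blank white transition frame
  viewfinderMat.needsUpdate = true;  // recompile after the map change
  setTimeout(() => {
    viewfinderMat.map = liveTexture; // quickly switch back to the RTT texture
    viewfinderMat.needsUpdate = true;
  }, durationMs);
}
```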
  • to satisfy users' needs to share the captured photos or videos, after obtaining the captured image information this embodiment may also include: in response to a sharing instruction, sharing the captured image information to a target platform (such as a social platform where the user or other users can access the captured image information), or sharing it with designated users in a contact list through a server (such as sharing the captured image information with the user's designated friends through the server), or sharing it with the users corresponding to other virtual objects in the same virtual reality space.
  • the user can view the other users currently in the same room and select one of them to share the captured image information with; or select another virtual object in the same VR scene by means of user gaze focus, handle rays, etc., and share the captured image information with that virtual object.
  • based on the virtual object's identifier, the system can find the corresponding target user and forward the captured image information shared by the user to that target user, achieving the purpose of sharing the captured photos or videos.
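  • The forwarding step amounts to a lookup from virtual-object identifier to connected user, followed by a server-side send. The sketch below assumes a hypothetical session registry and transport function; none of these names come from the patent:

```ts
// Hypothetical server-side registry: virtual-object id -> connected user id.
const objectToUser = new Map<string, string>([
  ['virtual-object-a', 'user-1'],
  ['virtual-object-b', 'user-2'],
]);

interface CapturedImage { bytes: Uint8Array; mimeType: string; }

// Assumed transport; in a real system this would be the room server's push API.
declare function sendToUser(userId: string, payload: CapturedImage): void;

function shareToVirtualObject(targetObjectId: string,
                              image: CapturedImage): void {
  const userId = objectToUser.get(targetObjectId); // locate target by identifier
  if (userId === undefined) return;                // object not in this room
  sendToUser(userId, image);                       // forward the captured image
}
```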
  • the method of this embodiment may also include: displaying, in the same virtual reality space, the camera models used by other virtual objects while shooting.
  • for example, in the VR scene of a live concert, users may want to photograph the live VR scene, or several avatars may want to take selfies together; therefore, the camera model in use can be displayed while other virtual objects are shooting.
  • suppose that in the VR scene of a live concert there are three virtual objects, a, b, and c, corresponding to three users who have entered the same room. When the system detects that virtual object a is shooting, it can simultaneously display the camera model used by virtual object a to virtual objects b and c, so that the users of b and c intuitively understand that virtual object a is currently shooting. And to present a more realistic feeling, the system can also synchronize the picture information in the viewfinder frame area of that camera model (such as the texture map rendered from the VR scene within the shooting range selected for virtual object a) to the client sides of virtual objects b and c. In this way, a more realistic VR experience can be had when multiple people (virtual objects) take selfies together.
  • to avoid display conflicts when multiple users raise camera models at the same time, displaying the camera models used by other virtual objects may include: in the same virtual reality space, displaying the camera model of one's own virtual object and the camera models of other virtual objects at their respective separate spatial positions. For example, each virtual object's camera model in the same virtual reality space has its own separate spatial position; they do not affect one another, and no camera-model display conflict arises.
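  • Synchronizing camera models and their viewfinder pictures across room members can be sketched as a small message schema: each shooter broadcasts its camera-model pose (each anchored at its own separate spatial position) plus, optionally, a downscaled copy of the viewfinder texture. The schema and transport below are assumptions for illustration only:

```ts
// Assumed wire format for one shooter's state in a shared room.
interface CameraModelSync {
  objectId: string;                    // which virtual object is shooting
  position: [number, number, number];  // separate spatial position per object
  rotation: [number, number, number];
  viewfinderJpeg?: Uint8Array;         // downscaled viewfinder texture, optional
}

declare function broadcastToRoom(msg: CameraModelSync): void; // assumed transport

// Shooter side, e.g. a few times per second:
function publishCameraModel(state: CameraModelSync): void {
  broadcastToRoom(state);
}

// Every other client keeps the latest state per object; the renderer reads
// this map each frame, so each remote camera model stays at its own position.
const remoteModels = new Map<string, CameraModelSync>();
function onCameraModelSync(msg: CameraModelSync): void {
  remoteModels.set(msg.objectId, msg);
}
```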
  • compared with the existing technology, this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, which improves the user's VR experience.
  • this embodiment provides a virtual reality-based shooting processing apparatus.
  • the apparatus includes: an acquisition module 31, a display module 32, and a recording module 33.
  • the acquisition module 31 is configured to acquire the camera model in response to the calling instruction of the shooting function
  • the display module 32 is configured to display the camera model in the virtual reality space, and to display viewfinder picture information in the viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from the virtual reality scene information;
  • the recording module 33 is configured to respond to the instruction to confirm the shooting and obtain the captured image information by recording the real-time viewing frame information in the viewing frame area.
  • the display module 32 is specifically configured to obtain the shooting range of the camera model; select, in the virtual reality image, the scene information corresponding to the shooting range and render it to a texture; and place the rendered texture map within the viewfinder frame area preset on the camera model;
  • the recording module 33 is specifically configured to obtain the captured image information by recording the real-time texture-map information in the viewfinder frame area.
  • the device may also include: an adjustment module;
  • the adjustment module is configured to dynamically adjust the shooting range of the camera model in response to a shooting-range adjustment instruction, before the captured image information is obtained by recording the real-time viewfinder picture information in the viewfinder frame area in response to the instruction confirming shooting;
  • the display module 32 is also configured to select, in real time in the virtual reality image, the scene information corresponding to the adjusted shooting range and render it to a texture; and to place the texture map obtained by real-time rendering within the viewfinder frame area preset on the camera model.
  • the adjustment module is configured to dynamically adjust the shooting range by adjusting the spatial position of the camera model and/or the shooting focal length of the camera tool.
  • the display module 32 is specifically configured to display the camera model in motion based on its dynamically adjusted spatial position, while placing the texture map obtained by real-time rendering within the viewfinder frame area preset on the camera model.
  • the acquisition module 31 is also configured to, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, recognize the image information of the user captured by the camera to obtain user gesture information; match the user gesture information against preset gesture information; and obtain the preset adjustment instruction corresponding to the matched preset gesture information as the shooting-range adjustment instruction.
  • the acquisition module 31 is further configured to, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, receive the shooting-range adjustment instruction sent by the control device; and/or determine the spatial position change of the control device by recognizing the image information of the control device captured by the camera, and determine the shooting-range adjustment instruction from the spatial position change of the control device.
  • the display module 32 is also configured to output guidance information on how to adjust the shooting range.
  • the apparatus also includes: a sharing module;
  • the sharing module is configured to, after the captured image information is obtained, in response to a sharing instruction, share the captured image information to a target platform, or share it with designated users in a contact list through the server, or share it with the users corresponding to other virtual objects in the same virtual reality space.
  • the display module 32 is also configured to display, in the same virtual reality space, the camera models used by other virtual objects while shooting.
  • the display module 32 is specifically configured to display the camera model of one's own virtual object and the camera models of other virtual objects according to their corresponding independent spatial positions in the same virtual reality space.
  • the captured image information includes: captured photo information or video information.
  • the recording module 33 is configured to output recording-related prompt information, or to display a picture producing a photo-flash effect in the viewfinder frame area; and, after the captured image information is confirmed to have been obtained, to output prompt information indicating that the shot was successfully recorded.
  • this embodiment also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by the processor, the virtual reality-based shooting processing method shown in Figures 1 and 2 is implemented.
  • the technical solution of the present disclosure can be embodied in the form of a software product.
  • the software product can be stored in a non-volatile storage medium (which may be a CD-ROM, USB flash drive, removable hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various implementation scenarios of the present disclosure.
  • embodiments of the present disclosure also provide an electronic device, which can be a virtual reality device, such as a VR head-mounted device, and which includes a storage medium and a processor; the storage medium is used to store a computer program, and the processor is used to execute the computer program to implement the virtual reality-based shooting processing method shown in Figures 1 and 2.
  • the above-mentioned physical devices may also include user interfaces, network interfaces, cameras, radio frequency (Radio Frequency, RF) circuits, sensors, audio circuits, WI-FI modules, etc.
  • the user interface may include a display screen (Display), an input unit such as a keyboard (Keyboard), etc.
  • optionally, the user interface may also include a USB interface, a card-reader interface, etc.
  • Optional network interfaces may include standard wired interfaces, wireless interfaces (such as WI-FI interfaces), etc.
  • the above-mentioned physical device structure does not constitute a limitation on the physical device, and may include more or fewer components, or combine certain components, or arrange different components.
  • the storage medium may also include an operating system and a network communication module.
  • the operating system is a program that manages the hardware and software resources of the above-mentioned physical devices and supports the operation of information processing programs and other software and/or programs.
  • the network communication module is used to realize communication between components within the storage medium, as well as communication with other hardware and software in the information processing physical device.
  • this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, which enhances the user's VR experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to a virtual reality-based shooting processing method and apparatus, and an electronic device, in the field of virtual reality technology. The method includes: first, in response to an instruction to invoke a shooting function, acquiring a camera model; then displaying the camera model in a virtual reality space, and displaying viewfinder picture information in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from virtual reality scene information; and, in response to an instruction confirming shooting, obtaining captured image information by recording the real-time viewfinder picture information in the viewfinder frame area. By applying the technical solution of the present disclosure, a user in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, improving the user's VR experience.

Description

Virtual reality-based shooting processing method and apparatus, and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, Chinese patent application No. 202210264018.7, filed on March 17, 2022 and entitled "Virtual reality-based shooting processing method, apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of virtual reality technology, and in particular to a virtual reality-based shooting processing method and apparatus, and an electronic device.
BACKGROUND
With the continuous development of social productivity and of science and technology, demand for virtual reality (VR) technology is growing in all walks of life. VR technology has made great progress and is gradually becoming a new field of science and technology.
At present, VR technology enables users to watch video content such as virtual live broadcasts; for example, after putting on a VR device, a user can enter a virtual concert venue and watch the performance as if present at the scene.
However, the related art cannot meet users' shooting needs while they watch VR videos, which degrades the user's VR experience.
SUMMARY
In view of this, the present disclosure provides a virtual reality-based shooting processing method and apparatus, and an electronic device, with the main purpose of addressing the technical problem that the existing technology cannot meet users' shooting needs while they watch VR videos, which degrades the user's VR experience.
In a first aspect, the present disclosure provides a virtual reality-based shooting processing method, including:
in response to an instruction to invoke a shooting function, acquiring a camera model;
displaying the camera model in a virtual reality space, and displaying viewfinder picture information in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from virtual reality scene information;
in response to an instruction confirming shooting, obtaining captured image information by recording the real-time viewfinder picture information in the viewfinder frame area.
In a second aspect, the present disclosure provides a virtual reality-based shooting processing apparatus, including:
an acquisition module configured to acquire a camera model in response to an instruction to invoke a shooting function;
a display module configured to display the camera model in a virtual reality space and display viewfinder picture information in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from virtual reality scene information;
a recording module configured to obtain captured image information, in response to an instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area.
In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the virtual reality-based shooting processing method described in the first aspect is implemented.
In a fourth aspect, the present disclosure provides an electronic device, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor; when the processor executes the computer program, the virtual reality-based shooting processing method described in the first aspect is implemented.
By means of the above technical solutions, and compared with the existing technology, the virtual reality-based shooting processing method and apparatus and the electronic device provided by the present disclosure can provide users with shooting services while they watch VR videos. Specifically, upon receiving an instruction to invoke the shooting function, the VR device can display the camera model in the virtual reality space and display viewfinder picture information in the viewfinder frame area preset on the camera model, where the viewfinder picture information is obtained from the virtual reality scene information; upon receiving an instruction confirming shooting, the VR device can obtain captured image information by recording the real-time viewfinder picture information in the viewfinder frame area. By applying the technical solution of the present disclosure, a user in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, improving the user's VR experience.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, specific embodiments of the present disclosure are set forth below.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
In order to describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Figure 1 is a schematic flowchart of a virtual reality-based shooting processing method provided by an embodiment of the present disclosure;
Figure 2 is a schematic flowchart of another virtual reality-based shooting processing method provided by an embodiment of the present disclosure;
Figure 3 is a schematic diagram of an example display effect of interactive component models in the form of floating balls, provided by an embodiment of the present disclosure;
Figure 4 is a schematic diagram of an example display effect of a camera model provided by an embodiment of the present disclosure;
Figure 5 is a schematic diagram of an example display effect of saving a captured photo, provided by an embodiment of the present disclosure;
Figure 6 is a schematic structural diagram of a virtual reality-based shooting processing apparatus provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
To address the technical problem that the existing technology cannot meet users' shooting needs while they watch VR videos, which degrades the user's VR experience, this embodiment provides a virtual reality-based shooting processing method, as shown in Figure 1, which can be applied on the device side of VR equipment. The method includes:
Step 101: in response to an instruction to invoke the shooting function, acquire a camera model.
The camera model may be a preset model related to shooting equipment, such as a smartphone model or a selfie-stick camera model.
Step 102: display the camera model in the virtual reality space, and display viewfinder picture information in the viewfinder frame area preset on the camera model.
The viewfinder picture information is obtained from the virtual reality scene information.
Optionally, displaying viewfinder picture information in the viewfinder frame area preset on the camera model may specifically include: first obtaining the shooting range of the camera model; then selecting, in the virtual reality image, the scene information corresponding to the shooting range and rendering it to a texture; and finally placing the rendered texture map in the viewfinder frame area preset on the camera model.
The shooting range of the camera refers to the range of the virtual reality scene that the user wants to capture while watching a VR video. In this embodiment, the parameters controlling the shooting range of the camera, such as the field of view (FOV), can be preset. The shooting range can be adjusted according to the user's needs, so as to capture the desired photos or videos.
The scene information may include the virtual scene content visible within the shooting range. For example, Unity's Camera tool can be used to select, in the virtual reality image, the scene information corresponding to the shooting range of the camera model and render it to a texture (Render To Texture, RTT). The rendered texture map is then placed in the viewfinder frame area preset on the camera model, thereby displaying the viewfinder picture information in that area.
The viewfinder frame area can be preset according to actual needs; its purpose is to let the user preview the effect of the selected scene-information map before confirming the shot.
For example, the three-dimensional spatial position of the camera model is bound in advance to the three-dimensional spatial position of the user's own avatar; then, based on the real-time three-dimensional spatial position of the user's avatar, the currently displayed three-dimensional spatial position of the camera model is determined, and the camera model is displayed at that position, presenting the effect of the user operating the camera, such as the user's own avatar holding a selfie-stick camera. The viewfinder frame can be the display-screen position of the selfie-stick camera; placing the rendered texture map in that viewfinder frame area simulates a viewfinder preview similar to that of a real camera before shooting.
Unlike existing screen-recording approaches, the virtual shooting approach of this embodiment renders the VR scene information within the selected range to a texture in real time and then pastes it into the viewfinder frame area, without relying on the sensors of a physical camera module, thereby guaranteeing the picture quality of the captured images. Moreover, while the camera moves, the VR scene content within the dynamically moving shooting range can be presented in the preset viewfinder frame area in real time; the viewfinder display is not affected by factors such as camera swing, which closely simulates the user's real shooting experience and thus improves the user's VR experience.
Step 103: in response to an instruction confirming shooting, obtain captured image information by recording the real-time viewfinder picture information in the viewfinder frame area.
Optionally, the captured image information may specifically include: captured photo information (i.e., picture information) or video-recording information (i.e., recorded video information). In this embodiment, a photo service or a video-recording service can be selected according to the user's actual needs. Based on the above optional manner of obtaining the viewfinder picture information via texture maps, step 103 may specifically include: obtaining the captured image information by recording the real-time texture-map information in the viewfinder frame area.
For example, if the user selects the photo service, the VR device can, upon receiving the user's instruction confirming shooting, take the real-time single texture map in the viewfinder frame area as the photo information captured by the user. If the user selects the video-recording service, the VR device can, upon receiving the user's instruction confirming shooting, record the real-time texture-map information in the viewfinder frame area as video-frame data, stop recording when the user confirms that shooting is complete, and generate the recorded video information from the video-frame data recorded during this period.
Compared with the existing technology, the virtual reality-based shooting processing method provided by this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, improving the user's VR experience.
Further, as a refinement and extension of the above embodiment, and to fully illustrate the specific implementation process of the method of this embodiment, this embodiment provides the specific method shown in Figure 2, which includes:
Step 201: in response to an instruction to invoke the shooting function, acquire the camera model and obtain the shooting range of the camera model.
The instruction invoking the shooting function can be used to turn on the shooting function, similar to opening a camera application. For example, the user can trigger input of the invocation instruction through a preset button on a control device (such as a handle controller), thereby invoking the shooting function and experiencing the shooting service.
There are also various other optional ways for the user to input the invocation instruction. Compared with triggering the shooting function via physical device buttons, the following optional way proposes an improvement that performs VR control without physical buttons, alleviating the technical problem that physical buttons are prone to damage, which can easily affect the user's control.
Specifically, in this optional way, the image information of the user captured by a camera can be monitored; then, based on the user's hand or handheld device (such as a handle controller) in the image information, it is determined whether the preset conditions for displaying interactive component models (component models used for interaction, each pre-bound with an interactive function event) are met. If so, at least one interactive component model is displayed in the virtual reality space; finally, by recognizing the motion information of the user's hand or handheld device, the interactive function event pre-bound to the interactive component model selected by the user is executed.
For example, a camera can capture images of the user's hand or handheld device, and image-recognition technology can be used to judge the hand gesture or the change in position of the handheld device in the image. If it is determined that the user's hand or handheld device has been raised by a certain amplitude, such that the virtual hand or virtual handheld device mapped into the virtual reality space enters the user's current field of view, the interactive component models can be called up and displayed in the virtual reality space. As shown in Figure 3, based on image-recognition technology, the user can raise the handheld device to call up interactive component models in the form of floating balls, each of which represents a control function with which the user can interact. As shown in Figure 3, floating balls 1, 2, 3, 4, and 5 may correspond to interactive component models such as "leave the room", "shoot", "post emojis", "post bullet comments", and "2D live broadcast".
After the floating-ball interactive component models are called up, based on subsequently monitored images of the user's hand or handheld device, the position of the hand or handheld device is recognized and mapped into the virtual reality space to determine the spatial position of a corresponding click marker. If the spatial position of the click marker matches the spatial position of a target interactive component model among the displayed models, the target model is determined to be the interactive component model selected by the user; finally, the interactive function event pre-bound to the target interactive component model is executed.
The user can raise the left-hand handle to call up the floating-ball interactive component models, and then move the right-hand handle to select and click one of them. On the VR device side, based on the image of the user's handle, the position of the right-hand handle is recognized and mapped into the virtual reality space to determine the spatial position of the corresponding click marker. If that spatial position matches the spatial position of the "shoot" interactive component model, the user has clicked the "shoot" function; finally, the interactive function event pre-bound to the "shoot" interactive component model is executed, i.e., the shooting function is invoked, which may specifically execute the process shown in steps 201 to 203.
Step 202: in the virtual reality image, select the scene information corresponding to the shooting range of the camera model and render it to a texture.
Step 203: display the camera model in the virtual reality space, and place the rendered texture map in the viewfinder frame area preset on the camera model.
For example, as shown in Figure 4, after the user clicks the floating ball of the "shoot" function, the corresponding shooting function panel can be displayed; a camera model in the form of a selfie-stick camera is then displayed in the virtual reality space, and the viewfinder picture is displayed in the viewfinder frame.
In this embodiment, if the user wants to capture the image information within a desired shooting range, the shooting range of the camera model can be dynamically adjusted by inputting a shooting-range adjustment instruction.
The user can input such an adjustment instruction in several optional ways. As one of them, the adjustment instruction can be input through user gestures. Correspondingly, on the VR device side, the image information of the user captured by the camera can first be recognized to obtain user gesture information; the user gesture information is then matched against preset gesture information, where each piece of preset gesture information has its own corresponding preset adjustment instruction (used to adjust the shooting range of the camera); the preset adjustment instruction corresponding to the matched preset gesture information can then be obtained as the shooting-range adjustment instruction.
For example, moving the user's hand left, right, up, down, upper-left, lower-left, etc. can trigger the camera model, together with its shooting range, to follow that movement; moving the hand forward or backward can trigger adjustment of the camera tool's focal length; rotating the hand can trigger the camera model and its shooting range to rotate accordingly. This optional way makes shooting control convenient for the user and improves shooting efficiency.
As another optional way, the shooting-range adjustment instruction can be input through interactive component models. Correspondingly, on the VR device side, at least one interactive component model can first be displayed in the virtual reality space, each corresponding to a preset instruction for adjusting the shooting range, for example, models representing movement in the four directions up, down, left, and right, and models representing camera rotation and focal-length adjustment. Then, by recognizing the image information of the user captured by the camera, the position of the user's hand or handheld device is obtained and mapped into the virtual reality space to determine the spatial position of the click marker of the user's hand or handheld device. If the spatial position of the click marker matches the spatial position of a target interactive component model among these shooting-range-adjustment models, the preset adjustment instruction corresponding to the target model is taken as the adjustment instruction for the shooting range of the camera.
For example, if the spatial position of the click marker of the user's hand or handheld device matches the spatial position of the "move left" interactive component model, the camera model and its shooting range can be triggered to move left; if it matches the spatial position of the "turn left" interactive component model, the camera model and its shooting range can be triggered to rotate left. This optional way requires no physical-button control, avoiding the situation where user control is affected because physical buttons are easily damaged.
As yet another optional way, the shooting-range adjustment instruction can be input through a control device. Correspondingly, on the VR device side, the adjustment instruction sent by the control device can be received; and/or the spatial position change of the control device can be determined by recognizing the image information of the control device captured by the camera, and the adjustment instruction for the shooting range of the camera determined from that spatial position change.
For example, the control device can be a handle controller held by the user; the shooting range of the camera viewfinder is bound to the handle, and the user moves/rotates the handle to frame the shot; pushing the joystick forward and backward adjusts the focal length of the viewfinder picture, and so on. In addition, physical buttons for up/down/left/right and rotation control can be preset on the handle device, through which the user can directly initiate adjustments to the shooting range of the camera.
Step 204: in response to the shooting-range adjustment instruction, dynamically adjust the shooting range of the camera model.
Illustratively, step 204 may specifically include: dynamically adjusting the shooting range of the camera model by adjusting the spatial position of the camera model (such as up/down/left/right translation and left/right rotation) and/or the focal length of the camera tool (such as Unity's Camera tool).
To guide the user in adjusting the shooting range of the camera model, optionally, the method of this embodiment may also include: outputting guidance information on how to adjust the shooting range. For example, guidance such as "push the joystick forward/backward to adjust focus", "press the B button to exit shooting", or "click the trigger button to take a photo" can be prompted to assist the user's shooting operations, improving the efficiency of adjusting the shooting range of the camera model and of other shooting-related operations.
Step 205: in the virtual reality image, select in real time the scene information corresponding to the adjusted shooting range and render it to a texture.
Step 206: place the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model.
Illustratively, step 206 may specifically include: displaying the camera model in motion based on its dynamically adjusted spatial position, while placing the texture map obtained by real-time rendering in the preset viewfinder frame area. In this embodiment, while the camera (such as the selfie-stick camera shown in Figure 4) moves, the VR scene content within the dynamically moving shooting range can be presented in the preset viewfinder frame area in real time; the viewfinder display is not affected by factors such as camera swing, which closely simulates the user's real shooting experience and thus improves the user's VR experience.
Step 207: in response to the instruction confirming shooting, obtain the captured image information by recording the real-time texture-map information in the viewfinder frame area.
For this embodiment, in order to present an effect closer to real shooting, optionally, step 207 may specifically include: outputting recording-related prompt information, or displaying a picture producing a photo-flash effect in the viewfinder frame area; and, after the captured image information is confirmed to have been obtained, outputting prompt information indicating that the shot was successfully recorded.
For example, for the video-recording service, text or icon information indicating recording in progress can be displayed during recording, and voice prompts can also be output. For the photo service, when the user clicks to take a photo, a blank transition picture can be briefly displayed in the viewfinder frame area and then quickly switched back to the texture-map information, producing a photo-flash effect that brings the user's shooting experience closer to reality. As shown in Figure 5, after a photo is taken successfully, the user can be notified that the photo has been saved, and its storage directory can be displayed.
Further, to satisfy users' needs to share the captured photos or videos, after step 207 this embodiment may also include: in response to a sharing instruction, sharing the captured image information to a target platform (such as a social platform where the user or other users can access the captured image information), or sharing it with designated users in a contact list through a server (such as sharing the captured image information with the user's designated friends through the server), or sharing it with the users corresponding to other virtual objects in the same virtual reality space.
For example, the user can view the other users currently in the same room and select one of them to share the captured image information with; or select another virtual object in the same VR scene by means of user gaze focus, handle rays, etc., and share the captured image information with that virtual object. Based on the virtual object's identifier, the system can find the corresponding target user and forward the captured image information shared by the user to that target user, achieving the purpose of sharing the captured photos or videos.
To give users a VR experience closer to reality, further optionally, the method of this embodiment may also include: displaying, in the same virtual reality space, the camera models used by other virtual objects while shooting. For example, in the VR scene of a live concert, users may want to photograph the live VR scene, or several avatars may want to take selfies together; therefore, the camera model in use can be displayed while other virtual objects are shooting. Suppose that in the VR scene of a live concert there are three virtual objects, a, b, and c, corresponding to three users who have entered the same room. When the system detects that virtual object a is shooting, it can simultaneously display the camera model used by virtual object a to virtual objects b and c, so that the users of b and c intuitively understand that virtual object a is currently shooting. And in order to present a more realistic feeling, the system can also synchronize the picture information in the viewfinder frame area of that camera model (such as the texture map rendered from the VR scene within the shooting range selected for virtual object a) to the client sides of virtual objects b and c. In this way, a more realistic VR experience can be had when multiple people (virtual objects) take selfies together.
To avoid display conflicts when multiple people raise camera models at the same time, optionally, displaying the camera models used by other virtual objects in the same virtual reality space may specifically include: in the same virtual reality space, displaying the camera model of one's own virtual object and the camera models of other virtual objects at their respective separate spatial positions. For example, each virtual object's camera model in the same virtual reality space has its own separate spatial position; they do not affect one another, and no camera-model display conflict arises.
Compared with the existing technology, this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, improving the user's VR experience.
Further, as a specific implementation of the methods shown in Figures 1 and 2, this embodiment provides a virtual reality-based shooting processing apparatus. As shown in Figure 6, the apparatus includes: an acquisition module 31, a display module 32, and a recording module 33.
The acquisition module 31 is configured to acquire a camera model in response to an instruction to invoke the shooting function;
the display module 32 is configured to display the camera model in the virtual reality space and display viewfinder picture information in the viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from the virtual reality scene information;
the recording module 33 is configured to obtain captured image information, in response to an instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area.
In a specific application scenario, the display module 32 is specifically configured to obtain the shooting range of the camera model; select, in the virtual reality image, the scene information corresponding to the shooting range and render it to a texture; and place the rendered texture map in the viewfinder frame area preset on the camera model;
correspondingly, the recording module 33 is specifically configured to obtain the captured image information by recording the real-time texture-map information in the viewfinder frame area.
In a specific application scenario, the apparatus may also include: an adjustment module;
the adjustment module is configured to dynamically adjust the shooting range of the camera model in response to a shooting-range adjustment instruction, before the captured image information is obtained, in response to the instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area;
the display module 32 is also configured to select, in real time in the virtual reality image, the scene information corresponding to the adjusted shooting range and render it to a texture; and to place the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model.
In a specific application scenario, the adjustment module is specifically configured to dynamically adjust the shooting range by adjusting the spatial position of the camera model and/or the focal length of the camera tool.
In a specific application scenario, the display module 32 is specifically configured to display the camera model in motion based on its dynamically adjusted spatial position, while placing the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model.
In a specific application scenario, the acquisition module 31 is also configured to, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, recognize the image information of the user captured by the camera to obtain user gesture information; match the user gesture information against preset gesture information; and obtain the preset adjustment instruction corresponding to the matched preset gesture information as the shooting-range adjustment instruction.
In a specific application scenario, the acquisition module 31 is also configured to, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, display at least one interactive component model in the virtual reality space, where each interactive component model corresponds to a preset instruction for adjusting the shooting range; determine the spatial position of the click marker of the user's hand or handheld device by recognizing the image information of the user captured by the camera; and, if the spatial position of the click marker matches the spatial position of a target interactive component model among the interactive component models, take the preset shooting-range adjustment instruction corresponding to the target interactive component model as the shooting-range adjustment instruction.
In a specific application scenario, the acquisition module 31 is also configured to, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, receive the shooting-range adjustment instruction sent by the control device; and/or determine the spatial position change of the control device by recognizing the image information of the control device captured by the camera, and determine the shooting-range adjustment instruction from the spatial position change of the control device.
In a specific application scenario, the display module 32 is also configured to output guidance information on how to adjust the shooting range.
In a specific application scenario, the apparatus also includes: a sharing module;
the sharing module is configured to, after the captured image information is obtained, in response to a sharing instruction, share the captured image information to a target platform, or share it with designated users in a contact list through a server, or share it with the users corresponding to other virtual objects in the same virtual reality space.
In a specific application scenario, the display module 32 is also configured to display, in the same virtual reality space, the camera models used by other virtual objects while shooting.
In a specific application scenario, the display module 32 is specifically further configured to display, in the same virtual reality space, the camera model of one's own virtual object and the camera models of other virtual objects at their respective separate spatial positions.
In a specific application scenario, optionally, the captured image information includes: captured photo information or video-recording information.
In a specific application scenario, the recording module 33 is specifically configured to output recording-related prompt information, or to display a picture producing a photo-flash effect in the viewfinder frame area; and, after the captured image information is confirmed to have been obtained, to output prompt information indicating that the shot was successfully recorded.
It should be noted that, for other corresponding descriptions of the functional units involved in the virtual reality-based shooting processing apparatus provided by this embodiment, reference may be made to the corresponding descriptions of Figures 1 and 2, which are not repeated here.
Based on the methods shown in Figures 1 and 2, correspondingly, this embodiment also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the virtual reality-based shooting processing method shown in Figures 1 and 2 is implemented.
Based on such an understanding, the technical solution of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, USB flash drive, removable hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various implementation scenarios of the present disclosure.
Based on the methods shown in Figures 1 and 2 and the virtual apparatus embodiment shown in Figure 6, in order to achieve the above objects, an embodiment of the present disclosure also provides an electronic device, which may specifically be a virtual reality device, such as a VR head-mounted device. The device includes a storage medium and a processor; the storage medium is used to store a computer program; the processor is used to execute the computer program to implement the virtual reality-based shooting processing method shown in Figures 1 and 2.
Optionally, the above physical device may also include a user interface, a network interface, a camera, radio frequency (RF) circuits, sensors, audio circuits, a WI-FI module, and so on. The user interface may include a display screen, an input unit such as a keyboard, and optionally also a USB interface, a card-reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface), etc.
A person skilled in the art will understand that the physical device structure provided by this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may also include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above physical device and supports the operation of the information processing program and other software and/or programs. The network communication module is used to implement communication among the components within the storage medium, as well as communication with other hardware and software in the information processing physical device.
From the description of the above embodiments, a person skilled in the art will clearly understand that the present disclosure can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. By applying the solution of this embodiment, and compared with the existing technology, this embodiment can provide users with shooting services, such as photo-taking or video-recording services, while they watch VR videos, so that users in the virtual reality environment can experience the feeling of shooting with a camera in a real environment, improving the user's VR experience.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

  1. A virtual reality-based shooting processing method, characterized by comprising:
    in response to an instruction to invoke a shooting function, acquiring a camera model;
    displaying the camera model in a virtual reality space, and displaying viewfinder picture information in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from virtual reality scene information;
    in response to an instruction confirming shooting, obtaining captured image information by recording the real-time viewfinder picture information in the viewfinder frame area.
  2. The method according to claim 1, characterized in that displaying viewfinder picture information in the viewfinder frame area preset on the camera model comprises:
    obtaining a shooting range of the camera model;
    selecting, in a virtual reality image, the scene information corresponding to the shooting range and rendering it to a texture;
    placing the rendered texture map in the viewfinder frame area preset on the camera model;
    and obtaining captured image information by recording the real-time viewfinder picture information in the viewfinder frame area comprises:
    obtaining the captured image information by recording the real-time texture-map information in the viewfinder frame area.
  3. The method according to claim 2, characterized in that, before the captured image information is obtained, in response to the instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area, the method further comprises:
    in response to a shooting-range adjustment instruction, dynamically adjusting the shooting range of the camera model;
    selecting, in real time in the virtual reality image, the scene information corresponding to the adjusted shooting range and rendering it to a texture;
    placing the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model.
  4. The method according to claim 3, characterized in that dynamically adjusting the shooting range of the camera model in response to the shooting-range adjustment instruction comprises:
    dynamically adjusting the shooting range by adjusting the spatial position of the camera model and/or the focal length of a camera tool.
  5. The method according to claim 4, characterized in that placing the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model comprises:
    displaying the camera model in motion based on its dynamically adjusted spatial position, while placing the texture map obtained by real-time rendering in the viewfinder frame area preset on the camera model.
  6. The method according to claim 3, characterized in that, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, the method further comprises:
    recognizing image information of the user captured by a camera to obtain user gesture information;
    matching the user gesture information against preset gesture information;
    obtaining the preset adjustment instruction corresponding to the matched preset gesture information as the shooting-range adjustment instruction.
  7. The method according to claim 3, characterized in that, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, the method further comprises:
    displaying at least one interactive component model in the virtual reality space, wherein each interactive component model corresponds to a preset instruction for adjusting the shooting range;
    determining the spatial position of a click marker of the user's hand or handheld device by recognizing image information of the user captured by a camera;
    if the spatial position of the click marker matches the spatial position of a target interactive component model among the interactive component models, taking the preset shooting-range adjustment instruction corresponding to the target interactive component model as the shooting-range adjustment instruction.
  8. The method according to claim 3, characterized in that, before the shooting range of the camera model is dynamically adjusted in response to the shooting-range adjustment instruction, the method further comprises:
    receiving the shooting-range adjustment instruction sent by a control device; and/or,
    determining a spatial position change of the control device by recognizing image information of the control device captured by a camera, and determining the shooting-range adjustment instruction from the spatial position change of the control device.
  9. The method according to any one of claims 3 to 8, characterized in that the method further comprises:
    outputting guidance information on how to adjust the shooting range.
  10. The method according to claim 1, characterized in that, after the captured image information is obtained, the method further comprises:
    in response to a sharing instruction, sharing the captured image information to a target platform, or sharing it with designated users in a contact list through a server, or sharing it with users corresponding to other virtual objects in the same virtual reality space.
  11. The method according to claim 1, characterized in that the method further comprises:
    displaying, in the same virtual reality space, the camera models used by other virtual objects while shooting.
  12. The method according to claim 11, characterized in that displaying, in the same virtual reality space, the camera models used by other virtual objects while shooting comprises:
    displaying, in the same virtual reality space, the camera model of one's own virtual object and the camera models of other virtual objects at their respective separate spatial positions.
  13. The method according to claim 1, characterized in that the captured image information comprises: captured photo information or video-recording information.
  14. The method according to claim 13, characterized in that obtaining captured image information, in response to the instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area comprises:
    outputting recording-related prompt information, or displaying, in the viewfinder frame area, a picture producing a photo-flash effect;
    after the captured image information is confirmed to have been obtained, outputting prompt information indicating that the shot was successfully recorded.
  15. A virtual reality-based shooting processing apparatus, characterized by comprising:
    an acquisition module configured to acquire a camera model in response to an instruction to invoke a shooting function;
    a display module configured to display the camera model in a virtual reality space and display viewfinder picture information in a viewfinder frame area preset on the camera model, wherein the viewfinder picture information is obtained from virtual reality scene information;
    a recording module configured to obtain captured image information, in response to an instruction confirming shooting, by recording the real-time viewfinder picture information in the viewfinder frame area.
  16. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1 to 14 is implemented.
  17. An electronic device, comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, characterized in that, when the processor executes the computer program, the method according to any one of claims 1 to 14 is implemented.
PCT/CN2023/077240 2022-03-17 2023-02-20 Virtual reality-based shooting processing method and apparatus, and electronic device WO2023174009A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210264018.7 2022-03-17
CN202210264018.7A CN116828131A (zh) 2022-03-17 Virtual reality-based shooting processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2023174009A1 true WO2023174009A1 (zh) 2023-09-21

Family

ID=88022292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077240 WO2023174009A1 (zh) 2022-03-17 2023-02-20 Virtual reality-based shooting processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN116828131A (zh)
WO (1) WO2023174009A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341603A (zh) * 2016-09-29 2017-01-18 网易(杭州)网络有限公司 用于虚拟现实环境的取景方法、装置以及虚拟现实设备
CN109952757A (zh) * 2017-08-24 2019-06-28 腾讯科技(深圳)有限公司 基于虚拟现实应用录制视频的方法、终端设备及存储介质
CN111701238A (zh) * 2020-06-24 2020-09-25 腾讯科技(深圳)有限公司 虚拟画卷的显示方法、装置、设备及存储介质
CN113852838A (zh) * 2021-09-24 2021-12-28 北京字跳网络技术有限公司 视频数据生成方法、装置、电子设备及可读存储介质
CN114155322A (zh) * 2021-12-01 2022-03-08 北京字跳网络技术有限公司 一种场景画面的展示控制方法、装置以及计算机存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117425076A (zh) * 2023-12-18 2024-01-19 湖南快乐阳光互动娱乐传媒有限公司 Virtual camera shooting method and system
CN117425076B (zh) * 2023-12-18 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Virtual camera shooting method and system

Also Published As

Publication number Publication date
CN116828131A (zh) 2023-09-29


Legal Events

Date | Code | Title | Description
121 | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23769517; Country of ref document: EP; Kind code of ref document: A1)
WWE | Wipo information: entry into national phase (Ref document number: 2023769517; Country of ref document: EP)
ENP | Entry into the national phase (Ref document number: 2023769517; Country of ref document: EP; Effective date: 20241017)