WO2019223158A1 - VR image generation method, device, computer equipment and storage medium - Google Patents

VR image generation method, device, computer equipment and storage medium

Info

Publication number
WO2019223158A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional images
dimensional
spatial
steps
Prior art date
Application number
PCT/CN2018/102873
Other languages
English (en)
French (fr)
Inventor
黄俊凯
黄晓霞
靳倩慧
盛亮
杜立
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019223158A1 publication Critical patent/WO2019223158A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the embodiments of the present application relate to the field of intelligent prompts, and in particular, to a method, a device, a computer device, and a storage medium for generating a VR image.
  • a three-dimensional model of a house interior needs to be made first, and then a virtual three-dimensional house landscape can be viewed through a VR device.
  • the inventor of this application found through research that the three-dimensional visualized house-selection system in the prior art first needs to construct a three-dimensional model of the house interior and is therefore time-consuming and labor-intensive. Moreover, because three-dimensional modeling is costly, the number of room types that can be visualized in three dimensions is limited; it is impossible to display every house through three-dimensional modeling, so only a few representative room types can be modeled, and personalized three-dimensional house-viewing cannot be provided to users. Therefore, the three-dimensional visualized house-selection system in the prior art is time-consuming and laborious to construct, has a narrow scope of application, and cannot meet users' diverse needs.
  • the embodiments of the present application provide a VR image generating method, device, computer equipment, and storage medium capable of collecting and synthesizing VR images through a portable mobile terminal.
  • a technical solution adopted by the embodiments of the present application is to provide a VR image generation method, including the following steps: obtaining a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial directions represented by each of the two-dimensional images to generate a VR image.
  • an embodiment of the present application further provides a VR image generating device, including: an acquisition module, used to obtain a preset spatial shooting order in the horizontal and vertical directions; a processing module, used to sequentially acquire at least two two-dimensional images in different spatial directions according to the spatial shooting order; and an execution module, used to stitch the at least two two-dimensional images according to the spatial directions represented by each of the two-dimensional images to generate a VR image.
  • an embodiment of the present application further provides a computer device including a memory and a processor.
  • the memory stores computer-readable instructions.
  • the processor executes the following steps of a VR image generation method: acquiring a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial direction represented by each of the two-dimensional images to generate a VR image.
  • an embodiment of the present application further provides a non-volatile storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to execute the following steps of a VR image generation method: acquiring a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial direction represented by each of the two-dimensional images to generate a VR image.
  • a user can shoot and synthesize VR images of a house interior through a portable mobile terminal, and can obtain a three-dimensional picture of the house without spending effort or money on three-dimensional modeling of the house. This can meet users' needs for VR image acquisition of any house, or for rapid synthesis of VR images in other application scenarios.
  • FIG. 1 is a schematic flowchart of a VR image generating method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for generating a VR image with a preset starting position according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of naming a two-dimensional image according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a basic process of cutting an image of a cross region according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of identifying a cross image area according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a specific process for identifying a proportion of cross image regions according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of an angle rotation method when a two-dimensional image is spliced according to an embodiment of the present application
  • FIG. 8 is a block diagram of a basic structure of a VR image generating device according to an embodiment of the present application.
  • FIG. 9 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
  • the terms "terminal" and "terminal equipment" as used herein include devices with wireless signal receivers, devices that have only receive-capable wireless signal receivers without transmit capability, and devices with both receiving and transmitting hardware capable of two-way communication over a two-way communication link.
  • Such equipment may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, fax, and/or data communication capabilities; PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver.
  • a "terminal" may be portable, transportable, installed in a vehicle (air, sea, and/or land), or suitable and/or configured to operate locally and/or in a distributed fashion at any other location on Earth and/or in space.
  • the "terminal" and "terminal equipment" used herein may also be communication terminals, Internet terminals, or music/video playback terminals, such as a PDA, MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, and may also be devices such as smart TVs and set-top boxes.
  • the terminal includes a 720° gimbal and a portable shooting terminal (smartphone, camera, PDA, etc.), where the portable shooting terminal is mounted on the 720° gimbal, and the 720° gimbal can carry the portable shooting terminal through a full range of rotation.
  • the VR image generation method in this embodiment is not limited to VR image synthesis inside a house, and this technical solution can be applied to VR image synthesis in a real environment.
  • FIG. 1 is a schematic flowchart of a VR image generating method according to this embodiment.
  • a VR image generating method includes the following steps:
  • the spatial shooting order is the preset order in which the portable shooting terminal shoots. For example, during spatial shooting, shooting may first proceed in the horizontal direction and then in the vertical direction, or first in the vertical direction and then in the horizontal direction, or alternate between the horizontal and vertical directions.
  • the number and magnitude of rotations of the portable shooting terminal in different planes are limited by the angle of the lens of the portable shooting terminal.
  • a portable shooting terminal with a normal lens needs to collect 4 photos in the horizontal plane and 2 photos in the vertical direction; the 720° gimbal therefore needs to rotate 3 times in the horizontal direction, 90° per rotation, and once in the vertical direction.
  • when a 150° wide-angle lens is used for shooting, 3 photos need to be collected in the horizontal plane and 2 photos in the vertical direction;
  • the 720° gimbal then needs to rotate twice in the horizontal direction, 150° per rotation, and once in the vertical direction.
  • in some embodiments, before obtaining the spatial shooting order, the user first inputs the model of the portable shooting terminal used or the shooting angle of its lens, and an appropriate spatial shooting order is then adapted.
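As an illustration only, the lens-dependent shooting orders described above could be encoded as a small preset table. The names and structure below are hypothetical, not from the application:

```python
# Hypothetical presets following the two lens examples in the text:
# a normal lens takes 4 horizontal shots 90 degrees apart, while a
# 150-degree wide-angle lens takes 3; both add 2 vertical shots.
SHOOTING_PRESETS = {
    "normal": {"h_shots": 4, "h_step_deg": 90, "v_shots": 2},
    "wide_150": {"h_shots": 3, "h_step_deg": 150, "v_shots": 2},
}

def spatial_shooting_order(lens: str) -> list:
    """Expand a preset into an ordered list of (plane, angle) shots."""
    preset = SHOOTING_PRESETS[lens]
    order = [("horizontal", i * preset["h_step_deg"])
             for i in range(preset["h_shots"])]
    # The two vertical shots face opposite directions, 180 degrees apart.
    order += [("vertical", 180 * i) for i in range(preset["v_shots"])]
    return order
```

A lookup of this kind matches the text's suggestion that the user supplies the terminal model or lens angle and an appropriate order is adapted.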
  • S1200 Obtain at least two two-dimensional images in different spatial directions in sequence according to the spatial shooting order;
  • Two-dimensional images in different spatial directions are sequentially acquired according to the spatial shooting order.
  • specifically, the spatial shooting order in which the portable shooting terminal collects 4 photos in the horizontal plane and 2 photos in the vertical direction is taken as an example for illustration.
  • when shooting indoors, place the 720° gimbal at a relatively central position in the shooting space, then adjust the shooting direction of the portable shooting terminal to any angle in the horizontal direction.
  • during image acquisition, the portable shooting terminal first captures a two-dimensional picture at the initial position, then rotates 90° at a time in the horizontal plane, taking one picture after each rotation before continuing to rotate. After all four photos in the horizontal plane have been collected, the 720° gimbal flips upward, the portable capture device collects the first photo in the vertical direction, and the gimbal then rotates 180° to take the photo on the other side of the vertical direction; the collection of two-dimensional pictures is then complete.
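The indoor capture loop above can be sketched as an ordered list of gimbal actions. This is a schematic trace under the 4-horizontal/2-vertical order, not code from the application:

```python
def capture_sequence() -> list:
    """Trace of the capture loop: shoot the initial position, rotate 90
    degrees three times in the horizontal plane shooting after each turn,
    then flip up and take two vertical shots 180 degrees apart."""
    steps = [("shoot", "horizontal", 0)]
    for angle in (90, 180, 270):
        steps.append(("rotate_horizontal", 90))
        steps.append(("shoot", "horizontal", angle))
    steps.append(("flip_up",))
    steps.append(("shoot", "vertical", 0))
    steps.append(("rotate_vertical", 180))
    steps.append(("shoot", "vertical", 180))
    return steps
```

The trace yields exactly six shots: four in the horizontal plane and two in the vertical direction.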
  • S1300 Merge the at least two two-dimensional images according to a spatial direction represented by each of the two-dimensional images to generate a VR image.
  • the captured two-dimensional images are stitched according to the spatial direction each represents: the four horizontal images are joined end to end in their shooting order to form a closed loop, and the two vertical pictures are then placed above and below the horizontal ring of photos to complete the VR image composition.
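A minimal sketch of this assembly step, treating images as NumPy arrays. Real stitching would also need projection and seam blending, which this deliberately omits:

```python
import numpy as np

def assemble_vr_layout(horizontal: list, top: np.ndarray,
                       bottom: np.ndarray) -> dict:
    """Join the four horizontal shots side by side in shooting order to
    form the closed 360-degree ring, and keep the two vertical shots as
    the views placed above and below the ring."""
    ring = np.concatenate(horizontal, axis=1)
    return {"ring": ring, "top": top, "bottom": bottom}
```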
  • the foregoing embodiment enables the terminal for image acquisition to collect two-dimensional images in each spatial direction in an all-round manner through the set spatial shooting order, and then splices the two-dimensional images according to the spatial directions represented by them to form a VR image.
  • the adoption of the above-mentioned technology enables a user to capture and synthesize VR images in a house through a portable mobile terminal, and to obtain a three-dimensional picture in the house without consuming energy or financial resources to perform three-dimensional modeling for the house. It can meet the user's needs for VR image acquisition of all houses, or for rapid synthesis of VR images in other application scenarios.
  • the portable shooting terminal is preset with a starting position when shooting, and the portable shooting terminal rotates from the starting position in turn when shooting, and returns to the starting position after shooting is completed.
  • FIG. 2 is a schematic flowchart of a VR image generating method with a preset starting position in this embodiment.
  • step S1200 includes the following steps:
  • the starting shooting direction of the space shooting order is any direction in the horizontal direction.
  • the starting position is set in the horizontal direction. Since the total shooting angle in the horizontal direction is 360 °, the position of the starting shooting order can be at any point in the horizontal direction.
  • the spatial direction the starting position points to depends on the angle at which the 720° gimbal is installed.
  • the user can adjust the position of the portable shooting terminal in the horizontal direction (by rotating the 720° gimbal as a whole in the horizontal direction) to adjust the direction the starting position faces.
  • starting from the starting position, the portable shooting terminal sequentially captures two-dimensional images in the different spatial directions of the space.
  • after the sequential acquisition ends, the 720° gimbal drives the portable shooting terminal back to the starting position, completing the acquisition of two-dimensional images in different spatial directions.
  • by setting the starting position, it is convenient for the user to determine the spatial direction the starting position faces, that is, to determine the starting two-dimensional picture of the synthesized three-dimensional photo.
  • FIG. 3 is a schematic flowchart of naming a two-dimensional image in this embodiment.
  • after step S1212, the following steps are further included:
  • the naming takes the time information at shooting plus the shooting batch and shooting order, i.e. name = time information + batch + shooting order. For example, 201804030001-01 indicates the first two-dimensional picture of the first batch on April 3, 2018. If the starting shooting position is in the horizontal direction, the photo is: the first horizontal two-dimensional picture of the first batch on April 3, 2018.
  • the collected two-dimensional images are named in the order of image acquisition.
  • the two-dimensional images are named in turn according to the set image coding rules, and the result of the naming will be used as the basis for VR image synthesis.
  • by encoding and naming the pictures according to the rule at acquisition time, it is convenient to call up images during VR image synthesis and to identify the spatial direction represented by each two-dimensional image.
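The naming rule above can be sketched as follows. The description gives the rule as name = time information + batch + shooting order; the four-digit batch width is inferred from the example 201804030001-01:

```python
from datetime import date

def image_name(shoot_date: date, batch: int, index: int) -> str:
    """Name = time information + batch + shooting order; e.g. the first
    picture of the first batch on 2018-04-03 is '201804030001-01'."""
    return f"{shoot_date:%Y%m%d}{batch:04d}-{index:02d}"
```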
  • FIG. 4 is a schematic diagram of a basic process for cutting an image of an intersection region in this embodiment.
  • after step S1300, the following steps are further included:
  • Image processing technology is used to identify overlapping image areas in adjacent two-dimensional images.
  • FIG. 5 is a schematic flowchart of identifying a cross image area in this embodiment.
  • step S1310 includes the following steps:
  • image processing is performed on a two-dimensional image.
  • the image processing steps include steps of graying, noise reduction, binarization, character segmentation, and normalization.
  • y = (x - MinValue) / (MaxValue - MinValue). Note: x and y are the values before and after the conversion, respectively; MaxValue and MinValue are the maximum and minimum values of the sample.
  • the normalization code is:
  • I=double(I);
  • maxvalue=max(max(I)');%max
  • after the maximum value of each column of the matrix is found and formed into a single-row array, transposing converts the row into a column, and a further max then yields the single largest value; without the transpose, only the maximum of each column can be obtained.
  • the pixel values in the picture can be read directly without using the normalization process, but the image data after the normalization process is reduced, which can speed up the calculation.
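A sketch of the linear normalization formula above in Python (the document's own snippet is MATLAB-style); it maps pixel values into [0, 1]:

```python
import numpy as np

def min_max_normalize(pixels: np.ndarray) -> np.ndarray:
    """Linear conversion y = (x - MinValue) / (MaxValue - MinValue)."""
    pixels = pixels.astype(np.float64)
    min_value, max_value = pixels.min(), pixels.max()
    return (pixels - min_value) / (max_value - min_value)
```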
  • the pixel value in a two-dimensional image is extracted, and the pixel value is converted into an array, and then the similarity comparison is performed based on the Hamming distance of the image edge array.
  • specifically, similarity is judged by computing the Hamming distance between arrays of the same row or column in two adjacent two-dimensional images.
  • the threshold of the Hamming distance can be set according to the actual application. The smaller the Hamming-distance threshold, the higher the required similarity; the larger the threshold, the lower the required similarity.
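The edge comparison described above can be sketched as follows. The function names are illustrative, and the threshold is application-dependent as the text notes:

```python
def hamming_distance(a, b) -> int:
    """Number of positions at which two equal-length arrays differ."""
    return sum(x != y for x, y in zip(a, b))

def edges_similar(edge_a, edge_b, threshold: int) -> bool:
    """Edge rows or columns whose Hamming distance is within the threshold
    are treated as similar, i.e. candidates for the cross-image area."""
    return hamming_distance(edge_a, edge_b) <= threshold
```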
  • S1320 Crop and cut off the cross image area in the two-dimensional image, so that the adjacent two-dimensional image stitching edges have no overlapping area.
  • when rows or columns in two adjacent two-dimensional images are the same or similar, the rows or columns of pixels with the same or similar features constitute the cross-image area of the adjacent two-dimensional images.
  • the cross image area of the adjacent two-dimensional image is subjected to image clipping so that there is no overlapping area between the adjacent two-dimensional images.
  • Recognition and image cropping of the intersection image areas of the edge positions in adjacent two-dimensional images can prevent overlapping areas at the stitching positions of adjacent two-dimensional images, and make the transition of adjacent two-dimensional images smoother.
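Once the matching edge columns are known, the crop itself is a simple slice. This sketch assumes the overlap sits on the right edge of the left-hand image:

```python
import numpy as np

def crop_overlap(image: np.ndarray, overlap_cols: int) -> np.ndarray:
    """Cut the overlapping columns off the stitching edge so adjacent
    images share no pixel columns at the seam."""
    return image[:, :-overlap_cols] if overlap_cols > 0 else image
```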
  • when the cross-image area occupies a large proportion of a two-dimensional image, it indicates that the rotation angle of the gimbal deviated during two-dimensional image acquisition, making the proportion of the cross-image area in some adjacent two-dimensional images too large and distorting the space of the synthesized VR image. Therefore, before cropping the cross-image area of the two-dimensional image, the proportion of the cross-image area needs to be identified.
  • FIG. 6 is a schematic diagram of a specific process for identifying a cross image region in this embodiment.
  • as shown in FIG. 6, after step S1310, the following steps are further included:
  • when the pixel values of the cross-image area are obtained, the pixel values of a single two-dimensional image are also obtained. Because the same equipment is used for shooting, the pixel counts of the two-dimensional images are approximately the same; therefore, the total pixel value of a two-dimensional image can be obtained by reading the pixel values of one two-dimensional image.
  • software libraries such as GDI+, OpenCV, CxImage, and FreeImage can also be used.
  • the pixel values of the cross image area are compared with the total pixel values of the two-dimensional image to obtain the percentage data of the pixel values of the cross image area and the total pixel values of the two-dimensional image.
  • the first comparison threshold is a preset percentage threshold. In this embodiment, the first comparison threshold is 30%, but it is not limited thereto.
  • the first comparison threshold can be set according to the specific application environment. When the color tones in the image are relatively uniform and the spatial layout is similar, the value of the first comparison threshold can be increased; conversely, when the color tones of the shooting environment are complex and the spatial layout differs greatly between spatial directions, the value of the first comparison threshold can be reduced.
  • otherwise, when the proportion does not exceed the first comparison threshold, step S1320 is performed.
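The proportion check can be summarized in one predicate; the 30% default mirrors the first comparison threshold of this embodiment:

```python
def needs_reshoot(cross_area_pixels: int, total_pixels: int,
                  first_comparison_threshold: float = 0.30) -> bool:
    """True when the cross-image area's share of the total pixels exceeds
    the first comparison threshold, so the image must be re-acquired."""
    return cross_area_pixels / total_pixels > first_comparison_threshold
```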
  • the angle value of each rotation is 90 °.
  • FIG. 7 is a schematic flowchart of an angle rotation method when a two-dimensional image is spliced in this embodiment.
  • step S1300 specifically includes the following steps:
  • the included angle between two adjacent two-dimensional images is 90 °, and the stitching and synthesis of VR images is completed.
  • S1333 Perform image stitching on the boundaries of adjacent two-dimensional images to form the VR image.
  • the captured two-dimensional images are stitched according to the spatial direction represented by each two-dimensional image, that is, the four images in the horizontal direction are closed in accordance with their shooting order to form a closed-loop state, and then the two vertical images are respectively Placed above and below the circular photo in the horizontal direction to complete the VR image composition.
  • FIG. 8 is a block diagram of the basic structure of the VR image generating device of this embodiment.
  • a VR image generating device includes: an acquisition module 2100, a processing module 2200, and an execution module 2300.
  • the obtaining module 2100 is used to obtain a preset spatial shooting order in the horizontal and vertical directions;
  • the processing module 2200 is used to sequentially obtain two-dimensional images in different spatial directions according to the spatial shooting order;
  • the execution module 2300 is used to stitch the two-dimensional images according to the spatial directions they represent to generate a VR image.
  • the VR image generation device, through the set spatial shooting order, enables the image-acquiring terminal to collect two-dimensional images in every spatial direction in an all-around manner, and then stitches the two-dimensional images according to the spatial directions they represent to form a VR image.
  • the adoption of the above-mentioned technology enables a user to capture and synthesize VR images in a house through a portable mobile terminal, and to obtain a three-dimensional picture in the house without consuming energy or financial resources to perform three-dimensional modeling for the house. It can meet the user's needs for VR image acquisition of all houses, or for rapid synthesis of VR images in other application scenarios.
  • the starting shooting direction of the space shooting order is any direction within the horizontal direction;
  • the VR image generation device further includes: a first processing sub-module and a first execution sub-module.
  • the first processing sub-module is used to rotate sequentially along the starting shooting direction of the spatial shooting order according to a preset rotation angle;
  • the first execution sub-module is used to sequentially capture the two-dimensional image in the space direction after rotation.
  • the VR image generating apparatus further includes a first acquisition sub-module and a first named sub-module.
  • the first obtaining sub-module is used to obtain a preset image encoding rule;
  • the first naming sub-module is used to name a two-dimensional image according to the image encoding rule, and the order of naming is performed according to the collection order of the two-dimensional image.
  • the VR image generating device further includes: a second acquisition sub-module and a second processing sub-module.
  • the second acquisition sub-module is used to acquire the cross-image area where pixels in two adjacent two-dimensional images overlap; the second processing sub-module is used to crop the cross-image area out of the two-dimensional images so that the stitching edges of adjacent two-dimensional images have no overlapping area.
  • the VR image generating device further includes a third acquisition sub-module and a first calculation sub-module.
  • the third acquisition sub-module is used to obtain a pixel matrix of a two-dimensional image;
  • the first calculation sub-module is used to calculate the identical columns or rows in two adjacent pixel matrices, where the image represented by the identical columns or rows in the pixel matrices of the two adjacent two-dimensional images is the cross-image area of pixel overlap.
  • the VR image generating apparatus further includes a fourth acquisition sub-module, a third processing sub-module, and a second execution sub-module.
  • the fourth acquisition sub-module is used to acquire the pixel values of the cross-image area;
  • the third processing sub-module is used to compare the pixel values of the cross-image area with the total pixel value of the two-dimensional image;
  • the second execution sub-module is used to re-acquire a two-dimensional image when the ratio of the pixel value of the cross-image area to the total pixel value is greater than a preset first comparison threshold.
  • the VR image generating apparatus further includes a fifth acquisition sub-module and a fourth processing sub-module.
  • the fifth acquisition sub-module is used to sequentially acquire two-dimensional images according to the spatial shooting order; the fourth processing sub-module is used to rotate the angle between two adjacent two-dimensional images clockwise to 90 °.
  • FIG. 9 is a block diagram of the basic structure of the computer device of this embodiment.
  • the computer device includes a processor, a nonvolatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • the database may store control information sequences.
  • when the computer-readable instructions are executed by the processor, the processor may implement a VR image generation method.
  • the processor of the computer equipment is used to provide computing and control capabilities to support the operation of the entire computer equipment.
  • the memory of the computer device may store computer-readable instructions.
  • when the computer-readable instructions stored in the memory are executed by the processor, they may cause the processor to execute a VR image generation method.
  • the network interface of the computer equipment is used to connect and communicate with the terminal.
  • FIG. 9 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer equipment to which the solution of the present application is applied; the specific computer equipment may include more or fewer parts than shown in the figure, combine certain parts, or have a different arrangement of parts.
  • the processor is configured to execute the specific content of the acquisition module 2100, the processing module 2200, and the execution module 2300 in FIG. 8.
  • the memory stores program codes and various types of data required to execute the modules.
  • the network interface is used for data transmission to user terminals or servers.
  • the memory in this embodiment stores the program code and data required for executing all sub-modules in the VR image generating device, and the server can call the program code and data to execute the functions of all the sub-modules.
  • the computer equipment sets the spatial shooting order so that the terminal for image acquisition can collect two-dimensional images in each spatial direction in an all-round way, and then stitch the two-dimensional images according to the spatial directions they represent to form a VR image.
  • the present application also provides a storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the steps of the VR image generation method according to any one of the foregoing embodiments.
  • the computer program can be stored in a computer-readable storage medium.
  • the foregoing storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (Random Access Memory, RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A VR image generation method, device, computer equipment, and storage medium. The method includes the following steps: obtaining a preset spatial shooting order in the horizontal and vertical directions (S1100); sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order (S1200); and stitching the at least two two-dimensional images according to the spatial directions represented by each image to generate a VR image (S1300). With this technique, a user can shoot and synthesize VR images of a house interior through a portable mobile terminal, obtaining a three-dimensional picture of the house without spending effort or money on three-dimensional modeling of the house. This can meet users' needs for VR image acquisition of any house, or for rapid synthesis of VR images in other application scenarios.

Description

VR image generation method, device, computer equipment and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on May 23, 2018, with application number 201810501389.6 and invention title "VR image generation method, device, computer equipment and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of intelligent prompts, and in particular to a VR image generation method, device, computer equipment, and storage medium.
Background
With today's digital development, all information can be presented as computer panoramas. The three-dimensional visualized house-selection systems developed at this stage integrate virtual model rooms that can be viewed at will: by operating a keyboard and mouse, a home buyer can roam through a virtual model room from multiple angles with a strong sense of being on site, experiencing the warmth of the future "home" through interactive operation.
In the three-dimensional visualized house-selection systems of the prior art, a three-dimensional model of the house interior must first be produced before the virtual three-dimensional interior scene can be viewed through a VR device.
The inventor of this application found through research that the three-dimensional visualized house-selection system in the prior art first needs to construct a three-dimensional model of the house interior and is therefore time-consuming and labor-intensive. Moreover, because three-dimensional modeling is costly, the number of room types that can be visualized in three dimensions is limited; it is impossible to display every house through three-dimensional modeling, so only a few representative room types can be modeled, and personalized three-dimensional house-viewing cannot be provided to users. Therefore, the three-dimensional visualized house-selection system in the prior art is time-consuming and laborious to construct, has a narrow scope of application, and cannot meet users' diverse needs.
Summary
The embodiments of the present application provide a VR image generation method, device, computer equipment, and storage medium capable of collecting and synthesizing VR images through a portable mobile terminal.
To solve the above technical problem, one technical solution adopted by the embodiments of the present application is to provide a VR image generation method, including the following steps: obtaining a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial directions represented by each image to generate a VR image.
To solve the above technical problem, an embodiment of the present application further provides a VR image generation device, including: an acquisition module, used to obtain a preset spatial shooting order in the horizontal and vertical directions; a processing module, used to sequentially acquire at least two two-dimensional images in different spatial directions according to the spatial shooting order; and an execution module, used to stitch the at least two two-dimensional images according to the spatial directions represented by each image to generate a VR image.
To solve the above technical problem, an embodiment of the present application further provides a computer device including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the following steps of a VR image generation method: obtaining a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial directions represented by each image to generate a VR image.
To solve the above technical problem, an embodiment of the present application further provides a non-volatile storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the following steps of a VR image generation method: obtaining a preset spatial shooting order in the horizontal and vertical directions; sequentially acquiring at least two two-dimensional images in different spatial directions according to the spatial shooting order; and stitching the at least two two-dimensional images according to the spatial directions represented by each image to generate a VR image.
In the VR image generation solution provided by this application, a user can shoot and synthesize VR images of a house interior through a portable mobile terminal, obtaining a three-dimensional picture of the house without spending effort or money on three-dimensional modeling of the house. This can meet users' needs for VR image acquisition of any house, or for rapid synthesis of VR images in other application scenarios.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the basic flow of the VR image generation method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of the VR image generation method with a preset starting position according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of naming the two-dimensional images according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of the basic flow of cutting the image of the cross region according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of identifying the cross-image area according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of the specific flow of identifying the proportion of the cross-image area according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of the angle rotation method during two-dimensional image stitching according to an embodiment of the present application;
FIG. 8 is a block diagram of the basic structure of the VR image generation device according to an embodiment of the present application; and
FIG. 9 is a block diagram of the basic structure of the computer device according to an embodiment of the present application.
Detailed Description
Those skilled in the art will understand that the terms "terminal" and "terminal equipment" used herein include devices with wireless signal receivers, devices that have only receive-capable wireless signal receivers without transmit capability, and devices with both receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such equipment may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, fax, and/or data communication capabilities; PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver. The "terminal" or "terminal equipment" used herein may be portable, transportable, installed in a vehicle (air, sea, and/or land), or suitable and/or configured to operate locally and/or in a distributed fashion at any other location on Earth and/or in space. The "terminal" or "terminal equipment" used herein may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, and may also be a device such as a smart TV or a set-top box.
In this embodiment, the terminal includes a 720° gimbal and a portable shooting terminal (smartphone, camera, PDA, etc.), where the portable shooting terminal is mounted on the 720° gimbal, and the 720° gimbal can carry the portable shooting terminal through a full range of rotation.
The VR image generation method in this embodiment is not limited to VR image synthesis inside a house; this technical solution can be applied to VR image synthesis in real environments.
Specifically, please refer to FIG. 1, which is a schematic flowchart of the basic flow of the VR image generation method of this embodiment.
As shown in FIG. 1, a VR image generation method includes the following steps:
S1100: Obtain a preset spatial shooting order in the horizontal and vertical directions;
The spatial shooting order is the preset order in which the portable shooting terminal shoots. For example, during spatial shooting, shooting may first proceed in the horizontal direction and then in the vertical direction, or first in the vertical direction and then in the horizontal direction, or alternate between the horizontal and vertical directions.
The number and magnitude of the portable shooting terminal's rotations in different planes are limited by the angle of its lens. For example, a portable shooting terminal with a normal lens needs to collect 4 photos in the horizontal plane and 2 photos in the vertical direction; the 720° gimbal therefore needs to rotate 3 times in the horizontal direction, 90° per rotation, and once in the vertical direction. When a 150° wide-angle lens is used for shooting, 3 photos need to be collected in the horizontal plane and 2 photos in the vertical direction; the 720° gimbal then needs to rotate twice in the horizontal direction, 150° per rotation, and once in the vertical direction.
In some embodiments, before obtaining the spatial shooting order, the user first inputs the model of the portable shooting terminal used or the shooting angle of its lens, and an appropriate spatial shooting order is then adapted.
S1200、根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像;
根据空间拍摄次序依次的获取不同空间方向上的二维图像。具体地,便携式拍摄终端拍在水平面内需要采集4张照片,在竖直方向内采集2张照片的空间拍摄次序为例进行说明。
在室内进行拍摄时,将720°云台放置在拍摄的空间内相对中央的位置,然后将便携式拍摄终端的拍摄方向调整至水平方向内的任意角度后,进行图像获取时便携式拍摄终端首先获取初始位置的二维图片,然后在水平面上依次转动90°,每转动一次后进行一次拍摄然后继续进行转动。当水平面内的四张照片均采集完成后,720°云台向上翻转,便携式采集设备采集竖直方向内的第一张照片,然后云台旋转180°拍摄竖直方向另一侧的照片,至此,二维图片的采集结束。
S1300、将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像。
将拍摄的二维图像,按照每个二维图像表征的空间方向进行拼接,即将水平方向内的4张图像按照其拍摄次序首尾相接形成闭环状态,然后将竖直方向上的两张图片分别放置在水平方向内环状照片的上方和下方,完成VR图像的合成。
上述实施方式通过设定的空间拍摄次序,使进行图像采集的终端能够全方位地采集各个空间方向上的二维图像,然后将二维图像按照其表征的空间方向进行图像拼接,形成VR图像。采用上述技术,使用户通过便携式移动终端就能够拍摄并合成房屋内的VR图像,无需耗费精力或财力为该房屋进行三维建模,就能够得到房屋内的三维图片,能够满足用户对所有房屋的VR图像采集需求,以及其他应用场景中对VR图像快速合成的需求。
在一些实施方式中,便携式拍摄终端拍摄时预设有起始位置,便携式拍摄终端拍摄时由起始位置开始依次转动,拍摄完成后回归至起始位置。具体请参阅图2,图2为本实施例预设起始位置的VR图像生成方法的流程示意图。
如图2所示,步骤S1200具体包括下述步骤:
S1211、根据预设的旋转角度沿所述空间拍摄次序的起始拍摄方向依次进行旋转;
空间拍摄次序的起始拍摄方向为水平方向内的任一方向。起始位置设置在水平方向内,由于水平方向内总共需要拍摄的角度为360°,因此起始拍摄位置能够是水平方向内的任意一点。
设置720°云台的起始位置,该起始位置指向的空间方向与安装720°云台的角度有关,用户能够通过调整(在水平方向内整体旋转720°云台)便携式拍摄终端在水平方向内的位置,调整起始位置面向的方向。
S1212、依次采集旋转后空间方向上的二维图像。
由起始位置开始便携式拍摄终端依次采集空间内不同空间方向内的二维图像。依次采集结束后,720°云台带动便携式拍摄终端回归至起始位置,完成对不同空间方向上二维图像的采集。
通过设置起始位置,方便用户确定起始位置面向的空间方向,即确定合成的三维照片的起始二维图片。
在一些实施方式中,为方便后期的三维图片合成,需要在对二维图像进行采集时进行编码命名。具体地,请参阅图3,图3为本实施例对二维图像进行命名的流程示意图。
如图3所示,步骤S1212之后还包括下述步骤:
S1221、获取预设的图像编码规则;
为方便后期的三维图片合成,需要在对二维图像进行采集时进行编码命名,具体地,命名方式采用拍摄时的时间信息加拍摄批次和拍摄次序,即名称=时间信息+批次+拍摄次序。
举例说明,201804030001-01,该名称表明二维图片是在2018年4月3日第一批次的第一张二维图片。若起始拍摄位置在水平方向内,则表明该照片为:2018年4月3日第一批次的第一张水平方向内的二维图片。
S1222、根据所述图像编码规则对所述二维图像进行命名,其中,所述命名的次序根据所述二维图像的采集顺序依次进行。
根据设定的图像编码规则,按照图像采集的顺序对采集的二维图像进行命名。
通过设定的图像编码规则对二维图像依次进行命名,该命名的结果将作为VR图像合成的依据。
通过在采集二维图像时,对图片进行规则编码命名,使其方便VR图像合成时对图像进行调用,以及识别不同二维图像所表征的空间方向。
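按“名称=时间信息+批次+拍摄次序”的规则,命名函数可写成如下示意(函数名与参数为自拟,日期格式按文中示例假定为YYYYMMDD、批次4位、次序2位):

```python
from datetime import date

def name_image(shoot_date, batch, order):
    """按"名称=时间信息+批次+拍摄次序"的规则生成二维图像名称。"""
    return f"{shoot_date:%Y%m%d}{batch:04d}-{order:02d}"

print(name_image(date(2018, 4, 3), 1, 1))  # 201804030001-01
```

VR图像合成时即可按该名称的末两位次序依次调用各二维图像。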
在一些实施方式中,二维图像在进行拼接合成时,需要对二维图像中重叠的图像区域进行剪切,以使合成的三维图片的边缘位置无重叠区域,实现平滑拼接。具体请参阅图4,图4为本实施例剪切交叉区域图像的基本流程示意图。
如图4所示,步骤S1300之后还包括下述步骤:
S1310、获取两个相邻二维图像中像素重叠的交叉图像区域;
采用图像处理技术对相邻二维图像中重叠的交叉图像区域进行识别。
具体地,识别方法请详见图5,图5为本实施例识别交叉图像区域的流程示意图。
如图5所示,步骤S1310包括下述步骤:
S1311、获取所述二维图像的像素矩阵;
首先对二维图像进行图像处理,图像处理的步骤包括:灰度化、降噪、二值化、字符切分以及归一化等步骤。
对图片进行灰度归一化处理,将各图片中0-255之间取值的图像像素通过线性函数转换,表达式如下:
y=(x-MinValue)/(MaxValue-MinValue),其中,x、y分别为转换前、后的值,MaxValue、MinValue分别为样本的最大值和最小值。
归一化代码为:
I = double(I);
maxvalue = max(max(I)'); % 全局最大值
minvalue = min(min(I)'); % 全局最小值
I = (I - minvalue) / (maxvalue - minvalue); % 按上述线性函数归一化
其中,max(max(I)')先求出矩阵每列的最大值并组成一个行向量,转置后再取max,即得到整个矩阵的最大值。
需要指出的是在一些实施方式中,也能够不采用归一化处理,直接读取图片中的像素值,但归一化处理后的图像数据减小,能够加快计算。
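上述线性归一化在Python中可写成如下等价示意(以嵌套列表表示灰度像素矩阵,函数名为自拟):

```python
def normalize(matrix):
    """线性归一化:y = (x - MinValue) / (MaxValue - MinValue),将像素值映射到[0,1]。"""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                          # 全图像素相同时避免除零,返回全0
        return [[0.0 for _ in row] for row in matrix]
    return [[(v - lo) / (hi - lo) for v in row] for row in matrix]

print(normalize([[0, 128], [255, 64]]))
```

归一化后的取值范围固定,便于后续按统一阈值比较边缘相似度。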
S1312、计算两个相邻二维图像的像素矩阵中相同的列或行,其中,相同的列或行表征的图像为像素重叠的交叉图像区域。
提取各二维图像中的像素数值,并将该像素数值转化为数组,然后通过图像边缘数组的汉明距离进行相似度比较。
根据汉明距离计算两个相邻二维图像中相同的行或列的数组之间的汉明距离,当该汉明距离小于30(不限于此)时,判定两者相似。汉明距离的阈值能够根据实际应用进行设定,阈值越小则表明要求的相似度越高,阈值越大则表明要求的相似度越低。
S1320、将所述交叉图像区域在所述二维图像中裁剪切除,以使所述相邻二维图像拼接边缘无重叠区域。
当相邻两张二维图像中的行或列相同或相似时,该相同或相似的行或列表征的像素区域即为相邻二维图像的交叉图像区域。将该相邻二维图像的交叉图像区域进行图像剪切,使相邻二维图像之间不具有重叠区域。
通过对相邻二维图像中边缘位置的交叉图像区域进行识别和图像裁剪,能够使相邻二维图像拼接位置处不出现重叠区域,使相邻二维图像拼接过渡更加平滑。
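边缘行/列的汉明距离比较可用如下代码示意(阈值30沿用文中示例,像素值假定已量化为整数数组,函数名为自拟):

```python
def hamming(a, b):
    """返回两等长数组中取值不同的元素个数,即汉明距离。"""
    return sum(1 for x, y in zip(a, b) if x != y)

def is_cross_region(edge_a, edge_b, threshold=30):
    """当两条边缘行/列像素数组的汉明距离小于阈值时,判定其属于交叉图像区域。"""
    return hamming(edge_a, edge_b) < threshold

print(is_cross_region([10, 20, 30], [10, 20, 30]))  # True,两列完全相同
```

阈值取得越小,判定为交叉区域所要求的相似度就越高,与文中说明一致。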
在一些实施方式中,若交叉图像区域在二维图像中的占比较大,则表明在进行二维图像采集时,云台转动的角度出现了偏差,致使部分相邻二维图像中的交叉图像区域占比过大,导致合成后的VR图像空间发生扭曲。因此,在进行二维图像交叉图像区域的裁剪之前,需要对交叉图像区域的占比进行识别。具体请参阅图6,图6为本实施例对交叉图像区域进行占比识别的具体流程示意图。
如图6所示,步骤S1310之后还包括下述步骤:
S1313、获取所述交叉图像区域的像素值;
获取相邻二维图像之间的交叉图像区域,并通过GDI+、OpenCV、CxImage、FreeImage等软件,对交叉图像区域像素值进行提取。
S1314、将所述交叉图像区域的像素值与所述二维图像的总像素值进行比对;
获取到交叉图像区域的像素值后,再获取单个二维图像的像素值。由于拍摄所采用的设备一致,各二维图像中的像素值也大致相同,所以能够通过获取一张二维图像的像素值得到二维图像的总像素值。获取二维图像的像素值同样采用GDI+、OpenCV、CxImage、FreeImage等软件。
将交叉图像区域的像素值与二维图像的总像素值进行比对,得到交叉图像区域的像素值与二维图像的总像素值的百分比数据。
S1315、当所述交叉图像区域的像素值与所述总像素值的比值大于预设的第一比较阈值时,重新获取所述二维图像。
第一比较阈值为设定的百分比阈值,在本实施方式中,第一比较阈值为30%,但不限于此。第一比较阈值能够根据具体的应用环境进行设定:当拍摄环境中的色调较为单一,且空间布局相似度较大时,能够提高第一比较阈值的数值;相反,当拍摄环境色调复杂且不同空间方向上的空间布局相差较大时,能够降低第一比较阈值的数值。
当交叉图像区域的像素值与二维图像的总像素值的百分比数据大于第一比较阈值时,表明采集的二维图像会使拼接后的VR图像出现扭曲,需要对空间内的二维图像进行重新采集;当交叉图像区域的像素值与二维图像的总像素值的百分比数据小于第一比较阈值时,则执行步骤S1320。
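上述占比判定逻辑可用如下示意代码表示(函数名为自拟,阈值按本实施方式取30%):

```python
def need_recapture(cross_pixels, total_pixels, threshold=0.30):
    """当交叉图像区域像素占总像素的比例大于第一比较阈值时返回True,表示需重新采集。"""
    return cross_pixels / total_pixels > threshold

print(need_recapture(400, 1000))  # True:占比40%,拼接后VR图像可能扭曲,需重新采集
print(need_recapture(200, 1000))  # False:占比20%,可继续执行步骤S1320的裁剪
```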
在一些实施方式中,云台进行二维图像采集时,每次旋转的角度值为90°,在进行VR图像拼接时,需要对二维图像进行对应的旋转,然后完成拼接。具体请参阅图7,图7为本实施例二维图像拼接时角度旋转方法的流程示意图。
如图7所示,步骤S1300具体包括下述步骤:
S1331、根据所述图像编码规则依次获取所述各二维图像;
根据拍摄的次序,依次获取存储在本地存储空间内的二维图像。
S1332、将两个相邻二维图像的夹角顺时针旋转至90°。
将位于第一张二维图像之后的二维图像依次进行顺时针90°旋转,使其与第一张二维图像垂直,然后将两张图片的边缘进行对齐完成图像拼接,以此类推,通过顺时针旋转使两两相邻的二维图像之间的夹角均为90°,进而完成VR图像的拼接合成。
S1333、将相邻二维图像的边界进行图像拼接形成所述VR图像。
将拍摄的二维图像,按照每个二维图像表征的空间方向进行拼接,即将水平方向内的4张图像按照其拍摄次序首尾相接形成闭环状态,然后将竖直方向上的两张图片分别放置在水平方向内环状照片的上方和下方,完成VR图像的合成。
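水平方向内裁去交叉图像区域后的首尾拼接可用如下极简示意表示(以嵌套列表模拟灰度图,竖直方向两张图片的贴合未示出,函数名为自拟):

```python
def stitch_horizontal(images, overlap_cols=0):
    """将按拍摄次序排列的水平方向图像(行×列嵌套列表)
    裁去每张右侧的交叉图像区域后按列拼接,首尾相接即构成水平环带。"""
    rows = len(images[0])
    stitched = [[] for _ in range(rows)]
    for img in images:
        keep = len(img[0]) - overlap_cols   # 每张图像保留的列数
        for r in range(rows):
            stitched[r].extend(img[r][:keep])
    return stitched

a = [[1, 1, 9], [1, 1, 9]]  # 9 代表与右侧相邻图像重叠的列
b = [[2, 2, 9], [2, 2, 9]]
print(stitch_horizontal([a, b], overlap_cols=1))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

实际实现中重叠列数由前述交叉图像区域识别步骤逐对给出,而非固定值。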
为解决上述技术问题,本申请实施例还提供一种VR图像生成装置。具体请参阅图8,图8为本实施例VR图像生成装置基本结构框图。
如图8所示,一种VR图像生成装置,包括:获取模块2100、处理模块2200和执行模块2300。其中,获取模块2100用于获取预设的水平方向和竖直方向内的空间拍摄次序;处理模块2200用于根据空间拍摄次序依次获取不同空间方向上的二维图像;执行模块2300用于将二维图像按照其表征的空间方向进行拼接生成VR图像。
VR图像生成装置通过设定的空间拍摄次序,使进行图像采集的终端能够全方位地采集各个空间方向上的二维图像,然后将二维图像按照其表征的空间方向进行图像拼接,形成VR图像。采用上述技术,使用户通过便携式移动终端就能够拍摄并合成房屋内的VR图像,无需耗费精力或财力为该房屋进行三维建模,就能够得到房屋内的三维图片,能够满足用户对所有房屋的VR图像采集需求,以及其他应用场景中对VR图像快速合成的需求。
在一些实施方式中,空间拍摄次序的起始拍摄方向为水平方向内的任一方向;VR图像生成装置还包括:第一处理子模块和第一执行子模块。其中,第一处理子模块用于根据空间拍摄次序沿起始拍摄方向依次进行旋转;第一执行子模块用于依次采集旋转后空间方向上的二维图像。
在一些实施方式中,VR图像生成装置还包括:第一获取子模块和第一命名子模块。其中,第一获取子模块用于获取预设的图像编码规则;第一命名子模块用于根据图像 编码规则对二维图像进行命名,其中,命名的次序根据二维图像的采集顺序依次进行。
在一些实施方式中,VR图像生成装置还包括:第二获取子模块和第二处理子模块。其中,第二获取子模块用于获取两个相邻二维图像中像素重叠的交叉图像区域;第二处理子模块用于将交叉图像区域在二维图像中裁剪切除,以使相邻二维图像拼接边缘无重叠区域。
在一些实施方式中,VR图像生成装置还包括:第三获取子模块和第一计算子模块。其中,第三获取子模块用于获取二维图像的像素矩阵;第一计算子模块用于计算两个相邻像素矩阵中相同的列或行,其中相同的列或行表征的图像为像素重叠的交叉图像区域,两个相邻二维图像的像素矩阵中相同的列或行表征的图像为像素重叠的交叉图像区域。
在一些实施方式中,VR图像生成装置还包括:第四获取子模块、第三处理子模块和第二执行子模块。其中,第四获取子模块用于获取交叉图像区域的像素值;第三处理子模块用于将交叉图像区域的像素值与二维图像的总像素值进行比对;第二执行子模块用于当交叉图像区域的像素值与总像素值的比值大于预设的第一比较阈值时,重新获取二维图像。
在一些实施方式中,VR图像生成装置还包括:第五获取子模块和第四处理子模块。其中,第五获取子模块用于根据空间拍摄次序依次获取二维图像;第四处理子模块用于将两个相邻二维图像的夹角顺时针旋转至90°。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图9,图9为本实施例计算机设备基本结构框图。
图9为计算机设备的内部结构示意图。如图9所示,该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、存储器和网络接口。其中,该计算机设备的非易失性存储介质存储有操作系统、数据库和计算机可读指令,数据库中可存储有控件信息序列,该计算机可读指令被处理器执行时,可使得处理器实现一种VR图像生成方法。该计算机设备的处理器用于提供计算和控制能力,支撑整个计算机设备的运行。该计算机设备的存储器中可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行一种VR图像生成方法。该计算机设备的网络接口用于与终端连接通信。本领域技术人员可以理解,图9中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
本实施方式中处理器用于执行图8中获取模块2100、处理模块2200和执行模块2300的具体内容,存储器存储有执行上述模块所需的程序代码和各类数据。网络接口用于与用户终端或服务器之间进行数据传输。本实施方式中的存储器存储有VR图像生成装置中执行所有子模块所需的程序代码及数据,服务器能够调用该程序代码及数据执行所有子模块的功能。
计算机设备通过设定的空间拍摄次序,使进行图像采集的终端能够全方位地采集各个空间方向上的二维图像,然后将二维图像按照其表征的空间方向进行图像拼接,形成VR图像。采用上述技术,使用户通过便携式移动终端就能够拍摄并合成房屋内的VR图像,无需耗费精力或财力为该房屋进行三维建模,就能够得到房屋内的三维图片,能够满足用户对所有房屋的VR图像采集需求,以及其他应用场景中对VR图像快速合成的需求。
本申请还提供一种存储有计算机可读指令的存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述任一实施例所述VR图像生成方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。

Claims (20)

  1. 一种VR图像生成方法,包括下述步骤:
    获取预设的水平方向和竖直方向内的空间拍摄次序;
    根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像;
    将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像。
  2. 根据权利要求1所述的VR图像生成方法,所述空间拍摄次序的起始拍摄方向为所述水平方向或竖直方向内的任一方向;
    所述根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像的步骤,具体包括下述步骤:
    根据预设的旋转角度沿所述空间拍摄次序的起始拍摄方向依次进行旋转;
    依次采集旋转后空间方向上的二维图像。
  3. 根据权利要求2所述的VR图像生成方法,所述依次采集旋转后空间方向上的二维图像的步骤之后,还包括下述步骤:
    获取预设的图像编码规则;
    根据所述图像编码规则对所述二维图像进行命名,其中,所述命名的次序根据所述二维图像的采集顺序依次进行。
  4. 根据权利要求3所述的VR图像生成方法,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    根据所述图像编码规则依次获取所述各二维图像;
    将两个相邻二维图像的夹角顺时针旋转至90°;
    将相邻二维图像的边界进行图像拼接形成所述VR图像。
  5. 根据权利要求3或4所述的VR图像生成方法,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    获取两个相邻二维图像中像素重叠的交叉图像区域;
    将所述交叉图像区域在所述二维图像中裁剪切除,以使所述相邻二维图像拼接边缘无重叠区域。
  6. 根据权利要求5所述的VR图像生成方法,所述获取两个相邻二维图像中像素重叠的交叉图像区域的步骤,具体包括下述步骤:
    获取所述二维图像的像素矩阵;
    计算两个相邻像素矩阵中相同的列或行,其中相同的列或行表征的图像为像素重叠的交叉图像区域,所述两个相邻二维图像的像素矩阵中相同的列或行表征的图像为像素重叠的交叉图像区域。
  7. 根据权利要求5所述的VR图像生成方法,获取两个相邻二维图像中像素重叠的交叉图像区域的步骤之后,还包括下述步骤:
    获取所述交叉图像区域的像素值;
    将所述交叉图像区域的像素值与所述二维图像的总像素值进行比对;
    当所述交叉图像区域的像素值与所述总像素值的比值大于预设的第一比较阈值时,重新获取所述二维图像。
  8. 一种VR图像生成装置,包括:
    获取模块,用于获取预设的水平方向和竖直方向内的空间拍摄次序;
    处理模块,用于根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像;
    执行模块,用于将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行一种VR图像生成方法的下述步骤:
    获取预设的水平方向和竖直方向内的空间拍摄次序;
    根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像;
    将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像。
  10. 根据权利要求9所述的计算机设备,所述空间拍摄次序的起始拍摄方向为所述水平方向或竖直方向内的任一方向;
    所述根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像的步骤,具体包括下述步骤:
    根据预设的旋转角度沿所述空间拍摄次序的起始拍摄方向依次进行旋转;
    依次采集旋转后空间方向上的二维图像。
  11. 根据权利要求10所述的计算机设备,所述依次采集旋转后空间方向上的二维图像的步骤之后,还包括下述步骤:
    获取预设的图像编码规则;
    根据所述图像编码规则对所述二维图像进行命名,其中,所述命名的次序根据所述二维图像的采集顺序依次进行。
  12. 根据权利要求11所述的计算机设备,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    根据所述图像编码规则依次获取所述各二维图像;
    将两个相邻二维图像的夹角顺时针旋转至90°;
    将相邻二维图像的边界进行图像拼接形成所述VR图像。
  13. 根据权利要求11或12所述的计算机设备,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    获取两个相邻二维图像中像素重叠的交叉图像区域;
    将所述交叉图像区域在所述二维图像中裁剪切除,以使所述相邻二维图像拼接边缘无重叠区域。
  14. 一种存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行一种VR图像生成方法的下述步骤:
    获取预设的水平方向和竖直方向内的空间拍摄次序;
    根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像;
    将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像。
  15. 根据权利要求14所述的非易失性存储介质,所述空间拍摄次序的起始拍摄方向为所述水平方向或竖直方向内的任一方向;
    所述根据所述空间拍摄次序依次获取至少两张不同空间方向上的二维图像的步骤,具体包括下述步骤:
    根据预设的旋转角度沿所述空间拍摄次序的起始拍摄方向依次进行旋转;
    依次采集旋转后空间方向上的二维图像。
  16. 根据权利要求15所述的非易失性存储介质,所述依次采集旋转后空间方向上的二维图像的步骤之后,还包括下述步骤:
    获取预设的图像编码规则;
    根据所述图像编码规则对所述二维图像进行命名,其中,所述命名的次序根据所述二维图像的采集顺序依次进行。
  17. 根据权利要求16所述的非易失性存储介质,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    根据所述图像编码规则依次获取所述各二维图像;
    将两个相邻二维图像的夹角顺时针旋转至90°;
    将相邻二维图像的边界进行图像拼接形成所述VR图像。
  18. 根据权利要求16或17所述的非易失性存储介质,所述预设的旋转角度为90°,所述将所述至少两张二维图像按照其各二维图像表征的空间方向进行拼接生成VR图像的步骤,具体包括下述步骤:
    获取两个相邻二维图像中像素重叠的交叉图像区域;
    将所述交叉图像区域在所述二维图像中裁剪切除,以使所述相邻二维图像拼接边缘无重叠区域。
  19. 根据权利要求18所述的非易失性存储介质,所述获取两个相邻二维图像中像素重叠的交叉图像区域的步骤,具体包括下述步骤:
    获取所述二维图像的像素矩阵;
    计算两个相邻像素矩阵中相同的列或行,其中相同的列或行表征的图像为像素重叠的交叉图像区域,所述两个相邻二维图像的像素矩阵中相同的列或行表征的图像为像素重叠的交叉图像区域。
  20. 根据权利要求18所述的非易失性存储介质,获取两个相邻二维图像中像素重叠的交叉图像区域的步骤之后,还包括下述步骤:
    获取所述交叉图像区域的像素值;
    将所述交叉图像区域的像素值与所述二维图像的总像素值进行比对;
    当所述交叉图像区域的像素值与所述总像素值的比值大于预设的第一比较阈值时,重新获取所述二维图像。
PCT/CN2018/102873 2018-05-23 2018-08-29 Vr图像生成方法、装置、计算机设备及存储介质 WO2019223158A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810501389.6 2018-05-23
CN201810501389.6A CN108805988A (zh) 2018-05-23 2018-05-23 Vr图像生成方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2019223158A1 true WO2019223158A1 (zh) 2019-11-28

Family

ID=64091465

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102873 WO2019223158A1 (zh) 2018-05-23 2018-08-29 Vr图像生成方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN108805988A (zh)
WO (1) WO2019223158A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147838B (zh) * 2019-05-20 2021-07-02 苏州微创关节医疗科技有限公司 一种产品规格录入、检测方法及系统
CN111754516B (zh) * 2020-05-25 2023-06-30 沈阳工程学院 基于计算机视觉反馈的金红石单晶体生长智能控制方法
CN113360378A (zh) * 2021-06-04 2021-09-07 北京房江湖科技有限公司 一种用于生成vr场景的应用程序的回归测试方法及装置
CN114554108B (zh) * 2022-02-24 2023-10-27 北京有竹居网络技术有限公司 图像处理方法、装置和电子设备
CN114782439B (zh) * 2022-06-21 2022-10-21 成都沃特塞恩电子技术有限公司 培育钻石的生长状态检测方法、装置、系统、电子设备
CN115988322A (zh) * 2022-11-29 2023-04-18 北京百度网讯科技有限公司 生成全景图像的方法、装置、电子设备和存储介质
CN116612012A (zh) * 2023-07-17 2023-08-18 南方电网数字电网研究院有限公司 输电线路图像拼接方法、系统、计算机设备和存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013110A (zh) * 2010-11-23 2011-04-13 李建成 三维全景图像生成方法及系统
CN102984453A (zh) * 2012-11-01 2013-03-20 深圳大学 利用单摄像机实时生成半球全景视频图像的方法及系统
CN103813089A (zh) * 2012-11-13 2014-05-21 联想(北京)有限公司 一种获得图像的方法、电子设备以及辅助旋转装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102560029B1 (ko) * 2016-09-12 2023-07-26 삼성전자주식회사 가상 현실 콘텐트를 송수신하는 방법 및 장치
CN107578373A (zh) * 2017-05-27 2018-01-12 深圳先进技术研究院 全景图像拼接方法、终端设备及计算机可读存储介质
CN107958441B (zh) * 2017-12-01 2021-02-12 深圳市科比特航空科技有限公司 图像拼接方法、装置、计算机设备和存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063034A (zh) * 2019-12-13 2020-04-24 四川中绳矩阵技术发展有限公司 一种时域交互方法
CN111063034B (zh) * 2019-12-13 2023-08-04 四川中绳矩阵技术发展有限公司 一种时域交互方法
CN114268783A (zh) * 2022-01-04 2022-04-01 深圳星月辰网络科技有限公司 一种基于云服务的3d图像处理方法及装置

Also Published As

Publication number Publication date
CN108805988A (zh) 2018-11-13

Similar Documents

Publication Publication Date Title
WO2019223158A1 (zh) Vr图像生成方法、装置、计算机设备及存储介质
US11688034B2 (en) Virtual lens simulation for video and photo cropping
JP6966421B2 (ja) 角度分離されたサブシーンの合成およびスケーリング
US11748906B2 (en) Gaze point calculation method, apparatus and device
US9582731B1 (en) Detecting spherical images
EP2920758B1 (en) Rotation of an image based on image content to correct image orientation
CN102959946A (zh) 基于相关3d点云数据来扩充图像数据的技术
CN107430498B (zh) 扩展照片的视场
CN115690382B (zh) 深度学习模型的训练方法、生成全景图的方法和装置
CN112598780A (zh) 实例对象模型构建方法及装置、可读介质和电子设备
CN111932681A (zh) 房屋信息显示方法、装置和电子设备
CN112988671A (zh) 媒体文件处理方法、装置、可读介质及电子设备
US20080111814A1 (en) Geometric tagging
Zhu et al. Large-scale architectural asset extraction from panoramic imagery
US11100617B2 (en) Deep learning method and apparatus for automatic upright rectification of virtual reality content
Ancona et al. Mobile vision and cultural heritage: the agamemnon project
CN116708862A (zh) 直播间的虚拟背景生成方法、计算机设备及存储介质
CN112584036A (zh) 云台控制方法、装置、计算机设备及存储介质
CN109688381B (zh) Vr监控方法、装置、设备及存储介质
US20220321774A1 (en) Method for assisting the acquisition of media content at a scene
CN113537194A (zh) 光照估计方法、光照估计装置、存储介质与电子设备
CN110381250A (zh) 提示拍照的方法及装置
Zhou et al. Improved YOLOv7 models based on modulated deformable convolution and swin transformer for object detection in fisheye images
Ban et al. Pixel of matter: new ways of seeing with an active volumetric filmmaking system
TW201807598A (zh) 適地性空間物件資料建立方法、顯示方法與應用系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919631

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18919631

Country of ref document: EP

Kind code of ref document: A1