WO2024066208A1 - Method and apparatus, device, and medium for displaying panoramic images of points outside a model - Google Patents

Method and apparatus, device, and medium for displaying panoramic images of points outside a model Download PDF

Info

Publication number
WO2024066208A1
WO2024066208A1 PCT/CN2023/080479 CN2023080479W WO2024066208A1 WO 2024066208 A1 WO2024066208 A1 WO 2024066208A1 CN 2023080479 W CN2023080479 W CN 2023080479W WO 2024066208 A1 WO2024066208 A1 WO 2024066208A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
preset
dimensional coordinate
target space
space model
Prior art date
Application number
PCT/CN2023/080479
Other languages
English (en)
French (fr)
Inventor
李阳
李蕊
Original Assignee
如你所视(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 如你所视(北京)科技有限公司 filed Critical 如你所视(北京)科技有限公司
Publication of WO2024066208A1 publication Critical patent/WO2024066208A1/zh

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06Q 30/0601: Electronic shopping [e-shopping]
    • G06Q 30/0641: Shopping interfaces
    • G06Q 30/0643: Graphical representation of items or shoppers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Definitions

  • the present disclosure relates to the field of VR technology, and in particular to a method, device, equipment, and medium for displaying a panoramic view of points outside a model.
  • for VR of a space with a border (for example, a house), shooting is usually performed inside the space, and the generated points are all interior points.
  • the space model obtained by rendering based on the interior points takes the border as its boundary (for example, a house is bounded by its walls) and presents the state of the space model.
  • for a space that also has shooting points outside it, however, the exterior scenes have no model to serve as a carrier, so these panoramic points are unreachable and imperceptible in the VR panoramic walking mode, which wastes the point resources and defeats the purpose of setting the points.
  • the embodiments of the present disclosure provide a method, device, equipment, and medium for displaying a panoramic view of points outside a model.
  • a method for displaying a panoramic image of a point outside a model, comprising: obtaining a target space model including a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, obtaining, according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and displaying the panoramic image corresponding to the second point in the target space model.
  • a panoramic display device for points outside a model, comprising: a model acquisition module, for obtaining a target space model including multiple first points and at least one guide tag; wherein each of the guide tags corresponds to a second point outside the target space model with a coordinate position, and each of the guide tags corresponds to at least one preset first point; a tag guidance module, for acquiring a panoramic view of the second point corresponding to the guide tag in response to the panoramic display of the target space model reaching the preset first point, according to a received trigger instruction for the guide tag; and a point display module, for initiating display of the panoramic view corresponding to the second point in the target space model.
  • an electronic device characterized in that it includes: a memory for storing a computer program; a processor for executing the computer program stored in the memory, and when the computer program is executed, the panoramic image display method of the model external points described in any of the above embodiments is implemented.
  • a computer-readable storage medium having computer program instructions stored thereon, wherein when the computer program instructions are executed by a processor, the method for displaying a panoramic view of points outside the model as described in any of the above embodiments is implemented.
  • a computer program product including computer program instructions, characterized in that when the computer program instructions are executed by a processor, the processor executes the method for displaying a panoramic image of points outside the model as described in any of the above embodiments.
  • FIG1 is a schematic flow chart of a method for displaying a panoramic image of a point outside a model provided by an exemplary embodiment of the present disclosure
  • FIG2 is a schematic diagram of the position relationship between the second point and the target space model in an example of the present disclosure
  • FIG3 is a flow chart of step 106 in the embodiment shown in FIG1 of the present disclosure.
  • FIG4 is a schematic diagram of displaying a panoramic view of a second point based on a preset viewing angle in an example of the present disclosure
  • FIG5 is a schematic diagram showing a panoramic view obtained by converting the viewing angle of the embodiment shown in FIG1 of the present disclosure
  • FIG6 is a schematic structural diagram of a panoramic image display device for points outside a model provided by an exemplary embodiment of the present disclosure
  • FIG. 7 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • plurality may refer to two or more than two, and “at least one” may refer to one, two, or more than two.
  • the term "and/or" in this disclosure is only a description of the association relationship of associated objects, indicating that there may be three relationships.
  • a and/or B can represent: A exists alone, A and B exist at the same time, and B exists alone.
  • the character "/" in this disclosure generally indicates that the previous and next associated objects are in an "or” relationship.
  • the data referred to in this disclosure may include unstructured data such as text, images, and videos, and may also be structured data.
  • the phrase “entity A initiates action B” may mean that entity A issues an instruction to perform action B, but entity A itself does not necessarily perform the action B.
  • the phrase “the point display module initiates displaying the panoramic image corresponding to the second point” may mean that the point display module causes the display to present the panoramic image corresponding to the second point, and the point display module itself does not need to perform the "presentation" action.
  • the disclosed embodiments can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with many other general or special computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, servers, etc. include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, large computer systems, and distributed cloud computing technology environments including any of the above systems, etc.
  • Electronic devices such as terminal devices, computer systems, servers, etc. can be described in the general context of computer system executable instructions (such as program modules) executed by computer systems.
  • program modules can include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • Computer systems/servers can be implemented in a distributed cloud computing environment, where tasks are performed by remote processing devices linked through a communication network.
  • program modules can be located on local or remote computing system storage media including storage devices.
  • FIG1 is a flow chart of a method for displaying a panoramic view of a point outside a model provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device, as shown in FIG1 , and includes the following steps:
  • Step 102 Obtain a target space model including a plurality of first points and at least one guide tag.
  • Each guide tag corresponds to a second point position whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point position.
  • the target space model may be a model corresponding to a target space with specific boundaries, such as a house or a carriage, wherein the first point is a shooting point inside the target space model.
  • the target space model can be obtained by rendering image data corresponding to multiple first point positions (for example, panoramas, etc.), or by any method for obtaining a space model provided in the prior art;
  • the second point position is a point position outside the target space model, and image data is collected based on each second point position respectively, but each second point position is not based on the target space model, and these second points cannot be reached directly by point switching in the target space model;
  • the guide tag in some embodiments is usually set on the opaque physical boundary of the target space, for example, when the target space is a house, the guide tag is set on the wall; when the target space is a car, the guide tag is set on the physical car wall.
  • Step 104 in response to the panoramic display of the target space model reaching a preset first point, a panoramic image of a second point corresponding to the guide tag is acquired according to a received trigger instruction for the guide tag.
  • according to a received walking instruction, switching between the multiple first points changes the viewpoint on the target space model and realizes its panoramic display; at the position of each first point, a rotation instruction can control the target space model to change the viewing angle at that first point.
  • since each guide tag corresponds to one second point and at least one preset first point, when the panoramic display of the target space model moves to a preset first point, the guide tag corresponding to that preset first point is visible in the current field of view.
  • the panoramic image of the second point corresponding to the guide tag can be obtained with the guide tag as the index.
  • Step 106 Initiate display of the panoramic image corresponding to the second point in the target space model.
  • after the panoramic image corresponding to the second point is obtained, it can be displayed in the target space model, which solves the problem that the second point cannot be reached in the target space model directly through point switching.
  • the above embodiment of the present disclosure provides a method for displaying panoramic images of points outside a model: a target space model including a plurality of first points and at least one guide tag is obtained, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, a panoramic image of the second point corresponding to the guide tag is obtained according to a received trigger instruction for the guide tag; and the panoramic image corresponding to the second point is displayed in the target space model; by setting guide tags in the target space model, some embodiments make it possible to reach, through the guide tags, second points located outside the target space model and to display the corresponding panoramic images, which solves the problem that points outside the model cannot be reached, makes full use of all point information, and avoids wasting point resources.
  • in some exemplary embodiments, before step 102, the method may further include:
  • At least one guide tag is determined in the target space model according to the positional relationship between the plurality of first points and at least one second point in a preset coordinate system.
  • the preset coordinate system can be, for example, the world coordinate system, or a coordinate system with three coordinate axes whose origin is a corner point of the target space model (for example, the lower-left corner point of the target space model is used as the origin, the x-axis and y-axis of the bottom surface of the target space model are used as the x-axis and y-axis of the preset coordinate system, and the height direction of the target space model is used as the z-axis of the preset coordinate system), etc.
  • Some embodiments do not limit the specific coordinate system, and only require the first point and the second point to be in the same coordinate system.
  • for example, as shown in FIG2, the left side of the figure is a cross-sectional view of the target space model in the preset coordinate system, which includes multiple first points 201, while outside the target space model there are also two second points 202.
  • it is desirable that, while walking through the panoramic VR, the user is also given the ability to see a second point.
  • in practice, however, the second point is rendered outside the target space model (the 3D space) and is therefore not visible in the panorama; the second point therefore needs to be mapped into the target space model.
  • a guide tag needs to be established in the target space model for each second point in at least one second point. Since each guide tag is used as an index of the corresponding second point, the position of the guide tag must be related to the position of the second point.
  • Some embodiments determine a guide tag in combination with the position between a first point and a second point.
  • in principle, any first point can be paired with a second point to determine a candidate guide tag, but the guide tags in some embodiments are usually only placed on opaque entities.
  • the guide tag is set on the wall of a house model, or on the wall of a car model.
  • when the position determined by a first point and the second point falls on a transparent object (for example, glass), the candidate guide tag determined based on that first point is discarded.
  • multiple guide tags determined by multiple first points and a second point are all located on an opaque entity, one of them can be randomly selected as the guide tag corresponding to the second point, or the guide tag closest to the second point can be selected as the guide tag corresponding to the second point.
  • This embodiment does not limit the specific method for determining the selected guide tag.
  • determining at least one guide tag in the target space model based on the positional relationship between the plurality of first points and the at least one second point in a preset coordinate system includes:
  • Step a1 determining a plurality of first three-dimensional coordinate information of a plurality of first points in a preset coordinate system, and at least one second three-dimensional coordinate information of at least one second point in the preset coordinate system.
  • Step a2 determining at least one guide tag for at least one second point based on the plurality of first three-dimensional coordinate information and at least one second three-dimensional coordinate information.
  • each point has a set position when image data is collected, for example, a three-dimensional coordinate position in the world coordinate system. Therefore, the first three-dimensional coordinate information of each first point in the preset coordinate system and the second three-dimensional coordinate information of each second point in the preset coordinate system can be determined; if the coordinate system corresponding to the second point during image data collection is different from the coordinate system corresponding to the first point, the second point can be converted to the coordinate system corresponding to the first point through coordinate system conversion (achieved through the positional relationship between the origins and the direction transformation of the coordinate axes).
  • with the first point and the second point in the same preset coordinate system, the positional relationship between them can be expressed intuitively in that coordinate system, and their three-dimensional coordinate information can be used to determine a point on the boundary of the target space model; that point expresses the positional and directional relationship between the second point and the target space model, and placing the guide tag there makes this relationship clearer, so that the panorama of the second point can be better related to the target space model when it is viewed.
  • step a2 in the above embodiment may include:
  • Step b1 for each second point, based on the second three-dimensional coordinate information corresponding to the second point and a plurality of first three-dimensional coordinate information, determine the target position in the target space model mapped to the second point.
  • each second point can determine a candidate position with each of the multiple first points, yet each second point actually corresponds to only one guide tag, that is, to one target position.
  • a position on the opaque entity of the target space model can be determined from multiple positions as the target position, and when there are multiple positions on the opaque entity of the target space model, one of them can be randomly selected, or based on the distances between multiple positions and the second point, the position with the shortest distance can be selected as the target position.
  • Step b2 setting a guide tag corresponding to a second point at the target position, and determining a first point of the first three-dimensional coordinate information corresponding to the target position as a preset first point.
  • a guide tag (flag) is used as an index, and the second point corresponding to the guide tag can be directly reached based on the guide tag.
  • the user can view a panoramic view of the corresponding second point by clicking on the guide tag.
  • the guide tag is not visible in all positions in the target space model. Therefore, some embodiments determine that the first point corresponding to the guide tag is a preset first point. When walking in the target space model, the guide tag can be seen when the preset first point is reached. Therefore, some embodiments use the guide tag to display a panoramic view of the second point outside the model in the target space model, and solve the visibility problem of the guide tag in the target space model by presetting the first point.
  • step b1 in the above embodiment may include:
  • multiple connecting lines are determined based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information.
  • based on the multiple intersections of the connecting lines with the border of the target space model, the position of one of those intersections is determined as the target position.
  • since the second three-dimensional coordinate information and the first three-dimensional coordinate information refer to the same preset coordinate system, the second point can be connected to each first point in that coordinate system to obtain the connecting lines; when determining the target position from the multiple intersections, it is first checked whether the object of the target space model at each intersection is an opaque entity (such as a wall), the intersections whose object is not an opaque entity are removed, and one of the remaining intersections is selected as the target position.
  • in some exemplary embodiments, before step 102, the following steps may also be included:
  • a space model of known structure is rendered based on the multiple panoramas corresponding to the multiple first points to obtain the target space model.
  • some embodiments can render the space model through existing panoramic rendering components, such as Three.js SphereGeometry and BoxBufferGeometry (rendering components commonly used in the prior art); the panoramic rendering component accepts one panorama or six cube-face images to achieve panoramic rendering;
  • the rendered target spatial model can be browsed in VR, and different parts of the target spatial model can be displayed by moving to different first points;
  • the structure of the spatial model in some embodiments can be obtained by any existing technology, and this application does not limit the method of obtaining the structure of the spatial model.
  • the panoramic image corresponding to the second point can be obtained at the second point through a panoramic image acquisition device (panoramic camera, etc.), or based on image data obtained by an ordinary image acquisition device, and the panoramic image corresponding to the second point is obtained through panoramic rendering.
  • step 106 may include the following steps:
  • Step 1061 initiating display of a panoramic image corresponding to the second point based on a preset viewing angle.
  • the preset viewing angle corresponds to the direction from the preset first point toward the second point.
  • the guide tag in some embodiments is determined based on a preset first point and a second point (for example, the guide tag is set on the line connecting the preset first point and the second point, etc.), when displaying the panoramic image corresponding to the second point, the viewing direction of the panoramic image corresponding to the preset first point is displayed first, that is, the preset viewing angle corresponds to the direction from the preset first point to the second point; for example, as shown in Figure 4, in the example, a panoramic image corresponding to a second point outside the house model is obtained by clicking on the guide tag set on the wall, and the panoramic image is displayed based on the preset viewing angle.
  • Step 1062 in response to receiving the perspective conversion instruction, initiating display of the panoramic image through other perspectives corresponding to the perspective conversion instruction.
  • while the panoramic image corresponding to the second point is being displayed, its viewing angle can also be changed according to a conversion instruction input by the user (for example, by dragging the mouse) in order to view information from different angles.
  • for example, as shown in Figure 5, based on a viewing-angle conversion instruction, the panoramic image of the example shown in Figure 4 is rotated away from the preset viewing angle of Figure 4 and other views of the panorama are displayed, achieving full utilization of the point resources of the second point.
  • any of the panoramic image display methods for points outside the model provided in the embodiments of the present disclosure can be executed by any appropriate device with data processing capabilities, including but not limited to: terminal devices and servers, etc.
  • any of the panoramic image display methods for points outside the model provided in the embodiments of the present disclosure can be executed by a processor, such as the processor executing any of the panoramic image display methods for points outside the model mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. This will not be described in detail below.
  • FIG6 is a schematic diagram of a panoramic image display device for a model external point provided by an exemplary embodiment of the present disclosure. As shown in FIG6 , the device provided by this embodiment includes:
  • the model acquisition module 61 is used to obtain a target space model including a plurality of first points and at least one guide tag.
  • Each guide tag corresponds to a second point position whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point position.
  • the tag guiding module 62 is used to obtain a panoramic image of a second point corresponding to the guiding tag in response to the panoramic display of the target space model reaching a preset first point, according to the received trigger instruction for the guiding tag.
  • the point display module 63 is used to initiate display of the panoramic image corresponding to the second point in the target space model.
  • the above-mentioned embodiment of the present disclosure provides a panoramic image display device for a point outside a model, which obtains a target space model including multiple first points and at least one guide tag; wherein each of the guide tags corresponds to a second point outside the target space model with a coordinate position, and each of the guide tags corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, according to the received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag is obtained; and the panoramic image corresponding to the second point is displayed in the target space model; some embodiments set a guide tag in the target space model to achieve reaching the second point outside the target space model through the guide tag, and display the panoramic image corresponding to the second point, thereby solving the problem that the external points of the model cannot be reached, making full use of all point information, and avoiding the waste of point resources.
  • some embodiments provide an apparatus further comprising:
  • the tag determination module is used to determine at least one guide tag in the target space model according to the positional relationship between multiple first points and at least one second point in a preset coordinate system.
  • the tag determination module includes:
  • a three-dimensional coordinate unit used to determine a plurality of first three-dimensional coordinate information of a plurality of first points in a preset coordinate system, and at least one second three-dimensional coordinate information of at least one second point in the preset coordinate system;
  • the guide tag unit is used to determine at least one guide tag for at least one second point based on a plurality of first three-dimensional coordinate information and at least one second three-dimensional coordinate information.
  • the guide tag unit is specifically configured to determine, for each second point and based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information, the target position in the target space model to which the second point is mapped; to set the guide tag corresponding to the second point at the target position; and to determine the first point of the first three-dimensional coordinate information corresponding to the target position as the preset first point.
  • when determining the target position in the target space model to which the second point is mapped, the guide tag unit is configured to determine multiple connecting lines based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information, and to determine, based on the multiple intersections of those connecting lines with the border of the target space model, the position of one of the intersections as the target position.
  • some embodiments provide an apparatus further comprising:
  • the model rendering module is used to render the spatial model of the known structure based on the multiple panoramic images corresponding to the multiple first points to obtain the target spatial model.
  • the point display module 63 is specifically used to initiate display of a panoramic image corresponding to a second point based on a preset perspective; wherein the preset perspective corresponds to a direction from a preset first point toward a second point; in response to receiving a perspective conversion instruction, initiating display of the panoramic image through other perspectives corresponding to the perspective conversion instruction.
  • the electronic device may be any one or both of the first device and the second device, or a stand-alone device independent of them, and the stand-alone device may communicate with the first device and the second device to receive the collected input signals from them.
  • FIG. 7 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 70 includes one or more processors 71 and a memory 72 .
  • the processor 71 may be a central processing unit (CPU) or other forms of processing units having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions.
  • the memory may store one or more computer program products, and the memory may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • volatile memory may include, for example, a random access memory (RAM) and/or a cache memory (cache), etc.
  • the non-volatile memory may include, for example, a read-only memory (ROM).
  • One or more computer program products can be stored on the computer-readable storage medium, and the processor can run the computer program product to implement the panoramic image display method of the model external point position and/or other desired functions of the various embodiments of the present disclosure described above.
  • the electronic device 70 may further include: an input device 73 and an output device 74, and these components are interconnected via a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 73 may be the microphone or microphone array described above, for capturing input signals from a sound source.
  • the input device 73 may be a communication network connector, for receiving collected input signals from the first device and the second device.
  • the input device 73 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 74 can output various information to the outside, including the determined distance information, direction information, etc.
  • the output device 74 can include, for example, a display, a speaker, a printer, a communication network and a remote output device connected thereto, and the like.
  • FIG7 only shows some of the components related to the present disclosure in the electronic device 70, omitting components such as a bus, an input/output interface, etc.
  • the electronic device 70 may further include any other appropriate components according to specific application scenarios.
  • an embodiment of the present disclosure may also be a computer program product, which includes computer program instructions, which, when executed by a processor, enable the processor to execute the steps of the panoramic image display method of model external points according to various embodiments of the present disclosure described in the above part of this specification.
  • the program code for performing the operations of the disclosed embodiments may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may be executed entirely on the user computing device, partially on the user device, as a separate software package, partially on the user computing device and partially on a remote computing device, or entirely on a remote computing device or server.
  • an embodiment of the present disclosure may also be a computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, causes the processor to execute the steps of a method for displaying a panoramic view of points outside a model according to various embodiments of the present disclosure described in the above “Exemplary Method” section of this specification.
  • the computer readable storage medium can adopt any combination of one or more readable media.
  • the readable medium can be a readable signal medium or a readable storage medium.
  • the readable storage medium can include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the method and apparatus of the present disclosure may be implemented in many ways.
  • the method and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless otherwise specifically stated.
  • the present disclosure may also be implemented as a program recorded in a recording medium, which includes machine-readable instructions for implementing the method according to the present disclosure. Therefore, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
  • each component or each step can be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • Accounting & Taxation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus, a device, and a medium for displaying panoramic images of points outside a model. The method includes: obtaining a target space model including a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, obtaining, according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and initiating display of the panoramic image corresponding to the second point in the target space model.

Description

Method and apparatus, device, and medium for displaying panoramic images of points outside a model
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202211170895.4, filed on September 26, 2022, the content of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of VR technology, and in particular to a method and apparatus, a device, and a medium for displaying panoramic images of points outside a model.
Background
For VR of a space with a border (for example, a house), shooting is usually performed inside the space, and the generated points are all interior points; the space model rendered from the interior points takes the border as its boundary (for example, a house is bounded by its walls) and presents the state of the space model. However, for a space that also has shooting points outside it, the exterior scenes have no model to serve as a carrier, so these panoramic points are unreachable and imperceptible in the VR panoramic walking mode, which wastes the point resources and defeats the purpose of setting the points.
Summary
The present disclosure is proposed to solve the above technical problem. Embodiments of the present disclosure provide a method and apparatus, a device, and a medium for displaying panoramic images of points outside a model.
According to one aspect of the embodiments of the present disclosure, a method for displaying a panoramic image of a point outside a model is provided, including: obtaining a target space model including a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, obtaining, according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and initiating display of the panoramic image corresponding to the second point in the target space model.
According to another aspect of the embodiments of the present disclosure, an apparatus for displaying a panoramic image of a point outside a model is provided, including: a model obtaining module for obtaining a target space model including a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point; a tag guidance module for obtaining, in response to the panoramic display of the target space model reaching the preset first point and according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and a point display module for initiating display of the panoramic image corresponding to the second point in the target space model.
According to a further aspect of the embodiments of the present disclosure, an electronic device is provided, including: a memory for storing a computer program; and a processor for executing the computer program stored in the memory, wherein, when the computer program is executed, the method for displaying a panoramic image of a point outside a model described in any of the above embodiments is implemented.
According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, having computer program instructions stored thereon, wherein, when the computer program instructions are executed by a processor, the method for displaying a panoramic image of a point outside a model described in any of the above embodiments is implemented.
According to still another aspect of the embodiments of the present disclosure, a computer program product is provided, including computer program instructions, wherein, when the computer program instructions are executed by a processor, the processor is caused to execute the method for displaying a panoramic image of a point outside a model described in any of the above embodiments.
The technical solution of the present disclosure is described in further detail below with reference to the drawings and embodiments.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the more detailed description of the embodiments of the present disclosure with reference to the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present disclosure, constitute a part of the specification, serve together with the embodiments to explain the present disclosure, and do not limit the present disclosure. In the drawings, the same reference numerals generally denote the same components or steps.
FIG. 1 is a schematic flow chart of a method for displaying panoramic images of points outside a model provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the positional relationship between second points and the target space model in an example of the present disclosure;
FIG. 3 is a schematic flow chart of step 106 in the embodiment shown in FIG. 1 of the present disclosure;
FIG. 4 is a schematic diagram of displaying the panoramic image of a second point based on a preset viewing angle in an example of the present disclosure;
FIG. 5 is a schematic diagram of the panoramic image obtained after a viewing-angle conversion in the embodiment shown in FIG. 1 of the present disclosure;
FIG. 6 is a schematic structural diagram of an apparatus for displaying panoramic images of points outside a model provided by an exemplary embodiment of the present disclosure;
FIG. 7 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will now be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by the example embodiments described here.
It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, modules, and the like, and represent neither any particular technical meaning nor a necessary logical order between them.
It should also be understood that, in the embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two, or more.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure may generally be understood as one or more, unless explicitly limited or the context suggests otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" in the present disclosure generally indicates an "or" relationship between the preceding and following associated objects. The data referred to in the present disclosure may include unstructured data such as text, images, and videos, and may also be structured data.
As used herein, the phrase "entity A initiates action B" may mean that entity A issues an instruction to perform action B, but entity A itself does not necessarily perform action B. For example, the phrase "the point display module initiates display of the panoramic image corresponding to the second point" may mean that the point display module causes a display to present the panoramic image corresponding to the second point, without the point display module itself performing the "presentation".
It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between the embodiments; for their identical or similar parts the embodiments may refer to one another, and, for brevity, these parts are not repeated.
Meanwhile, it should be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present disclosure or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and so on.
Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
Exemplary Method
FIG. 1 is a schematic flow chart of a method for displaying panoramic images of points outside a model provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device and, as shown in FIG. 1, includes the following steps:
Step 102: obtain a target space model including a plurality of first points and at least one guide tag.
Each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point.
In some embodiments, the target space model may be a model of a target space with a concrete boundary, such as a house or a carriage, where the first points are shooting points inside the target space model. The target space model can be obtained by rendering image data (for example, panoramas) corresponding to the multiple first points, or by any existing method of obtaining a space model. The second points are points outside the target space model; image data is collected at each second point, but the second points are not anchored to the target space model, so they cannot be reached in the target space model directly by point switching. In some embodiments the guide tags are usually placed on an opaque physical boundary of the target space; for example, when the target space is a house, a guide tag is placed on a wall, and when the target space is a carriage, a guide tag is placed on the solid carriage wall.
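To make these relationships concrete, the following TypeScript sketch models first points, second points, and guide tags on top of three.js, which the disclosure itself names as an example rendering component. It is illustrative only: the interface and field names (FirstPoint, SecondPoint, GuideTag, TargetSpaceModel, panoramaUrl, and so on) are assumptions of this sketch, not terms defined by the disclosure.

```typescript
import * as THREE from 'three';

// A shooting point inside the target space model (e.g., inside the house).
interface FirstPoint {
  id: string;
  position: THREE.Vector3;   // first three-dimensional coordinate information
  panoramaUrl: string;       // panorama captured at this point
}

// A shooting point outside the target space model (e.g., outside the house).
interface SecondPoint {
  id: string;
  position: THREE.Vector3;   // second three-dimensional coordinate information
  panoramaUrl: string;
}

// A guide tag placed on an opaque boundary of the model. It indexes exactly
// one second point and is only shown at its preset first point(s).
interface GuideTag {
  secondPointId: string;
  presetFirstPointIds: string[];
  anchor: THREE.Vector3;     // target position on the model border
}

interface TargetSpaceModel {
  mesh: THREE.Object3D;      // rendered space model of known structure
  firstPoints: FirstPoint[];
  guideTags: GuideTag[];
}
```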
Step 104: in response to the panoramic display of the target space model reaching a preset first point, obtain, according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag.
In an example, according to a received walking instruction, switching between the multiple first points changes the viewpoint on the target space model and realizes its panoramic display; at the position corresponding to each first point, a rotation instruction can control the target space model to change the viewing angle at that first point. Since each guide tag corresponds to one second point and at least one preset first point, when the panoramic display of the target space model moves to a preset first point, the guide tag corresponding to that preset first point is visible in the current field of view; at this time, according to a trigger instruction for that guide tag and using the guide tag as an index, the panoramic image of the second point corresponding to the guide tag can be obtained.
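Continuing the illustrative data model above, the sketch below shows one possible way to wire up this walk-and-trigger logic; the fetchPanorama() loader is a hypothetical helper, not part of the disclosure.

```typescript
// Return the guide tags that become visible when the walk reaches the given
// preset first point; all other tags stay hidden.
function tagsVisibleAt(model: TargetSpaceModel, firstPointId: string): GuideTag[] {
  return model.guideTags.filter(tag => tag.presetFirstPointIds.includes(firstPointId));
}

// When a visible guide tag is triggered (e.g., clicked), use it as an index
// to look up its second point and fetch that point's panorama.
async function onGuideTagTriggered(
  tag: GuideTag,
  secondPoints: Map<string, SecondPoint>,
  fetchPanorama: (url: string) => Promise<THREE.Texture>  // hypothetical loader
): Promise<THREE.Texture> {
  const second = secondPoints.get(tag.secondPointId);
  if (!second) throw new Error(`unknown second point ${tag.secondPointId}`);
  return fetchPanorama(second.panoramaUrl);
}
```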
Step 106: initiate display of the panoramic image corresponding to the second point in the target space model.
In an example, after the panoramic image corresponding to the second point is obtained, it can be displayed in the target space model, which solves the problem that the second point cannot be reached in the target space model directly by point switching.
The above embodiment of the present disclosure provides a method for displaying panoramic images of points outside a model: a target space model including a plurality of first points and at least one guide tag is obtained, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, a panoramic image of the second point corresponding to the guide tag is obtained according to a received trigger instruction for the guide tag; and the panoramic image corresponding to the second point is displayed in the target space model. By setting guide tags in the target space model, some embodiments make it possible to reach, through the guide tags, second points located outside the target space model and to display the corresponding panoramic images, which solves the problem that points outside the model cannot be reached, makes full use of all point information, and avoids wasting point resources.
In some exemplary embodiments, on the basis of the embodiment provided in FIG. 1 above, the following may also be included before step 102:
At least one guide tag is determined in the target space model according to the positional relationship between the multiple first points and the at least one second point in a preset coordinate system.
The preset coordinate system may be, for example, the world coordinate system, or a coordinate system with three coordinate axes whose origin is a corner point of the target space model (for example, the lower-left corner point of the target space model is taken as the origin, the x axis and y axis of the bottom surface of the target space model are taken as the x axis and y axis of the preset coordinate system, and the height direction of the target space model is taken as the z axis of the preset coordinate system). Some embodiments do not limit the specific coordinate system; it is only required that the first points and the second points are in the same coordinate system. For example, as shown in FIG. 2, the left side of the figure is a cross-section of the target space model in the preset coordinate system, which contains multiple first points 201, while outside the target space model there are also two second points 202.
In some embodiments, it is desirable that, when walking through the panoramic VR, the user can also be given the ability to see a second point; in practice, however, a second point is rendered outside the target space model (the 3D space) and is therefore not visible in the panorama, so the second point needs to be mapped into the target space model. To obtain the panorama corresponding to a second point via a guide tag, a guide tag needs to be established in the target space model for each of the at least one second point. Since each guide tag serves as the index of its corresponding second point, the position of the guide tag is necessarily related to the position of the second point; moreover, to determine the position of the guide tag uniquely, some embodiments determine the guide tag from the positional relationship between a first point and the second point. In addition, in some embodiments any first point can be paired with a second point to determine a candidate guide tag, but the guide tags are usually placed only on opaque entities, for example on a wall of a house model or on the wall of a carriage model. When the guide-tag position determined by a first point and the second point falls on a transparent object (for example, glass), the candidate determined from that first point is discarded; when the candidates determined by multiple first points and one second point are all located on opaque entities, one of them can be selected at random as the guide tag corresponding to the second point, or the candidate closest to the second point can be selected. This embodiment does not limit the specific method of selecting the guide tag.
In some exemplary embodiments, determining the at least one guide tag in the target space model according to the positional relationship between the multiple first points and the at least one second point in the preset coordinate system includes:
Step a1: determine multiple pieces of first three-dimensional coordinate information of the multiple first points in the preset coordinate system, and at least one piece of second three-dimensional coordinate information of the at least one second point in the preset coordinate system.
Step a2: determine at least one guide tag for the at least one second point based on the multiple pieces of first three-dimensional coordinate information and the at least one piece of second three-dimensional coordinate information.
In some embodiments, each point has a set position when its image data is collected, for example a three-dimensional coordinate position in the world coordinate system; therefore, the first three-dimensional coordinate information of each first point in the preset coordinate system and the second three-dimensional coordinate information of each second point in the preset coordinate system can be determined. If the coordinate system used when the image data of a second point was collected differs from the coordinate system of the first points, the second point can be converted into the coordinate system of the first points through a coordinate-system conversion (realized through the positional relationship between the origins and the direction transform of the coordinate axes). With the three-dimensional coordinate information of the first points and second points in the same preset coordinate system, the positional relationship between a second point and the first points can be expressed intuitively in that coordinate system, and the three-dimensional coordinate information of a first point and a second point can be used to determine a point on the boundary of the target space model; this point expresses the positional and directional relationship between the second point and the target space model, and placing the guide tag at this point expresses that relationship more clearly, making it easier to relate the panorama of the second point to the target space model when it is viewed.
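The coordinate-system conversion described here (a direction transform of the axes plus the positional relationship between the origins) can be sketched as follows; the rotation matrix and origin offset are inputs that would come from the capture setup and are placeholders in this sketch.

```typescript
import * as THREE from 'three';

// Convert a second point captured in its own coordinate system into the preset
// coordinate system shared with the first points: rotate by the axis-direction
// transform, then translate by the capture origin expressed in preset coordinates.
function toPresetCoordinates(
  pointInCaptureFrame: THREE.Vector3,
  rotationCaptureToPreset: THREE.Matrix4,   // direction transform of the coordinate axes
  captureOriginInPreset: THREE.Vector3      // positional relationship between the origins
): THREE.Vector3 {
  return pointInCaptureFrame
    .clone()
    .applyMatrix4(rotationCaptureToPreset)
    .add(captureOriginInPreset);
}
```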
In an example, step a2 in the above embodiment may include:
Step b1: for each second point, determine, based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information, the target position in the target space model to which the second point is mapped.
In some embodiments, each second point can determine a candidate position with each of the multiple first points, yet each second point actually corresponds to only one guide tag, that is, to one target position. In an example, a position on an opaque entity of the target space model can be chosen from the multiple candidate positions as the target position; when there are several positions on opaque entities of the target space model, one of them can be chosen at random, or, based on the distances between the candidate positions and the second point, the position with the smallest distance can be chosen as the target position.
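As an illustration of this selection rule, the sketch below assumes each candidate position already carries an opaque flag and the id of the first point that produced it; the CandidatePosition shape is an assumption of this sketch. It keeps only candidates on opaque entities and picks the one nearest the second point.

```typescript
import * as THREE from 'three';

interface CandidatePosition {
  point: THREE.Vector3;   // intersection of one connecting line with the model border
  opaque: boolean;        // true if the border element at this point is an opaque entity
  firstPointId: string;   // the first point that produced this candidate
}

// Filter out candidates on transparent objects; among the remaining ones, pick
// the candidate closest to the second point as the guide tag's target position.
function pickTargetPosition(
  candidates: CandidatePosition[],
  secondPoint: THREE.Vector3
): CandidatePosition | undefined {
  return candidates
    .filter(c => c.opaque)
    .sort((a, b) => a.point.distanceTo(secondPoint) - b.point.distanceTo(secondPoint))[0];
}
```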
Step b2: set the guide tag corresponding to the second point at the target position, and determine the first point of the first three-dimensional coordinate information corresponding to the target position as the preset first point.
In some embodiments, the guide tag (flag) serves as an index, and the second point corresponding to the guide tag can be reached directly through the guide tag; in a concrete application, the user can view the panorama of the corresponding second point by clicking the guide tag. The guide tag, however, is not visible from every position in the target space model; some embodiments therefore determine the first point corresponding to the guide tag as a preset first point, and when walking through the target space model the guide tag becomes visible only upon reaching the preset first point. In this way, some embodiments use the guide tag to display, inside the target space model, the panorama of a second point outside the model, and solve the visibility problem of the guide tag inside the target space model by means of the preset first point.
In an example, step b1 in the above embodiment may include:
determining multiple connecting lines based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information;
determining, based on the multiple intersections of the connecting lines with the border of the target space model, the position of one of those intersections as the target position.
In some embodiments, once the second three-dimensional coordinate information of the second point and the multiple pieces of first three-dimensional coordinate information are known, and since they refer to the same preset coordinate system, the second point can be connected to each first point in that coordinate system to obtain multiple connecting lines. When determining the target position from the multiple intersections, it is first determined whether the object of the target space model at each intersection is an opaque entity (for example, a wall); the intersections whose object is not an opaque entity are removed, and one of the remaining intersections is selected as the target position.
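One possible way to compute the connecting lines and their intersections with the model border is a ray cast from the second point toward each first point. The use of the three.js Raycaster and of a userData.opaque flag on the border meshes are assumptions of this sketch (the disclosure only requires intersecting the connecting lines with the border and keeping intersections that lie on opaque entities); the FirstPoint, SecondPoint, and CandidatePosition types come from the earlier sketches.

```typescript
// For one second point, build the connecting line to each first point and take
// its first hit on the model border; the hit becomes a candidate target position.
function candidatesForSecondPoint(
  second: SecondPoint,
  firstPoints: FirstPoint[],
  border: THREE.Object3D
): CandidatePosition[] {
  const raycaster = new THREE.Raycaster();
  const candidates: CandidatePosition[] = [];
  for (const first of firstPoints) {
    const dir = new THREE.Vector3().subVectors(first.position, second.position).normalize();
    raycaster.set(second.position, dir);
    raycaster.far = second.position.distanceTo(first.position); // restrict to the segment
    const hit = raycaster.intersectObject(border, true)[0];     // closest intersection, if any
    if (hit) {
      candidates.push({
        point: hit.point,
        opaque: hit.object.userData.opaque === true, // flag assumed to be set when the model was built
        firstPointId: first.id,
      });
    }
  }
  return candidates;
}
```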
In some exemplary embodiments, the following may also be included before step 102:
A space model of known structure is rendered based on the multiple panoramas corresponding to the multiple first points to obtain the target space model.
In an example, some embodiments can render the space model through existing panoramic rendering components, for example Three.js SphereGeometry and BoxBufferGeometry (rendering components commonly used in the prior art); the panoramic rendering component accepts one panorama or six cube-face images to achieve the panoramic rendering. The target space model obtained by rendering in some embodiments can be browsed in VR, and different parts of the target space model are displayed by moving to different first points. The structure of the space model in some embodiments can be obtained by any existing technique; this application does not limit how the structure of the space model is obtained. In addition, the panorama corresponding to a second point in some embodiments may be obtained at the second point with a panoramic image acquisition device (a panoramic camera, etc.), or obtained through panoramic rendering from image data collected with an ordinary image acquisition device.
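For reference, the common single-panorama use of the SphereGeometry component named above looks roughly like the sketch below: the equirectangular panorama is mapped onto the inside of an inverted sphere and viewed by a camera at its center (six cube images would instead use a box geometry with one material per face). The file name is a placeholder, and this is a generic three.js pattern rather than the specific rendering pipeline of the disclosure.

```typescript
import * as THREE from 'three';

// Build a sphere whose inside shows one equirectangular panorama.
function makePanoramaSphere(panoramaUrl: string): THREE.Mesh {
  const geometry = new THREE.SphereGeometry(500, 60, 40);
  geometry.scale(-1, 1, 1); // flip the geometry inside out so the texture faces the camera
  const texture = new THREE.TextureLoader().load(panoramaUrl);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  return new THREE.Mesh(geometry, material);
}

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
scene.add(makePanoramaSphere('panorama_of_second_point.jpg')); // placeholder file name
renderer.render(scene, camera);
```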
As shown in FIG. 3, on the basis of the embodiment shown in FIG. 1, step 106 may include the following steps:
Step 1061: initiate display of the panoramic image corresponding to the second point based on a preset viewing angle.
The preset viewing angle corresponds to the direction from the preset first point toward the second point.
In an example, since the guide tag in some embodiments is determined from the preset first point and the second point (for example, the guide tag is placed on the line connecting the preset first point and the second point), when the panorama corresponding to the second point is displayed, the viewing direction corresponding to the preset first point is shown first; that is, the preset viewing angle corresponds to the direction from the preset first point toward the second point. For example, as shown in FIG. 4, in the example the panorama corresponding to a second point outside the house model is obtained by clicking the guide tag placed on a wall, and the panorama is displayed based on the preset viewing angle.
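A small sketch of applying the preset viewing angle, assuming the panorama sphere is centered on the camera: the camera is simply pointed along the direction from the preset first point toward the second point, with both positions expressed in the preset coordinate system.

```typescript
import * as THREE from 'three';

// Open the second point's panorama looking along the direction from the preset
// first point toward the second point, i.e., the preset viewing angle.
function applyPresetViewingAngle(
  camera: THREE.PerspectiveCamera,
  presetFirstPoint: THREE.Vector3,
  secondPoint: THREE.Vector3
): void {
  const direction = new THREE.Vector3().subVectors(secondPoint, presetFirstPoint).normalize();
  camera.position.set(0, 0, 0); // camera sits at the center of the panorama sphere
  camera.lookAt(direction);     // look along the preset direction
}
```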
Step 1062: in response to receiving a viewing-angle conversion instruction, initiate display of the panoramic image from another viewing angle corresponding to the viewing-angle conversion instruction.
In some embodiments, while the panorama corresponding to the second point is being displayed, its viewing angle can also be changed according to a conversion instruction input by the user (for example, by dragging the mouse) in order to view information from different angles. For example, as shown in FIG. 5, based on a viewing-angle conversion instruction, the panorama of the example in FIG. 4 is rotated away from the preset viewing angle of FIG. 4 and other views of the panorama are shown, so that the point resources of the second point are fully used.
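The viewing-angle conversion driven by mouse dragging can be sketched as an adjustment of spherical angles, as in the generic pattern below; the sensitivity factors and latitude clamping are arbitrary choices of this sketch, not values from the disclosure.

```typescript
import * as THREE from 'three';

// Convert mouse dragging into a viewing-angle change on the panorama by
// accumulating longitude/latitude angles and pointing the camera accordingly.
let lon = 0;
let lat = 0;
function onMouseDrag(event: MouseEvent, camera: THREE.PerspectiveCamera): void {
  lon -= event.movementX * 0.1;
  lat = Math.max(-85, Math.min(85, lat + event.movementY * 0.1)); // clamp to avoid flipping
  const phi = THREE.MathUtils.degToRad(90 - lat);
  const theta = THREE.MathUtils.degToRad(lon);
  camera.lookAt(
    Math.sin(phi) * Math.cos(theta),
    Math.cos(phi),
    Math.sin(phi) * Math.sin(theta)
  );
}
```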
Any method for displaying panoramic images of points outside a model provided by the embodiments of the present disclosure can be executed by any appropriate device with data processing capability, including but not limited to terminal devices and servers. Alternatively, any method for displaying panoramic images of points outside a model provided by the embodiments of the present disclosure can be executed by a processor; for example, the processor executes any of these methods by invoking corresponding instructions stored in a memory. This is not repeated below.
Exemplary Apparatus
FIG. 6 is a schematic structural diagram of an apparatus for displaying panoramic images of points outside a model provided by an exemplary embodiment of the present disclosure. As shown in FIG. 6, the apparatus provided by this embodiment includes:
a model obtaining module 61 for obtaining a target space model including a plurality of first points and at least one guide tag.
Each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point.
a tag guidance module 62 for obtaining, in response to the panoramic display of the target space model reaching the preset first point and according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag.
a point display module 63 for initiating display of the panoramic image corresponding to the second point in the target space model.
The above embodiment of the present disclosure provides an apparatus for displaying panoramic images of points outside a model: a target space model including a plurality of first points and at least one guide tag is obtained, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model and each guide tag corresponds to at least one preset first point; in response to the panoramic display of the target space model reaching the preset first point, a panoramic image of the second point corresponding to the guide tag is obtained according to a received trigger instruction for the guide tag; and the panoramic image corresponding to the second point is displayed in the target space model. By setting guide tags in the target space model, some embodiments make it possible to reach, through the guide tags, second points located outside the target space model and to display the corresponding panoramic images, which solves the problem that points outside the model cannot be reached, makes full use of all point information, and avoids wasting point resources.
In an example, the apparatus provided by some embodiments further includes:
a tag determination module for determining the at least one guide tag in the target space model according to the positional relationship between the multiple first points and the at least one second point in a preset coordinate system.
In an example, the tag determination module includes:
a three-dimensional coordinate unit for determining multiple pieces of first three-dimensional coordinate information of the multiple first points in the preset coordinate system, and at least one piece of second three-dimensional coordinate information of the at least one second point in the preset coordinate system;
a guide tag unit for determining at least one guide tag for the at least one second point based on the multiple pieces of first three-dimensional coordinate information and the at least one piece of second three-dimensional coordinate information.
In an example, the guide tag unit is specifically configured to determine, for each second point and based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information, the target position in the target space model to which the second point is mapped; to set the guide tag corresponding to the second point at the target position; and to determine the first point of the first three-dimensional coordinate information corresponding to the target position as the preset first point.
In an example, when determining the target position in the target space model to which the second point is mapped based on the second three-dimensional coordinate information corresponding to the second point and the multiple pieces of first three-dimensional coordinate information, the guide tag unit is configured to determine multiple connecting lines based on that coordinate information, and to determine, based on the multiple intersections of the connecting lines with the border of the target space model, the position of one of those intersections as the target position.
In an example, the apparatus provided by some embodiments further includes:
a model rendering module for rendering a space model of known structure based on the multiple panoramas corresponding to the multiple first points to obtain the target space model.
In an example, the point display module 63 is specifically configured to initiate display of the panoramic image corresponding to the second point based on a preset viewing angle, wherein the preset viewing angle corresponds to the direction from the preset first point toward the second point, and, in response to receiving a viewing-angle conversion instruction, to initiate display of the panoramic image from another viewing angle corresponding to the viewing-angle conversion instruction.
Exemplary Electronic Device
An electronic device according to an embodiment of the present disclosure is described below with reference to FIG. 7. The electronic device may be either or both of a first device and a second device, or a stand-alone device independent of them; the stand-alone device may communicate with the first device and the second device to receive collected input signals from them.
FIG. 7 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in FIG. 7, the electronic device 70 includes one or more processors 71 and a memory 72.
The processor 71 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 70 to perform desired functions.
The memory 72 may store one or more computer program products and may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory
(ROM), a hard disk, flash memory, and the like. One or more computer program products can be stored on the computer-readable storage medium, and the processor can run the computer program products to implement the methods for displaying panoramic images of points outside a model of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device 70 may further include an input device 73 and an output device 74, and these components are interconnected through a bus system and/or another form of connection mechanism (not shown).
For example, when the electronic device is the first device or the second device, the input device 73 may be the aforementioned microphone or microphone array for capturing an input signal from a sound source; when the electronic device is a stand-alone device, the input device 73 may be a communication network connector for receiving the collected input signals from the first device and the second device.
In addition, the input device 73 may also include, for example, a keyboard, a mouse, and the like.
The output device 74 can output various kinds of information to the outside, including determined distance information, direction information, and so on. The output device 74 may include, for example, a display, a speaker, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, FIG. 7 shows only some of the components of the electronic device 70 that are relevant to the present disclosure, omitting components such as buses and input/output interfaces. Besides, the electronic device 70 may include any other appropriate components depending on the specific application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the methods and devices described above, an embodiment of the present disclosure may also be a computer program product including computer program instructions which, when run by a processor, cause the processor to perform the steps of the methods for displaying panoramic images of points outside a model according to the various embodiments of the present disclosure described in the above part of this specification.
The program code for carrying out the operations of the embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, an embodiment of the present disclosure may also be a computer-readable storage medium having computer program instructions stored thereon which, when run by a processor, cause the processor to perform the steps of the methods for displaying panoramic images of points outside a model according to the various embodiments of the present disclosure described in the "Exemplary Method" section above.
The computer-readable storage medium may use any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, but is not limited to, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The basic principles of the present disclosure have been described above with reference to specific embodiments; however, it should be pointed out that the advantages, strengths, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and should not be regarded as necessarily possessed by every embodiment of the present disclosure. In addition, the specific details disclosed above are provided only for the purpose of illustration and ease of understanding, not limitation; they do not restrict the present disclosure to being implemented with those specific details.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts the embodiments may refer to one another. As the system embodiments basically correspond to the method embodiments, their description is relatively brief, and the relevant parts may refer to the description of the method embodiments.
The block diagrams of the components, apparatuses, devices, and systems involved in the present disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these components, apparatuses, devices, and systems may be connected, arranged, or configured in any manner. Words such as "include", "comprise", and "have" are open-ended terms meaning "including but not limited to" and can be used interchangeably with it. The words "or" and "and" as used here refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used here refers to the phrase "such as but not limited to" and can be used interchangeably with it.
The methods and apparatuses of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is only for illustration, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure; thus the present disclosure also covers a recording medium storing programs for executing the methods according to the present disclosure.
It should also be pointed out that, in the apparatuses, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The above description has been given for the purposes of illustration and description. Moreover, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed here. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (17)

  1. A method for displaying a panoramic image of a point outside a model, comprising:
    obtaining a target space model comprising a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point;
    in response to a panoramic display of the target space model reaching the preset first point, obtaining, according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and
    initiating display of the panoramic image corresponding to the second point in the target space model.
  2. The method according to claim 1, wherein, before obtaining the target space model comprising the plurality of first points and the at least one guide tag, the method further comprises:
    determining the at least one guide tag in the target space model according to a positional relationship between the plurality of first points and the at least one second point in a preset coordinate system.
  3. The method according to claim 2, wherein determining the at least one guide tag in the target space model according to the positional relationship between the plurality of first points and the at least one second point in the preset coordinate system comprises:
    determining a plurality of pieces of first three-dimensional coordinate information of the plurality of first points in the preset coordinate system, and at least one piece of second three-dimensional coordinate information of the at least one second point in the preset coordinate system; and
    determining at least one guide tag for the at least one second point based on the plurality of pieces of first three-dimensional coordinate information and the at least one piece of second three-dimensional coordinate information.
  4. The method according to claim 3, wherein determining the at least one guide tag for the at least one second point based on the plurality of pieces of first three-dimensional coordinate information and the at least one piece of second three-dimensional coordinate information comprises:
    for each second point, determining, based on the second three-dimensional coordinate information corresponding to the second point and the plurality of pieces of first three-dimensional coordinate information, a target position in the target space model to which the second point is mapped; and
    setting the guide tag corresponding to the second point at the target position, and determining the first point of the first three-dimensional coordinate information corresponding to the target position as the preset first point.
  5. The method according to claim 4, wherein determining, based on the second three-dimensional coordinate information corresponding to the second point and the plurality of pieces of first three-dimensional coordinate information, the target position in the target space model to which the second point is mapped comprises:
    determining a plurality of connecting lines based on the second three-dimensional coordinate information corresponding to the second point and the plurality of pieces of first three-dimensional coordinate information; and
    determining, based on a plurality of intersections of the plurality of connecting lines with a border of the target space model, the position of one of the plurality of intersections as the target position.
  6. The method according to any one of claims 1-5, wherein, before obtaining the target space model comprising the plurality of first points and the at least one guide tag, the method further comprises:
    rendering a space model of known structure based on a plurality of panoramic images corresponding to the plurality of first points to obtain the target space model.
  7. The method according to any one of claims 1-5, wherein initiating display of the panoramic image corresponding to the second point in the target space model comprises:
    initiating display of the panoramic image corresponding to the second point based on a preset viewing angle, wherein the preset viewing angle corresponds to a direction from the preset first point toward the second point; and
    in response to receiving a viewing-angle conversion instruction, initiating display of the panoramic image from another viewing angle corresponding to the viewing-angle conversion instruction.
  8. An apparatus for displaying a panoramic image of a point outside a model, comprising:
    a model obtaining module, configured to obtain a target space model comprising a plurality of first points and at least one guide tag, wherein each guide tag corresponds to a second point whose coordinate position is outside the target space model, and each guide tag corresponds to at least one preset first point;
    a tag guidance module, configured to obtain, in response to a panoramic display of the target space model reaching the preset first point and according to a received trigger instruction for the guide tag, a panoramic image of the second point corresponding to the guide tag; and
    a point display module, configured to initiate display of the panoramic image corresponding to the second point in the target space model.
  9. The apparatus according to claim 8, further comprising:
    a tag determination module, configured to determine the at least one guide tag in the target space model according to a positional relationship between the plurality of first points and the at least one second point in a preset coordinate system.
  10. The apparatus according to claim 9, wherein the tag determination module comprises:
    a three-dimensional coordinate unit, configured to determine a plurality of pieces of first three-dimensional coordinate information of the plurality of first points in the preset coordinate system, and at least one piece of second three-dimensional coordinate information of the at least one second point in the preset coordinate system; and
    a guide tag unit, configured to determine at least one guide tag for the at least one second point based on the plurality of pieces of first three-dimensional coordinate information and the at least one piece of second three-dimensional coordinate information.
  11. The apparatus according to claim 10, wherein the guide tag unit is further configured to:
    for each second point, determine, based on the second three-dimensional coordinate information corresponding to the second point and the plurality of pieces of first three-dimensional coordinate information, a target position in the target space model to which the second point is mapped; and
    set the guide tag corresponding to the second point at the target position, and determine the first point of the first three-dimensional coordinate information corresponding to the target position as the preset first point.
  12. The apparatus according to claim 11, wherein the guide tag unit is further configured to:
    determine a plurality of connecting lines based on the second three-dimensional coordinate information corresponding to the second point and the plurality of pieces of first three-dimensional coordinate information; and
    determine, based on a plurality of intersections of the plurality of connecting lines with a border of the target space model, the position of one of the plurality of intersections as the target position.
  13. The apparatus according to any one of claims 8-12, further comprising:
    a model rendering module, configured to render a space model of known structure based on a plurality of panoramic images corresponding to the plurality of first points to obtain the target space model.
  14. The apparatus according to any one of claims 8-12, wherein the point display module is further configured to:
    initiate display of the panoramic image corresponding to the second point based on a preset viewing angle, wherein the preset viewing angle corresponds to a direction from the preset first point toward the second point; and
    in response to receiving a viewing-angle conversion instruction, initiate display of the panoramic image from another viewing angle corresponding to the viewing-angle conversion instruction.
  15. An electronic device, comprising:
    a memory for storing a computer program; and
    a processor for executing the computer program stored in the memory, wherein, when the computer program is executed, the method for displaying a panoramic image of a point outside a model according to any one of claims 1-7 is implemented.
  16. A computer-readable storage medium having computer program instructions stored thereon, wherein, when the computer program instructions are executed by a processor, the method for displaying a panoramic image of a point outside a model according to any one of claims 1-7 is implemented.
  17. A computer program comprising computer program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
PCT/CN2023/080479 2022-09-26 2023-03-09 模型外点位的全景图展示方法和装置、设备、介质 WO2024066208A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211170895.4A CN115512046B (zh) 2022-09-26 2022-09-26 模型外点位的全景图展示方法和装置、设备、介质
CN202211170895.4 2022-09-26

Publications (1)

Publication Number Publication Date
WO2024066208A1 true WO2024066208A1 (zh) 2024-04-04

Family

ID=84505989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/080479 WO2024066208A1 (zh) 2022-09-26 2023-03-09 模型外点位的全景图展示方法和装置、设备、介质

Country Status (2)

Country Link
CN (2) CN117237532A (zh)
WO (1) WO2024066208A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237532A (zh) * 2022-09-26 2023-12-15 如你所视(北京)科技有限公司 模型外点位的全景图展示方法和装置、设备、介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619659A (zh) * 2019-06-20 2019-12-27 北京无限光场科技有限公司 一种房源展示方法、装置、终端设备及介质
CN112465971A (zh) * 2020-12-03 2021-03-09 贝壳技术有限公司 模型中点位的引导方法和装置、存储介质、电子设备
CN112907755A (zh) * 2021-01-22 2021-06-04 北京房江湖科技有限公司 三维房屋模型中的模型展示方法及装置
WO2021249390A1 (zh) * 2020-06-12 2021-12-16 贝壳技术有限公司 实现增强现实的方法和装置、存储介质、电子设备
CN115097975A (zh) * 2022-07-08 2022-09-23 北京有竹居网络技术有限公司 用于控制视角转换的方法、装置、设备和存储介质
CN115512046A (zh) * 2022-09-26 2022-12-23 如你所视(北京)科技有限公司 模型外点位的全景图展示方法和装置、设备、介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107659851B (zh) * 2017-03-28 2019-09-17 腾讯科技(北京)有限公司 全景图像的展示控制方法及装置
CN110728755B (zh) * 2018-07-16 2022-09-27 阿里巴巴集团控股有限公司 场景间漫游、模型拓扑创建、场景切换方法及系统
CN111354090B (zh) * 2020-03-06 2023-06-13 贝壳技术有限公司 一种自识别的三维点云模型孔洞修补方法及装置
CN113722625A (zh) * 2021-07-27 2021-11-30 北京城市网邻信息技术有限公司 车源信息的显示方法、装置、电子设备及可读介质
CN114723870B (zh) * 2022-06-07 2022-09-13 深圳市中视典数字科技有限公司 一种三维模型渲染方法及系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619659A (zh) * 2019-06-20 2019-12-27 北京无限光场科技有限公司 一种房源展示方法、装置、终端设备及介质
WO2021249390A1 (zh) * 2020-06-12 2021-12-16 贝壳技术有限公司 实现增强现实的方法和装置、存储介质、电子设备
CN112465971A (zh) * 2020-12-03 2021-03-09 贝壳技术有限公司 模型中点位的引导方法和装置、存储介质、电子设备
CN112907755A (zh) * 2021-01-22 2021-06-04 北京房江湖科技有限公司 三维房屋模型中的模型展示方法及装置
CN115097975A (zh) * 2022-07-08 2022-09-23 北京有竹居网络技术有限公司 用于控制视角转换的方法、装置、设备和存储介质
CN115512046A (zh) * 2022-09-26 2022-12-23 如你所视(北京)科技有限公司 模型外点位的全景图展示方法和装置、设备、介质

Also Published As

Publication number Publication date
CN115512046A (zh) 2022-12-23
CN117237532A (zh) 2023-12-15
CN115512046B (zh) 2023-11-03

Similar Documents

Publication Publication Date Title
CN111127627B (zh) 一种三维房屋模型中的模型展示方法及装置
CN109313812B (zh) 具有上下文增强的共享体验
WO2022095543A1 (zh) 图像帧拼接方法和装置、可读存储介质及电子设备
WO2024066208A1 (zh) 模型外点位的全景图展示方法和装置、设备、介质
US10620807B2 (en) Association of objects in a three-dimensional model with time-related metadata
CN111008985A (zh) 全景图拼缝检测方法、装置、可读存储介质及电子设备
US20200106959A1 (en) Panoramic light field capture, processing, and display
WO2023202349A1 (zh) 三维标签的交互呈现方法、装置、设备、介质和程序产品
WO2023231435A1 (zh) 视觉感知方法、装置、存储介质和电子设备
CN107978018B (zh) 立体图形模型的构建方法、装置、电子设备及存储介质
WO2022188331A1 (zh) 数据血缘关系展示方法、装置、电子设备及存储介质
CN113689508A (zh) 点云标注方法、装置、存储介质及电子设备
CN111562845B (zh) 用于实现三维空间场景互动的方法、装置和设备
CN115063564B (zh) 用于二维显示图像中的物品标签展示方法、装置及介质
WO2023197657A1 (zh) 用于处理vr场景的方法、装置和计算机程序产品
CN112465971A (zh) 模型中点位的引导方法和装置、存储介质、电子设备
CN113438463B (zh) 正交相机图像的模拟方法和装置、存储介质、电子设备
CN115423920A (zh) Vr场景的处理方法、装置和存储介质
CN111429519B (zh) 三维场景显示方法、装置、可读存储介质及电子设备
CN108920598B (zh) 全景图浏览方法、装置、终端设备、服务器及存储介质
CN114170381A (zh) 三维路径展示方法、装置、可读存储介质及电子设备
CN107749892B (zh) 会议记录的网络读取方法、装置、智能平板和存储介质
CN116228949B (zh) 三维模型处理方法、装置及存储介质
CN115454255B (zh) 物品展示的切换方法和装置、电子设备、存储介质
CN112988276B (zh) 一种资源包的生成方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869471

Country of ref document: EP

Kind code of ref document: A1