WO2020168493A1 - Three-dimensional scene modeling method, apparatus, device, and storage medium - Google Patents

Three-dimensional scene modeling method, apparatus, device, and storage medium

Info

Publication number
WO2020168493A1
Authority
WO
WIPO (PCT)
Prior art keywords
individual
individuals
types
tags
coordinates
Prior art date
Application number
PCT/CN2019/075593
Other languages
English (en)
French (fr)
Inventor
李伟 (Li Wei)
Original Assignee
深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority to CN201980000292.XA (published as CN109997172A)
Priority to PCT/CN2019/075593
Publication of WO2020168493A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • This application relates to the technical field of indoor positioning, and in particular to a three-dimensional scene modeling method, apparatus, device, and storage medium.
  • In the prior art, indoor 3D scene modeling methods usually combine computer vision, data fusion, visual navigation, 3D scene modeling, and other technologies to model a physical scene.
  • Vision-based 3D scene modeling technologies model the multi-view collection and three-dimensional structure of a scene from large amounts of 2D data.
  • This application provides a three-dimensional scene modeling method, apparatus, device, and storage medium, which not only realize indoor three-dimensional scene modeling but also save costs.
  • In a first aspect, this application provides a 3D scene modeling method, including: obtaining the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1; and constructing an indoor 3D scene model according to the respective position coordinates and the respective individual models of the M individuals.
  • Since the indoor 3D scene model is constructed according to the respective position coordinates and individual models of the M individuals, indoor 3D scene modeling is realized without adding other components such as drones, laser rangefinders, or robots, which effectively reduces costs.
  • Optionally, before the position coordinates are obtained, the method further includes: determining the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1; creating a model library including N individual models, where the N individual models correspond one-to-one to the N individual types; and determining the respective tags of the M individuals according to the N individual models, where the M individuals include N types of tags and the N types of tags correspond one-to-one to the N individual models.
  • Optionally, obtaining the respective position coordinates of the M individuals according to their respective tags includes: for each individual, obtaining the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point, and obtaining the individual's position coordinates from these.
  • Optionally, the 3D scene modeling method provided in this application further includes: polling to obtain tag information; if the tag information includes an identifier of a tag type newly added relative to the N tag types, adding the individual model corresponding to that identifier to the model library to update the model library; and updating the indoor 3D scene model according to the tag information and the model library.
  • On the one hand, updating the indoor 3D scene model by polling for tag information realizes updating the model when no new individual type has been added indoors; on the other hand, if the tag information includes an identifier of a newly added tag type, the model library is updated first and the indoor 3D scene model is then updated, which realizes updating the model when a new individual type is added indoors, thereby improving the accuracy of updates to the 3D scene model.
  • In a second aspect, this application provides a three-dimensional scene modeling apparatus, including:
  • a first obtaining module, configured to obtain the respective position coordinates of the M individuals according to their respective tags, where M is an integer greater than or equal to 1; and
  • a construction module, configured to construct an indoor three-dimensional scene model according to the respective position coordinates and the respective individual models of the M individuals.
  • Optionally, the 3D scene modeling apparatus provided in this application further includes:
  • a first determining module, configured to determine the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1;
  • a creating module, configured to create a model library including N individual models, where the N individual models correspond one-to-one to the N individual types; and
  • a second determining module, configured to determine the respective tags of the M individuals according to the N individual models in the model library, where the M individuals include N types of tags and the N types of tags correspond one-to-one to the N individual models.
  • Optionally, the first obtaining module includes:
  • a first obtaining submodule, configured to obtain, for each of the M individuals, the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and
  • a second obtaining submodule, configured to obtain, for each of the M individuals, that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
  • Optionally, the 3D scene modeling apparatus provided in this application further includes:
  • a second obtaining module, configured to poll for tag information;
  • a first updating module, configured to, if the tag information includes an identifier of a tag type newly added relative to the N tag types, add the individual model corresponding to that identifier to the model library to update the model library; and
  • a second updating module, configured to update the indoor 3D scene model according to the tag information and the model library.
  • In a third aspect, this application provides a device, including:
  • a processor and a memory, where the memory is configured to store computer-executable instructions so that the processor executes the instructions to implement the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
  • In a fourth aspect, the present application provides a computer storage medium.
  • The storage medium includes computer instructions; when the instructions are executed by a computer, the computer implements the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
  • In a fifth aspect, the present application provides a computer program product, including computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
  • This application provides a three-dimensional scene modeling method, apparatus, device, and storage medium.
  • The method includes obtaining the respective position coordinates of M individuals according to their respective tags, where M is an integer greater than or equal to 1, and then constructing an indoor three-dimensional scene model according to the respective position coordinates and the respective individual models of the M individuals.
  • Since the indoor three-dimensional scene model is constructed according to the respective position coordinates and individual models of the M individuals, indoor three-dimensional scene modeling is realized and costs are effectively reduced.
  • FIG. 1 is a flowchart of a three-dimensional scene modeling method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a three-dimensional scene modeling method provided by another embodiment of the present application.
  • FIG. 3 is a flowchart of a method for modeling a three-dimensional scene provided by still another embodiment of the present application.
  • FIG. 4 is a flowchart of a three-dimensional scene modeling method provided by another embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by another embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by yet another embodiment of the present application.
  • FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
  • The present application provides a method, apparatus, device, and storage medium for modeling a three-dimensional scene.
  • FIG. 1 is a flowchart of a three-dimensional scene modeling method provided by an embodiment of the present application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device.
  • The terminal device can be a mobile phone, a tablet, a positioning device, a medical device, a fitness device, etc.
  • The apparatus can also be a processor, a single-chip microcomputer, a microcontroller unit (MCU), etc. in the terminal device.
  • Step S101 The terminal obtains the respective position coordinates of the M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1.
  • The individuals to be modeled may include M individuals, where the M individuals may include individuals of the same type or of different types.
  • The individuals may be items, devices, or people.
  • The embodiments of the present invention do not limit the type or number of individuals.
  • The tags may be at different positions on the M individuals.
  • The embodiments of the present invention do not limit the positions of the tags.
  • The terminal can locate the positions of the tags through an indoor positioning system and then obtain the respective position coordinates of the M individuals.
  • There are no restrictions on the manner in which the position coordinates of the M individuals are obtained according to their respective tags; for example, Bluetooth Low Energy (BLE) technology, Ultra-Wideband (UWB) technology, Radio Frequency Identification (RFID) technology, Zigbee technology, or Wireless Fidelity (WiFi) technology may be used.
  • The tags can be BLE tags, UWB tags, RFID tags, Zigbee tags, WiFi tags, etc.
  • The embodiments of the present invention do not limit the type of tag, as long as the tag's position can be obtained from the tag.
  • To obtain the respective position coordinates of the M individuals according to their respective tags, the terminal also needs to construct a three-dimensional map frame according to the indoor scene.
  • Constructing the three-dimensional map frame according to the indoor scene can be implemented using any modeling software such as 3dMax or Unity3D; the embodiments of the present invention do not limit the implementation.
  • The three-dimensional map frame may include outer walls, the floor, etc.; a three-dimensional coordinate system can then be established according to the constructed map frame, and the respective position coordinates of the M individuals are obtained in this coordinate system according to their respective tags.
  • Step S102 The terminal constructs an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
  • An individual model is formed by the terminal modeling the individual according to parameters such as the individual's shape, size, and color, and is used in constructing the three-dimensional scene model: the terminal obtains the individual models of the M individuals and renders each individual's model at that individual's position coordinates, thereby constructing the indoor three-dimensional scene model.
  • Since the indoor 3D scene model is constructed according to the respective position coordinates and individual models of the M individuals, indoor 3D scene modeling is realized without adding other components such as drones, laser rangefinders, or robots, which effectively reduces costs. An illustrative sketch of this construction step follows.
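  • The following minimal Python sketch of steps S101 and S102 is an illustration only, not the patented implementation; `locate_tag` and `render_model` are hypothetical stand-ins for the indoor positioning system and the modeling software:

```python
# Minimal sketch of steps S101-S102; locate_tag and render_model are
# hypothetical stand-ins, not part of the patented implementation.
def build_indoor_scene(individuals, locate_tag, render_model):
    """individuals: iterable of objects, each carrying a .tag and a .model."""
    scene = []
    for ind in individuals:
        # S101: the indoor positioning system resolves the tag to coordinates
        # in the coordinate system of the 3D map frame.
        x, y, z = locate_tag(ind.tag)
        # S102: render this individual's model at its position coordinates.
        scene.append(render_model(ind.model, (x, y, z)))
    return scene
```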
  • FIG. 2 is a flowchart of a three-dimensional scene modeling method provided by another embodiment of the present application.
  • The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware.
  • The apparatus can be part or all of a terminal device, which can be a mobile phone, a tablet, a positioning device, a medical device, a fitness device, etc.
  • The apparatus may also be a processor, a single-chip microcomputer, an MCU, etc. in the terminal device.
  • step S101 may include:
  • Step S201: For each of the M individuals, the terminal obtains the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point.
  • A three-dimensional coordinate system is established in the indoor scene, and the coordinates of each individual's tag are obtained according to the position of the tag on that individual in the indoor scene.
  • In one possible implementation, the model is rendered at the coordinates of the individual's geometric center point; therefore, it is also necessary to obtain the relative coordinates between the tag and the individual's geometric center point.
  • Step S202: For each of the M individuals, the terminal obtains that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
  • FIG. 3 is a flowchart of a three-dimensional scene modeling method provided by still another embodiment of the present application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device.
  • The terminal device can be a mobile phone, tablet, positioning device, medical device, fitness device, etc.
  • The apparatus can also be a processor, single-chip microcomputer, MCU, etc. in the terminal device.
  • The following describes the three-dimensional scene modeling method with the terminal device as the executing entity. As shown in FIG. 3, before the terminal obtains the respective position coordinates of the M individuals according to their tags, the method includes the following steps:
  • Step S301 The terminal determines the types of M individuals to obtain N types of individual types, where N is an integer greater than or equal to 1.
  • The M individuals in the indoor scene may be of N types; that is, individuals of the same type may be multiple, and the individual models of each individual type are consistent.
  • To improve modeling efficiency, for individuals of the same type, the tag is placed at a fixed position on the individual.
  • Step S302 The terminal creates a model library.
  • The model library includes N individual models, and the N individual models correspond one-to-one to the N individual types.
  • This one-to-one correspondence facilitates the subsequent construction of the indoor 3D scene model.
  • Step S303 The terminal determines the respective tags of the M individuals according to the N individual models in the model library, the M individuals include N types of tags, and the N types of tags are in one-to-one correspondence with the N individual models.
  • According to the respective individual types of the M individuals, the respective tags of the M individuals are determined, where each kind of tag can be distinguished by writing an identifier into the tag.
  • The identifier can be the individual type's model number, name, digits, symbols, a code, etc.; the embodiments of the present invention do not limit the specific representation of the identifier.
  • FIG. 4 is a flowchart of a three-dimensional scene modeling method provided by yet another embodiment of the present application.
  • The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware.
  • The apparatus can be part or all of a terminal device, and the terminal device can be a mobile phone, tablet, positioning device, medical device, fitness device, etc.
  • The apparatus can also be a processor, single-chip microcomputer, MCU, etc. in the terminal device.
  • the method may also include the following steps:
  • Step S401: The terminal polls to obtain tag information.
  • The terminal polls for tag information according to the period at which the indoor 3D scene model needs to be updated.
  • The tag information may include the coordinates of the tags and the types of the tags.
  • Step S402 If the tag information includes an identifier of a newly-added type tag relative to the N type tags, the individual model corresponding to the identifier is added to the model library to update the model library.
  • If the obtained tag information includes the identifier of a newly added tag type, the individual model corresponding to that identifier is added to the model library to update the library, and step S403 is then performed; if the tag information does not include a newly added tag type, step S403 is performed directly.
  • Step S403 The terminal updates the indoor three-dimensional scene model according to the tag information and the model library.
  • The method for updating the indoor three-dimensional scene model according to the tag information and the model library is the method introduced in the foregoing embodiments; its specific content and details are not repeated here.
  • On the one hand, updating the indoor 3D scene model by polling for tag information realizes updating the model when no new individual type has been added indoors.
  • On the other hand, if the tag information includes an identifier of a tag type newly added relative to the N tag types, the model library is updated first and the indoor 3D scene model is then updated, which realizes updating the model when a new individual type is added indoors, thereby improving the accuracy of updates to the 3D scene model.
  • FIG. 5 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by an embodiment of the present application.
  • The apparatus can be implemented by software and/or hardware.
  • The apparatus may be part or all of a terminal device, such as a mobile phone, tablet, positioning device, medical device, or fitness device.
  • The apparatus can also be a processor, a single-chip microcomputer, a microcontroller unit (MCU), etc. in the terminal device.
  • the first obtaining module 51 is configured to obtain the respective position coordinates of the M individuals according to their respective tags, where M is an integer greater than or equal to 1.
  • the construction module 52 is used to construct an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
  • FIG. 6 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by another embodiment of the present application. As shown in FIG. 6, the three-dimensional scene modeling apparatus provided in this application further includes:
  • the first determining module 53 is configured to determine the types of M individuals to obtain N types of individual types, where N is an integer greater than or equal to 1.
  • the creation module 54 is used to create a model library.
  • the model library includes N individual models, and the N individual models have a one-to-one correspondence with the N types of individual types.
  • the second determining module 55 is configured to determine the respective tags of the M individuals according to the N individual models in the model library, the M individuals include N types of tags, and the N types of tags correspond to the N individual models one-to-one.
  • the first obtaining module 51 includes:
  • the first obtaining submodule 511 is configured to obtain, for each of the M individuals, the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point.
  • the second obtaining submodule 512 is configured to obtain, for each of the M individuals, that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
  • FIG. 7 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by another embodiment of the present application. As shown in FIG. 7, the three-dimensional scene modeling apparatus provided by this application further includes:
  • the second obtaining module 56 is used for polling to obtain tag information.
  • the first update module 57 is configured to, if the tag information includes an identifier of a newly added type tag relative to the N type tags, add an individual model corresponding to the identifier in the model library to update the model library.
  • the second update module 58 is used to update the indoor three-dimensional scene model according to the tag information and the model library.
  • FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in FIG. 8, the terminal device provided by this application includes:
  • a processor 81, a memory 82, a transceiver 83, and a computer program; the transceiver 83 implements the terminal device's transmission and reception of detection signals.
  • the computer program is stored in the memory 82 and is configured to be executed by the processor 81.
  • the program includes instructions for executing the above-mentioned three-dimensional scene modeling method. For the content and effect, please refer to the method embodiment.
  • The storage medium includes computer instructions; when the instructions are executed by a computer, the computer implements the above-mentioned three-dimensional scene modeling method.
  • the present application provides a computer program product, including computer instructions, when the instructions are executed by a computer, the computer realizes the above-mentioned three-dimensional scene modeling method.
  • a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • The aforementioned program can be stored in a computer-readable storage medium. When executed, the program performs the steps of the foregoing method embodiments; the aforementioned storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application provides a three-dimensional scene modeling method, apparatus, device, and storage medium, including: obtaining the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1; and constructing an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals. This not only realizes indoor three-dimensional scene modeling but also effectively reduces costs.

Description

Three-dimensional scene modeling method, apparatus, device, and storage medium
Technical Field
This application relates to the technical field of indoor positioning, and in particular to a three-dimensional scene modeling method, apparatus, device, and storage medium.
Background Art
In the prior art, indoor three-dimensional scene modeling methods usually model a physical scene by combining technologies such as computer vision, data fusion, visual navigation, and three-dimensional scene modeling; vision-based three-dimensional scene modeling techniques all model the multi-view collection and three-dimensional structure of a scene from large amounts of two-dimensional data.
However, for an indoor positioning system, the existing three-dimensional scene modeling techniques require adding other components such as drones, laser rangefinders, and robots, which is costly and difficult to implement.
Summary of the Invention
This application provides a three-dimensional scene modeling method, apparatus, device, and storage medium, which not only realize indoor three-dimensional scene modeling but also save costs.
In a first aspect, this application provides a three-dimensional scene modeling method, including:
obtaining the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1; and constructing an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
In this solution, by constructing the indoor three-dimensional scene model according to the respective position coordinates and individual models of the M individuals, indoor three-dimensional scene modeling is realized without adding other components such as drones, laser rangefinders, or robots, which effectively reduces costs.
Optionally, before obtaining the respective position coordinates of the M individuals according to their respective tags, the method further includes:
determining the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1;
creating a model library, where the model library includes N individual models, and the N individual models correspond one-to-one to the N individual types; and
determining the respective tags of the M individuals according to the N individual models in the model library, where the M individuals include N types of tags, and the N types of tags correspond one-to-one to the N individual models.
In this solution, by creating a model library according to the N individual types of the M individuals and determining each individual's tag according to the individual models in the library, the tag of every kind of individual is determined, so that the individual model of that kind of individual can in turn be determined from its tag.
Optionally, obtaining the respective position coordinates of the M individuals according to their respective tags includes:
for each of the M individuals, obtaining the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and, for each of the M individuals, obtaining that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
In this solution, obtaining an individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point improves the accuracy of the obtained position coordinates.
Optionally, the three-dimensional scene modeling method provided by this application further includes:
polling to obtain tag information; if the tag information includes an identifier of a tag type newly added relative to the N tag types, adding the individual model corresponding to that identifier to the model library to update the model library; and updating the indoor three-dimensional scene model according to the tag information and the model library.
In this solution, on the one hand, updating the indoor three-dimensional scene model by polling for tag information realizes updating the model when no new individual type has been added indoors; on the other hand, if the tag information includes an identifier of a tag type newly added relative to the N tag types, the model library is updated and the indoor three-dimensional scene model is then updated, which realizes updating the model when a new individual type is added indoors, thereby improving the accuracy of updates to the three-dimensional scene model.
The three-dimensional scene modeling apparatus, device, storage medium, and computer program product are introduced below; for their effects, refer to the effects described for the method, which are not repeated below.
In a second aspect, this application provides a three-dimensional scene modeling apparatus, including:
a first obtaining module, configured to obtain the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1; and
a construction module, configured to construct an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
Optionally, the three-dimensional scene modeling apparatus provided by this application further includes:
a first determining module, configured to determine the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1;
a creating module, configured to create a model library, where the model library includes N individual models, and the N individual models correspond one-to-one to the N individual types; and
a second determining module, configured to determine the respective tags of the M individuals according to the N individual models in the model library, where the M individuals include N types of tags, and the N types of tags correspond one-to-one to the N individual models.
Optionally, the first obtaining module includes:
a first obtaining submodule, configured to obtain, for each of the M individuals, the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and
a second obtaining submodule, configured to obtain, for each of the M individuals, that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
Optionally, the three-dimensional scene modeling apparatus provided by this application further includes:
a second obtaining module, configured to poll for tag information;
a first updating module, configured to, if the tag information includes an identifier of a tag type newly added relative to the N tag types, add the individual model corresponding to that identifier to the model library to update the model library; and
a second updating module, configured to update the indoor three-dimensional scene model according to the tag information and the model library.
In a third aspect, this application provides a device, including:
a processor and a memory, where the memory is configured to store computer-executable instructions so that the processor executes the instructions to implement the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
In a fourth aspect, this application provides a computer storage medium, where the storage medium includes computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
In a fifth aspect, this application provides a computer program product, including computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method of the first aspect or an optional manner of the first aspect.
This application provides a three-dimensional scene modeling method, apparatus, device, and storage medium. The method includes obtaining the respective position coordinates of M individuals according to their respective tags, where M is an integer greater than or equal to 1, and then constructing an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals. Since the indoor three-dimensional scene model is constructed according to the respective position coordinates and individual models of the M individuals, indoor three-dimensional scene modeling is realized and costs are effectively reduced.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a three-dimensional scene modeling method provided by an embodiment of this application;
FIG. 2 is a flowchart of a three-dimensional scene modeling method provided by another embodiment of this application;
FIG. 3 is a flowchart of a three-dimensional scene modeling method provided by still another embodiment of this application;
FIG. 4 is a flowchart of a three-dimensional scene modeling method provided by yet another embodiment of this application;
FIG. 5 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by another embodiment of this application;
FIG. 7 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by yet another embodiment of this application;
FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and accompanying drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can, for example, be implemented in orders other than those illustrated or described here. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
With the continuous development of modern society and the increasing number of large buildings, people spend more than 80% of their time each day in indoor environments (homes, offices, shopping malls, underground parking lots, tunnels, mines, etc.), and providing accurate indoor positioning and navigation services has become a research focus of the new generation of the information technology industry. At present, although two-dimensional planar maps can provide simple visualization, they become ineffective when the environment is complex, while indoor three-dimensional scene modeling in the prior art usually requires adding components such as drones and laser rangefinders, which is costly. To solve the above problems, this application provides a three-dimensional scene modeling method, apparatus, device, and storage medium.
Exemplary application scenarios of the embodiments of this application are introduced below.
In indoor scenes such as hospitals, nursing homes, factories, schools, exhibitions, museums, exhibition halls, underground pipelines and mine tunnels, smart buildings, and prisons, it may be necessary to precisely locate indoor people, items, and equipment; to realize real-time navigation based on mobile terminals in complex indoor environments; or to view, at any time, the movement trajectories of people and items within a certain period, which facilitates personnel post queries, personnel behavior analysis, and material dispatching, and can also be used for preventing people from getting lost, post management, material control, and so on.
Based on the above application scenarios, the technical solution of this application is described in detail below:
FIG. 1 is a flowchart of a three-dimensional scene modeling method provided by an embodiment of this application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device, and the terminal device can be a mobile phone, tablet, positioning device, medical device, fitness device, etc. The apparatus can also be a processor, single-chip microcomputer, microcontroller unit (MCU), etc. in the terminal device. The three-dimensional scene modeling method is described below with the terminal device as the executing entity. As shown in FIG. 1, the method includes the following steps:
Step S101: The terminal obtains the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1.
In an indoor scene, the individuals to be modeled may include M individuals, where the M individuals may include individuals of the same type or of different types; an individual may be an item, a device, a person, etc. The embodiments of the present invention do not limit the type or number of individuals.
Each of the M individuals carries its own tag, and the tags may be at different positions on the M individuals; the embodiments of the present invention do not limit the positions of the tags. According to the respective tags of the M individuals, the terminal can locate the positions of the tags through an indoor positioning system and thereby obtain the respective position coordinates of the M individuals. The embodiments of this application do not restrict the manner in which the position coordinates are obtained from the tags; for example, Bluetooth Low Energy (BLE) technology, Ultra-Wideband (UWB) technology, Radio Frequency Identification (RFID) technology, Zigbee technology, or Wireless Fidelity (WiFi) technology may be used, and accordingly the tags may be BLE tags, UWB tags, RFID tags, Zigbee tags, WiFi tags, etc. The embodiments of the present invention do not limit the tag type, as long as a tag's position can be obtained from the tag.
To obtain the respective position coordinates of the M individuals according to their respective tags, the terminal also needs to construct a three-dimensional map frame according to the indoor scene. Constructing the three-dimensional map frame can be implemented with any modeling software such as 3dMax or Unity3D; the embodiments of the present invention do not limit the implementation. The three-dimensional map frame may include the outer walls, the floor, and so on; a three-dimensional coordinate system can then be established according to the constructed map frame, and the respective position coordinates of the M individuals are obtained in this coordinate system according to their respective tags.
Step S102: The terminal constructs an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
An individual model is formed by the terminal modeling an individual according to parameters such as the individual's shape, size, and color, and is used in constructing the three-dimensional scene model: the terminal obtains the individual models of the M individuals and renders each individual's model at that individual's position coordinates, thereby constructing the indoor three-dimensional scene model.
In this solution, by constructing the indoor three-dimensional scene model according to the respective position coordinates and individual models of the M individuals, indoor three-dimensional scene modeling is realized without adding other components such as drones, laser rangefinders, or robots, which effectively reduces costs.
Optionally, on the basis of the above embodiment, to obtain the respective position coordinates of the M individuals according to their respective tags, FIG. 2 is a flowchart of a three-dimensional scene modeling method provided by another embodiment of this application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device (a mobile phone, tablet, positioning device, medical device, fitness device, etc.), or a processor, single-chip microcomputer, MCU, etc. in the terminal device. The method is described below with the terminal device as the executing entity. As shown in FIG. 2, step S101 may include:
Step S201: For each of the M individuals, the terminal obtains the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point.
Through the indoor positioning system, a three-dimensional coordinate system is established in the indoor scene, and the coordinates of each individual's tag are obtained according to the position of the tag on that individual in the indoor scene. The coordinates of the j-th individual's tag can be denoted position_j = (x_j, y_j, z_j), where (x_j, y_j, z_j) are the tag's coordinates in the indoor scene and j = 1, 2, 3, ..., M.
To construct the indoor three-dimensional scene model accurately, the position of an individual model in the scene model must match the individual's position in the indoor scene. To facilitate constructing the indoor three-dimensional scene model, in one possible implementation the model can be rendered at the coordinates of the individual's geometric center point; therefore, the relative coordinates between the tag and the individual's geometric center point must also be obtained. The relative coordinates between the j-th individual's tag and the j-th individual's geometric center point can be denoted diffPos_j = (diffx_j, diffy_j, diffz_j), where j = 1, 2, 3, ..., M.
Step S202: For each of the M individuals, the terminal obtains that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
From the coordinates of the j-th individual's tag, position_j = (x_j, y_j, z_j), and the relative coordinates between that tag and the j-th individual's geometric center point, diffPos_j = (diffx_j, diffy_j, diffz_j), the position coordinates of the j-th individual are obtained as realPosition_j = position_j - diffPos_j.
After an individual's position coordinates are obtained, the indoor three-dimensional scene model is constructed according to the individual position coordinates and the individual models. For example, after obtaining the position coordinates of the j-th individual, realPosition_j = position_j - diffPos_j, the individual model of the j-th individual is rendered at realPosition_j; rendering the individual models of all M individuals in this way constructs the indoor three-dimensional scene model.
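As a concrete illustration only (a minimal Python sketch under the definitions above, not the patented implementation; the numeric values are invented), the per-axis subtraction realPosition_j = position_j - diffPos_j can be written as:

```python
def individual_position(tag_coords, offset_to_center):
    """realPosition_j = position_j - diffPos_j, applied per axis.

    tag_coords:       (x_j, y_j, z_j), the tag's coordinates in the indoor frame.
    offset_to_center: (diffx_j, diffy_j, diffz_j), the tag's offset from the
                      individual's geometric center point.
    """
    return tuple(p - d for p, d in zip(tag_coords, offset_to_center))

# Invented example: a tag mounted 0.5 m above the geometric center of an
# individual, read at (3.0, 2.0, 1.5), gives a render point of (3.0, 2.0, 1.0).
print(individual_position((3.0, 2.0, 1.5), (0.0, 0.0, 0.5)))
```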
Optionally, to determine an individual's model from its tag, FIG. 3 is a flowchart of a three-dimensional scene modeling method provided by still another embodiment of this application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device (a mobile phone, tablet, positioning device, medical device, fitness device, etc.), or a processor, single-chip microcomputer, MCU, etc. in the terminal device. The method is described below with the terminal device as the executing entity. As shown in FIG. 3, before the terminal obtains the respective position coordinates of the M individuals according to their respective tags, the method includes the following steps:
Step S301: The terminal determines the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1.
The M individuals in the indoor scene may be of N types; that is, individuals of the same type may be multiple, and the individual models of each type are consistent. To improve the efficiency of three-dimensional scene modeling, for individuals of the same type, the tag is placed at a fixed position on the individual.
Step S302: The terminal creates a model library, where the model library includes N individual models, and the N individual models correspond one-to-one to the N individual types.
A model library is created according to the N individual types; the library includes N individual models that correspond one-to-one to the N individual types, which facilitates the subsequent construction of the indoor three-dimensional scene model.
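By way of illustration only, such a model library can be as simple as a mapping from type identifiers to individual models; the following Python sketch uses invented type names and mesh files, not anything specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class IndividualModel:
    """Placeholder for a model built from an individual's shape, size, color, etc."""
    mesh_file: str

# Hypothetical model library: one entry per individual type (N entries); the
# keys play the role of the identifiers later written into the tags.
model_library = {
    "bed": IndividualModel("bed.obj"),
    "wheelchair": IndividualModel("wheelchair.obj"),
    "monitor": IndividualModel("monitor.obj"),
}
```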
Step S303: The terminal determines the respective tags of the M individuals according to the N individual models in the model library, where the M individuals include N types of tags, and the N types of tags correspond one-to-one to the N individual models.
The respective tags of the M individuals are determined according to the respective individual types of the M individuals, where each kind of tag can be distinguished by writing an identifier into the tag. The identifier can be the individual type's model number, name, digits, symbols, a code, etc.; the embodiments of the present invention do not limit the specific representation of the identifier. For example, the different tag types can be denoted by the symbol tag_k, where k = 1, 2, 3, ..., N, indicating that the M tags fall into N classes, each class of tags corresponding to one individual model.
Therefore, in step S201 of the above embodiment, when the coordinates of an individual's tag are obtained, the type of that individual's tag can be obtained as well; the reading for the j-th individual can be denoted position_j = (x_j, y_j, z_j, tag_k), where (x_j, y_j, z_j) are the tag's coordinates in the indoor scene and tag_k is the type of the individual's tag, with j = 1, 2, 3, ..., M and k = 1, 2, 3, ..., N. In S202 of the above embodiment, the individual's position coordinates can be obtained by using the tag coordinates position_j = (x_j, y_j, z_j, tag_k) and the tag type tag_k to obtain the relative coordinates between the j-th individual's tag and the j-th individual's geometric center point, diffPos_k = (diffx_k, diffy_k, diffz_k), yielding the position coordinates of the j-th individual as realPosition_j = position_j - diffPos_k.
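Continuing the illustrative sketches above (model_library and all offset values are invented for the example, not taken from the patent), the type identifier tag_k read together with the tag coordinates selects both the per-type offset diffPos_k and the individual model:

```python
# Per-type tag-to-center offsets diffPos_k, keyed like model_library above;
# the values are invented for illustration.
type_offsets = {
    "bed": (0.0, 0.0, 0.5),
    "wheelchair": (0.1, 0.0, 0.25),
    "monitor": (0.0, 0.125, 0.0),
}

def typed_position(tag_reading):
    """tag_reading: (x_j, y_j, z_j, tag_k) as read by the positioning system."""
    x, y, z, tag_k = tag_reading
    dx, dy, dz = type_offsets[tag_k]  # diffPos_k for this tag type
    # realPosition_j = position_j - diffPos_k; the model comes from the library.
    return (x - dx, y - dy, z - dz), model_library[tag_k]
```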
In this solution, by creating a model library according to the N individual types of the M individuals and determining each individual's tag according to the individual models in the library, the tag type of every kind of individual is determined, so that the individual model of that kind of individual can be determined from its tag, which improves the efficiency of indoor three-dimensional scene modeling.
Optionally, in application scenarios, individuals in the indoor scene may be deleted, moved, or added. To monitor the state of individuals in the indoor scene, FIG. 4 is a flowchart of a three-dimensional scene modeling method provided by yet another embodiment of this application. The method can be executed by a three-dimensional scene modeling apparatus, which can be implemented by software and/or hardware; for example, the apparatus can be part or all of a terminal device (a mobile phone, tablet, positioning device, medical device, fitness device, etc.), or a processor, single-chip microcomputer, MCU, etc. in the terminal device. The method is described below with the terminal device as the executing entity. As shown in FIG. 4, the method may further include the following steps:
Step S401: The terminal polls to obtain tag information.
The terminal polls for tag information according to the period at which the indoor three-dimensional scene model needs to be updated; the tag information may include the coordinates of the tags and the types of the tags.
Step S402: If the tag information includes an identifier of a tag type newly added relative to the N tag types, the individual model corresponding to that identifier is added to the model library to update the model library.
If the obtained tag information includes the identifier of a newly added tag type, the individual model corresponding to that identifier is added to the model library to update the library, and step S403 is then performed; if the tag information does not include the identifier of a newly added tag type, step S403 is performed directly.
Step S403: The terminal updates the indoor three-dimensional scene model according to the tag information and the model library.
The method for updating the indoor three-dimensional scene model according to the tag information and the model library is the method introduced in the foregoing embodiments; its specific content and details are not repeated here.
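A minimal sketch of steps S401 to S403 might look as follows; read_tags and update_scene are hypothetical hooks standing in for the positioning system and the scene renderer, the polling period is invented, and IndividualModel is reused from the earlier sketch. None of this is the patented implementation:

```python
import time

def poll_and_update(read_tags, update_scene, model_library, type_offsets,
                    period_s=60.0):
    """Sketch of steps S401-S403; read_tags() returns (x, y, z, tag_k) readings."""
    while True:
        readings = read_tags()                   # S401: poll tag information
        for _, _, _, tag_k in readings:
            if tag_k not in model_library:       # S402: a new tag type appeared
                # Add the individual model corresponding to the new identifier
                # (mesh file name invented for the sketch).
                model_library[tag_k] = IndividualModel(f"{tag_k}.obj")
                type_offsets.setdefault(tag_k, (0.0, 0.0, 0.0))
        update_scene(readings, model_library)    # S403: refresh the 3D scene model
        time.sleep(period_s)
```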
In this solution, on the one hand, updating the indoor three-dimensional scene model by polling for tag information realizes updating the model when no new individual type has been added indoors; on the other hand, if the tag information includes an identifier of a tag type newly added relative to the N tag types, the model library is updated and the indoor three-dimensional scene model is then updated, which realizes updating the model when a new individual type is added indoors, thereby improving the accuracy of updates to the three-dimensional scene model.
The three-dimensional scene modeling apparatus, device, storage medium, and computer program product are introduced below; for their effects, refer to the effects described for the method, which are not repeated below.
FIG. 5 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by an embodiment of this application. The apparatus can be implemented by software and/or hardware; for example, it can be part or all of a terminal device (a mobile phone, tablet, positioning device, medical device, fitness device, etc.), or a processor, single-chip microcomputer, microcontroller unit (MCU), etc. in the terminal device. As shown in FIG. 5, the three-dimensional scene modeling apparatus provided by this embodiment of the application may include:
a first obtaining module 51, configured to obtain the respective position coordinates of M individuals according to the respective tags of the M individuals, where M is an integer greater than or equal to 1; and
a construction module 52, configured to construct an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
Optionally, on the basis of the above embodiment, FIG. 6 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by another embodiment of this application. As shown in FIG. 6, the apparatus further includes:
a first determining module 53, configured to determine the types of the M individuals to obtain N individual types, where N is an integer greater than or equal to 1;
a creating module 54, configured to create a model library, where the model library includes N individual models, and the N individual models correspond one-to-one to the N individual types; and
a second determining module 55, configured to determine the respective tags of the M individuals according to the N individual models in the model library, where the M individuals include N types of tags, and the N types of tags correspond one-to-one to the N individual models.
Optionally, as shown in FIG. 6, the first obtaining module 51 includes:
a first obtaining submodule 511, configured to obtain, for each of the M individuals, the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and
a second obtaining submodule 512, configured to obtain, for each of the M individuals, that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
Optionally, on the basis of the above embodiment, FIG. 7 is a schematic structural diagram of a three-dimensional scene modeling apparatus provided by yet another embodiment of this application. As shown in FIG. 7, the apparatus further includes:
a second obtaining module 56, configured to poll for tag information;
a first updating module 57, configured to, if the tag information includes an identifier of a tag type newly added relative to the N tag types, add the individual model corresponding to that identifier to the model library to update the model library; and
a second updating module 58, configured to update the indoor three-dimensional scene model according to the tag information and the model library.
This application provides a terminal device. FIG. 8 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in FIG. 8, the terminal device provided by this application includes:
a processor 81, a memory 82, a transceiver 83, and a computer program, where the transceiver 83 implements the terminal device's transmission and reception of detection signals, and the computer program is stored in the memory 82 and configured to be executed by the processor 81. The computer program includes instructions for executing the above three-dimensional scene modeling method; for its content and effects, refer to the method embodiments.
This application provides a computer storage medium, where the storage medium includes computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method described above.
This application provides a computer program product, including computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method described above.
A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A three-dimensional scene modeling method, characterized by comprising:
    obtaining the respective position coordinates of M individuals according to the respective tags of the M individuals, wherein M is an integer greater than or equal to 1; and
    constructing an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
  2. The method according to claim 1, characterized in that before the obtaining of the respective position coordinates of the M individuals according to the respective tags of the M individuals, the method further comprises:
    determining the types of the M individuals to obtain N individual types, wherein N is an integer greater than or equal to 1;
    creating a model library, wherein the model library comprises N individual models, and the N individual models correspond one-to-one to the N individual types; and
    determining the respective tags of the M individuals according to the N individual models in the model library, wherein the M individuals comprise N types of tags, and the N types of tags correspond one-to-one to the N individual models.
  3. The method according to claim 1 or 2, characterized in that the obtaining of the respective position coordinates of the M individuals according to the respective tags of the M individuals comprises:
    for each of the M individuals, obtaining the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and
    for each of the M individuals, obtaining that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
  4. The method according to claim 2, characterized by further comprising:
    polling to obtain tag information;
    if the tag information comprises an identifier of a tag type newly added relative to the N types of tags, adding the individual model corresponding to that identifier to the model library to update the model library; and
    updating the indoor three-dimensional scene model according to the tag information and the model library.
  5. A three-dimensional scene modeling apparatus, characterized by comprising:
    a first obtaining module, configured to obtain the respective position coordinates of M individuals according to the respective tags of the M individuals, wherein M is an integer greater than or equal to 1; and
    a construction module, configured to construct an indoor three-dimensional scene model according to the respective position coordinates of the M individuals and the respective individual models of the M individuals.
  6. The apparatus according to claim 5, characterized by further comprising:
    a first determining module, configured to determine the types of the M individuals to obtain N individual types, wherein N is an integer greater than or equal to 1;
    a creating module, configured to create a model library, wherein the model library comprises N individual models, and the N individual models correspond one-to-one to the N individual types; and
    a second determining module, configured to determine the respective tags of the M individuals according to the N individual models in the model library, wherein the M individuals comprise N types of tags, and the N types of tags correspond one-to-one to the N individual models.
  7. The apparatus according to claim 5 or 6, characterized in that the first obtaining module comprises:
    a first obtaining submodule, configured to obtain, for each of the M individuals, the coordinates of that individual's tag and the relative coordinates between the tag and the individual's geometric center point; and
    a second obtaining submodule, configured to obtain, for each of the M individuals, that individual's position coordinates according to the coordinates of its tag and the relative coordinates between the tag and the individual's geometric center point.
  8. The apparatus according to claim 6, characterized by further comprising:
    a second obtaining module, configured to poll for tag information;
    a first updating module, configured to, if the tag information comprises an identifier of a tag type newly added relative to the N types of tags, add the individual model corresponding to that identifier to the model library to update the model library; and
    a second updating module, configured to update the indoor three-dimensional scene model according to the tag information and the model library.
  9. A device, characterized by comprising: a processor and a memory,
    wherein the memory is configured to store computer-executable instructions so that the processor executes the instructions to implement the three-dimensional scene modeling method according to any one of claims 1 to 4.
  10. A computer storage medium, characterized in that the storage medium comprises computer instructions that, when executed by a computer, cause the computer to implement the three-dimensional scene modeling method according to any one of claims 1 to 4.
PCT/CN2019/075593 2019-02-20 2019-02-20 Three-dimensional scene modeling method, apparatus, device, and storage medium WO2020168493A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000292.XA 2019-02-20 2019-02-20 Three-dimensional scene modeling method, apparatus, device, and storage medium
PCT/CN2019/075593 2019-02-20 2019-02-20 Three-dimensional scene modeling method, apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/075593 WO2020168493A1 (zh) 2019-02-20 2019-02-20 三维场景建模方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020168493A1 (zh)

Family

ID=67136915

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075593 Three-dimensional scene modeling method, apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN109997172A (zh)
WO (1) WO2020168493A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110779479B * 2019-09-02 2022-01-14 腾讯科技(深圳)有限公司 An object processing method applied to indoor maps
CN114339601B * 2020-10-09 2023-12-26 美的集团股份有限公司 UWB-based automatic network configuration method and apparatus


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9305196B2 (en) * 2012-05-22 2016-04-05 Trimble Navigation Limited Entity tracking
CN103198392A * 2013-04-02 2013-07-10 深圳供电局有限公司 Electric power material warehousing management method and system
US9571986B2 (en) * 2014-05-07 2017-02-14 Johnson Controls Technology Company Systems and methods for detecting and using equipment location in a building management system
CN106909215B * 2016-12-29 2020-05-12 深圳市皓华网络通讯股份有限公司 Three-dimensional visualization firefighting command system based on precise positioning and augmented reality
CN108898879A * 2018-07-05 2018-11-27 北京易路行技术有限公司 Parking data detection system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101363910A * 2008-09-26 2009-02-11 黄以华 Radio-frequency positioning method based on Bayesian theory
CN104637090A * 2015-02-06 2015-05-20 南京大学 Indoor scene modeling method based on a single image
CN107978012A * 2017-11-23 2018-05-01 联想(北京)有限公司 Data processing method and electronic device

Also Published As

Publication number Publication date
CN109997172A (zh) 2019-07-09

Similar Documents

Publication Publication Date Title
Chen et al. A BIM-based location aware AR collaborative framework for facility maintenance management.
US11436388B2 (en) Methods and apparatus for procedure tracking
CN106780735B Semantic map construction method and apparatus, and robot
US11398088B2 (en) Systems, methods and apparatuses to generate a fingerprint of a physical location for placement of virtual objects
US9728009B2 (en) Augmented reality based management of a representation of a smart environment
CN106296815B Construction and display method for an interactive three-dimensional digital city
US20100228602A1 (en) Event information tracking and communication tool
CN107655480A Robot positioning and navigation method, system, storage medium, and robot
KR20160033495A Apparatus and method for arranging furniture using augmented reality
US20170256072A1 (en) Information processing system, information processing method, and non-transitory computer-readable storage medium
EP2974509A1 (en) Personal information communicator
WO2018076777A1 Robot positioning method and apparatus, and robot
CN109996220A Method, apparatus, and storage medium for finding a mobile terminal based on Bluetooth
WO2020168493A1 Three-dimensional scene modeling method, apparatus, device, and storage medium
CN113971628A Image matching method and apparatus, and computer-readable storage medium
CN108734734A Indoor positioning method and system
CN112150072A Intelligent-robot-based asset inventory method and apparatus, electronic device, and medium
CN108648266B Management method and system for a fully transparent scanned 3D space model
DE102022122084A1 Environment mapping based on UWB tags
CN105009114A Predictively presenting search capabilities
CN107958040B Intelligent system for indoor item positioning, management, and analysis
Xu et al. A novel radio frequency identification three-dimensional indoor positioning system based on trilateral positioning algorithm
Kawanishi et al. Parallel line-based structure from motion by using omnidirectional camera in textureless scene
CN103457809A Mobile device, system, and information acquisition and communication method for equipment monitoring
Wang et al. Towards rich, portable, and large-scale pedestrian data collection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916301

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19916301

Country of ref document: EP

Kind code of ref document: A1