CN113838209A - Information management method of target environment and display method of related augmented reality - Google Patents


Info

Publication number
CN113838209A
Authority
CN
China
Prior art keywords
target
information
voxel
semantic information
voxels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111056910.8A
Other languages
Chinese (zh)
Inventor
盛崇山
Current Assignee
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202111056910.8A priority Critical patent/CN113838209A/en
Publication of CN113838209A publication Critical patent/CN113838209A/en
Priority to PCT/CN2022/074966 priority patent/WO2023035548A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an information management method for a target environment and a related augmented reality display method. The information management method for the target environment includes: acquiring three-dimensional information of the target environment; performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment; and determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel. In this way, information of the real environment can be merged into the semantic information.

Description

Information management method of target environment and display method of related augmented reality
Technical Field
The present application relates to the field of augmented reality technologies, and in particular, to an information management method for a target environment, and a display method, device, and storage medium for related augmented reality.
Background
Augmented Reality (AR) is a technology that fuses virtual information with the real world, drawing on a wide range of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing.
However, augmented reality display also requires various kinds of information about the real world, so that the displayed content can be richer and better meet users' needs.
Therefore, obtaining more information about the real world is of great significance for promoting the further development of augmented reality technology.
Disclosure of Invention
The application provides an information management method for a target environment and a related augmented reality display method.
A first aspect of the present application provides a method for managing information of a target environment, where the method includes: acquiring three-dimensional information of a target environment; carrying out voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to a target environment; semantic information of each target voxel is determined according to the attribute information of the target environment corresponding to each target voxel.
Therefore, the target voxels obtained by voxel division of the three-dimensional information of the target environment are obtained, and the attribute information of the target environment corresponding to each target voxel is determined as the semantic information of each target voxel, so that the semantic information of the target voxel can reflect the attribute information of the target environment, and the effect of integrating the information of the real environment into the semantic information is realized.
The attribute information of the target environment comprises at least one of spatial attribute information, social attribute information and object attribute information; the semantic information of the target voxel corresponds to the attribute information of the target environment and comprises at least one of spatial semantic information, social semantic information and object semantic information. The determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel includes: determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel; and/or determining the social semantic information of each target voxel according to the social attribute information of the target environment corresponding to each target voxel; and/or determining the object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel.
Therefore, by separately specifying the spatial attribute information, the social attribute information, and the object attribute information in the attribute information of the target environment, the spatial semantic information, the social semantic information, and the object semantic information of the target voxel can be specified correspondingly.
The spatial attribute information comprises real path information and object blocking information; the social attribute information comprises region category information, and the object attribute information comprises object material information. The spatial semantic information comprises path semantic information and blocking semantic information, the social semantic information comprises category semantic information, and the object semantic information comprises material semantic information. The determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel includes: determining the path semantic information and the blocking semantic information of each target voxel according to the real path information and the object blocking information of the target environment corresponding to each target voxel. The determining the social semantic information of each target voxel according to the social attribute information of the target environment corresponding to each target voxel includes: determining the category semantic information of each target voxel according to the region category information of the target environment corresponding to each target voxel. The determining the object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel includes: determining the material semantic information of each target voxel according to the object material information of the target environment corresponding to each target voxel.
Therefore, by determining the real path information and the object blocking information of the target environment corresponding to each target voxel, the path semantic information and the blocking semantic information of the target voxel can be determined. Furthermore, by determining the region category information of the target environment corresponding to each target voxel, the category semantic information of each target voxel can be determined accordingly. In addition, by determining the object material information of the target environment corresponding to each target voxel, the material semantic information of each target voxel can be determined accordingly.
Wherein the determining the path semantic information of each target voxel according to the real path information of the target environment corresponding to each target voxel includes: determining intersection target voxels at intersections of different paths in a target environment, and determining path semantic information of the intersection target voxels based on real path information of the different paths at the intersections.
Therefore, the path semantic information of the intersection target voxels is determined based on the real path information of different paths at the intersection, so that the path semantic information of the intersection target voxels can contain the semantic information of the intersection path, and the subsequent path information can be conveniently searched.
The three-dimensional information comprises point cloud information or mesh information. Performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment includes: performing voxel division on the point cloud information or the mesh information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment; determining whether each original voxel meets the point cloud partition requirement or the mesh partition requirement; and taking the original voxels that meet the requirement as the target voxels corresponding to the target environment.
Therefore, by determining whether each original voxel meets the point cloud partition requirement or the mesh partition requirement, the original voxels that do not meet the requirement can be excluded, so that the number of target voxels can be reduced and the occupation of storage space can be reduced.
The taking the original voxels that meet the requirement as the target voxels corresponding to the target environment includes: determining, among the original voxels that meet the requirement, the surrounded original voxels that are enclosed by other original voxels; and taking the original voxels other than the surrounded original voxels as the target voxels corresponding to the target environment.
By taking the original voxels other than the surrounded original voxels as the target voxels corresponding to the target environment, the occupation of storage space can be reduced when the target voxels are stored.
The performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment may alternatively include: performing voxel division on the point cloud information or the mesh information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment; determining, among the original voxels, the surrounded original voxels that are enclosed by other original voxels; and taking the original voxels other than the surrounded original voxels as the several target voxels corresponding to the target environment.
Therefore, by not taking the surrounded original voxels as target voxels, the occupation of storage space can be reduced when the target voxels are stored.
After determining the semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel, the information management method of the target environment further includes: acquiring other semantic information related to the target environment; and storing the other semantic information in the target voxels corresponding to the target environment.
Therefore, by storing the other semantic information in the target voxels corresponding to the target environment, the target voxels can also carry semantic information associated with the target environment as a whole.
A second aspect of the present application provides an augmented reality display method, including: obtaining semantic information of a target voxel in a target environment, wherein the semantic information of the target voxel is obtained by the information management method of the target environment described in the first aspect; and displaying the semantic information at the corresponding position of the target voxel in the target environment.
Therefore, the semantic information is displayed at the corresponding position of the target voxel of the target environment, so that a user can conveniently and quickly know the target environment by looking up the semantic information.
The third aspect of the present application further provides an augmented reality display method, including: obtaining semantic information of a target voxel in the environment where a virtual object is located, wherein the semantic information of the target voxel is obtained by the information management method of the target environment described in the first aspect; determining a corresponding behavior of the virtual object based on the semantic information of the target voxel; and displaying the corresponding behavior of the virtual object.
Thus, by controlling the behavior of the virtual object based on the semantic information of the target voxel, the behavior of the virtual object may be made more natural and realistic.
A fourth aspect of the present application provides an electronic device, which includes a processor and a memory coupled to each other, wherein the processor is configured to execute a computer program stored in the memory to implement the information management method of the target environment described in the first aspect or the augmented reality display method described in the second and third aspects.
A fifth aspect of the present application provides a computer-readable storage medium, on which program instructions are stored, and the program instructions, when executed by a processor, implement the information management method of the target environment described in the first aspect above, or the augmented reality display method described in the second and third aspects above.
According to the scheme, the target voxels obtained by voxel division of the three-dimensional information of the target environment are obtained, and the attribute information of the target environment corresponding to each target voxel is determined to be the semantic information of each target voxel, so that the semantic information of the target voxel can reflect the attribute information of the target environment, and the effect of integrating the information of the real environment into the semantic information is achieved.
Drawings
FIG. 1 is a first flowchart of an embodiment of a method for managing information in a target environment of the present application;
FIG. 2 is a second flowchart of an embodiment of a method for managing information in a target environment of the present application;
FIG. 3 is a flow chart illustrating an information management method of a target environment according to another embodiment of the present application;
FIG. 4 is a schematic flowchart illustrating an embodiment of an augmented reality display method according to the present application;
FIG. 5 is a schematic flowchart of another embodiment of an augmented reality display method according to the present application;
FIG. 6 is a block diagram of an embodiment of an information management apparatus in a target environment of the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an information management method of a target environment according to an embodiment of the present application. Specifically, the method may include the steps of:
step S11: and acquiring three-dimensional information of the target environment.
The three-dimensional information of the target environment may be considered as three-dimensional data that is obtained based on the target environment and that can express the situation of the target environment. The three-dimensional information is, for example, point cloud information, mesh information, and the like created based on the target environment. The target environment may be any environment in the real world.
In one embodiment, image information of the target environment may be collected to generate a high-precision map, and the point cloud information or mesh information of the target environment is then obtained based on the high-precision map.
Step S12: and carrying out voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment.
After the three-dimensional information of the target environment is obtained, the three-dimensional information may be subjected to voxel division, so as to obtain a plurality of target voxels corresponding to the target environment. Voxel division may, for example, rasterize the three-dimensional information to obtain the target voxels, e.g., by rasterizing the point cloud information or the mesh information.
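As an illustrative sketch (not the method claimed here), rasterizing point cloud information into voxels can be as simple as snapping each point to the integer grid cell that contains it; the function name, grid size, and data layout below are all assumptions:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group 3D points by the integer grid cell (voxel) that contains them."""
    indices = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(int)
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return voxels

# Three points with voxel_size = 1.0: the first two fall into voxel (0, 0, 0),
# the third into voxel (1, 0, 0).
voxels = voxelize([[0.1, 0.2, 0.3], [0.15, 0.25, 0.35], [1.2, 0.1, 0.1]], 1.0)
```

In practice a real pipeline would use an octree or a dedicated voxel grid structure, but the grouping step is the same idea.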
Step S13: semantic information of each target voxel is determined according to the attribute information of the target environment corresponding to each target voxel.
The attribute information of the target environment may be regarded as information that can be interpreted by a person in the target environment. It is to be understood that information that is related to the target environment and can be known to a person can be regarded as attribute information of the target environment. In one embodiment, the attribute information of the target environment may include attribute information automatically generated by the system, and attribute information determined by human input.
In one embodiment, the attribute information of the target environment may include at least one of spatial attribute information, social attribute information, and object attribute information. Correspondingly, the semantic information of the target voxel may correspond to attribute information of the target environment, and the semantic information of the target voxel may also include at least one of spatial semantic information, social semantic information, and object semantic information.
The spatial attribute information may be regarded as information characterizing the space in which the target environment is located. For example, if the target environment is a square, the spatial attribute information may be spatial information such as the position, size, and area of the square, which region the square belongs to, which roads it connects to, and the like. Social attribute information may be regarded as various human-defined, person-related information in the target environment, for example, that the target environment is a hazardous area, an area where noise is prohibited, and so on. The object attribute information may be regarded as information characterizing an object present in the target environment. For example, if a road paved with gravel exists in the target environment, the object attribute information may be gravel. As another example, if a sculpture exists in the target environment, the object attribute information may be the material, shape, and the like of the sculpture. If a transparent block exists in the target environment, its attribute information may further include the transparent material and the like.
Each target voxel is part of the target environment, and each target voxel can also be individually distinguished within the target environment. Therefore, the semantic information of each target voxel may include attribute information of the entire target environment, and may also include the specific attribute information of the part of the target environment corresponding to that voxel. For example, if the target environment is a corridor, the semantic information of all target voxels corresponding to the corridor may include the attribute information of the corridor; the target voxels corresponding to the passage in the corridor may further include the semantic information of the corridor passage, and the target voxels corresponding to the walls of the corridor may further include the semantic information of the corridor walls.
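The corridor example can be sketched as follows; the voxel ids and label sets are hypothetical, and labels here are plain strings rather than any particular semantic encoding:

```python
def assign_semantics(voxel_ids, environment_labels, specific_labels):
    """Every voxel inherits the labels of the whole environment; voxels with a
    more specific role (passage, wall, ...) get extra labels of their own."""
    return {
        vid: set(environment_labels) | set(specific_labels.get(vid, ()))
        for vid in voxel_ids
    }

# A corridor: one voxel belongs to the passage, another to a wall.
semantics = assign_semantics(
    voxel_ids=[(0, 0, 0), (0, 1, 0)],
    environment_labels={"corridor"},
    specific_labels={(0, 0, 0): {"passage"}, (0, 1, 0): {"wall"}},
)
```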
Therefore, the target voxels obtained by voxel division of the three-dimensional information of the target environment are obtained, and the attribute information of the target environment corresponding to each target voxel is determined as the semantic information of each target voxel, so that the semantic information of the target voxel can reflect the attribute information of the target environment, and the effect of integrating the information of the real environment into the semantic information is realized.
Referring to fig. 2, fig. 2 is a second flow chart of an embodiment of an information management method of a target environment of the present application. In this embodiment, the three-dimensional information mentioned in the present application includes point cloud information or mesh information of the target environment. The step of performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment includes steps S121 to S123.
Step S121: and carrying out voxel division on the point cloud information or the grid information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment.
The preset division method may be a general rasterization method, which is not limited in this application.
Step S122: it is determined whether each of the original voxels meets a point cloud partitioning requirement or a meshing requirement.
Among the obtained original voxels, there may be some voxels that do not satisfy the requirement; therefore, it is necessary to determine whether each original voxel satisfies the point cloud partition requirement or the mesh partition requirement, so as to exclude the original voxels that do not satisfy the requirement. The point cloud partition requirement is that the amount of point cloud information contained in the original voxel is not less than a preset threshold, and the mesh partition requirement is that the original voxel contains at least one mesh face or patch. The original voxels that do not satisfy the requirement may be regarded as voxels that cannot provide sufficient information, and therefore need to be excluded.
Step S123: and taking the original voxels meeting the requirements as target voxels corresponding to the target environment.
The original voxels that meet the requirement can support the subsequent determination of the semantic information corresponding to them, so they are taken as the target voxels corresponding to the target environment.
Therefore, by determining whether each original voxel meets the point cloud partition requirement or the mesh partition requirement, the original voxels that do not meet the requirement can be excluded, so that the number of target voxels and the occupation of storage space can be reduced.
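Assuming the point cloud partition requirement is a minimum point count per voxel (an assumption consistent with the description above; the threshold and data layout are hypothetical), the filtering step might look like:

```python
def filter_voxels(voxels, min_points):
    """Keep only the original voxels whose point count meets the partition
    requirement; sparsely populated voxels carry too little information."""
    return {vid: pts for vid, pts in voxels.items() if len(pts) >= min_points}

# Two original voxels; the second holds only one point and is excluded.
raw = {(0, 0, 0): [[0.1, 0.2, 0.3], [0.2, 0.2, 0.3]], (1, 0, 0): [[1.2, 0.1, 0.1]]}
targets = filter_voxels(raw, min_points=2)
```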
In a specific embodiment, the "original voxel meeting the requirement as the target voxel corresponding to the target environment" mentioned in the above step may specifically include step S1231 and step S1232.
Step S1231: among the original voxels that satisfy the requirement, surrounding original voxels surrounded by other original voxels are determined.
A surrounded original voxel is an original voxel all six faces of which adjoin other original voxels.
Step S1232: the other original voxels excluding the original voxel will be the target voxels corresponding to the target environment.
Since a surrounded original voxel may not be visible to the user in the subsequent augmented reality display, it need not be stored; that is, the surrounded original voxels are not taken as target voxels.
Therefore, by taking the original voxels other than the surrounded original voxels as the target voxels corresponding to the target environment, the occupation of storage space can be reduced when the target voxels are stored.
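The six-face enclosure test described above can be sketched directly on a set of occupied voxel indices (a sketch, not the claimed implementation):

```python
FACE_NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def drop_surrounded(voxel_ids):
    """Remove voxels all six face-neighbors of which are occupied: they are
    enclosed on every side and can never be visible to the user."""
    occupied = set(voxel_ids)

    def is_surrounded(v):
        x, y, z = v
        return all((x + dx, y + dy, z + dz) in occupied
                   for dx, dy, dz in FACE_NEIGHBORS)

    return {v for v in occupied if not is_surrounded(v)}

# A solid 3x3x3 cube of voxels: only the center (1, 1, 1) is fully enclosed.
cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
kept = drop_surrounded(cube)
```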
In another embodiment, if the semantic information of the original voxels adjoining a surrounded original voxel includes transparent-material semantics, the surrounded original voxel may still be taken as a target voxel.
In a specific embodiment, before the step of determining whether each original voxel meets the point cloud partition requirement or the mesh partition requirement, the surrounded original voxels enclosed by other original voxels may first be identified among the original voxels, and only the remaining original voxels are then checked against the point cloud partition requirement or the mesh partition requirement. That is, in this embodiment, the surrounded original voxels do not take part in the subsequent requirement check. This reduces the number of original voxels to be checked, which speeds up the information management method of the target environment; and since the surrounded original voxels are not used in the subsequent check, they cannot become target voxels, so the occupation of storage space is reduced when the target voxels are stored.
In one embodiment, the step of "performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment" mentioned in the above step specifically includes steps S124 to S126.
Step S124: and carrying out voxel division on the point cloud information or the grid information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment.
For a detailed description of this step, please refer to step S121 above, which is not described herein again.
Step S125: surrounding original voxels surrounded by other original voxels are determined in the original voxels.
In this embodiment, a surrounded original voxel enclosed by other original voxels is an original voxel all six faces of which adjoin other original voxels.
Step S126: the other original voxels excluding the original voxel will be taken as several target voxels corresponding to the target environment.
Since a surrounded original voxel may not be visible to the user in the subsequent augmented reality display, the surrounded original voxels are not taken as target voxels, so that the occupation of storage space can be reduced when the target voxels are stored.
In one embodiment, the "determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel" mentioned in the above step specifically includes: and determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel.
The spatial attribute information of the target environment may be information representing the space in which the target environment is located. For example, suppose the target environment is a road with a shop alongside it; if the part of the target environment corresponding to a target voxel is the wall surface of the shop, the spatial semantic information of that target voxel can be determined, based on the space where the wall surface is located, to be the shop wall of a certain road.
In one embodiment, the spatial attribute information includes real path information and object blocking information, and correspondingly, the spatial semantic information includes path semantic information and blocking semantic information. The real path information may be regarded as the road and passage information relevant to the target environment. The path semantic information may be regarded as semantic information capable of representing road and passage information in the real environment; for example, the road to which the target environment corresponding to a target voxel belongs, the links that road connects to, the destinations it can lead to, and the like. The object blocking information may be regarded as information on whether an object passing through the space of the target environment corresponding to a target voxel would collide with something. For example, if the space of the target environment corresponding to a certain target voxel contains a roadside fire hydrant, anything passing through that space may be considered to collide with the fire hydrant, and the corresponding blocking semantic information may be that the region is blocked. At this time, the determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel may specifically be: determining the path semantic information and the blocking semantic information of each target voxel according to the real path information and the object blocking information of the target environment corresponding to each target voxel.
Therefore, by determining the real path information and the object blocking information of the target environment corresponding to the target voxel, the path semantic information and the blocking semantic information of the target voxel can be determined.
In one embodiment, the intersection of different roads or passages may be regarded as belonging to any one of the roads or passages that meet there. For example, the intersection of road A and road B may be regarded as part of road A or part of road B. As another example, if passage A is a hall corridor and passage B is a stair passage, the junction of passage A and passage B may be regarded as part of passage A or part of passage B. Target voxels located at the junction of different paths are defined as intersection target voxels. For an intersection target voxel, the step of determining the path semantic information may specifically be: determining the intersection target voxels at the intersections of different paths in the target environment, and determining the path semantic information of the intersection target voxels based on the real path information of the different paths at the intersection. Specifically, it is first determined which roads the intersection where an intersection target voxel is located may belong to, and the path semantic information of those roads is then used as the path semantic information of the intersection target voxel.
In this way, because the path semantic information of an intersection target voxel is determined based on the real path information of the different paths at the intersection, it can contain the semantic information of all intersecting paths, which facilitates subsequent lookup of path information.
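The intersection handling above can be sketched as follows: an intersection target voxel simply inherits the path semantics of every road meeting at the junction. The function and dictionary keys are illustrative assumptions, not from the patent.

```python
def intersection_path_semantics(roads_at_junction):
    """roads_at_junction: real path information of each road meeting at the junction."""
    # The intersection voxel's path semantic information lists all intersecting
    # roads, so a later path lookup can reach the junction from any of them.
    return [{"road": r["name"], "connects_to": r["links"]} for r in roads_at_junction]

sem = intersection_path_semantics([
    {"name": "Road A", "links": ["Road B"]},
    {"name": "Road B", "links": ["Road A"]},
])
```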
In one embodiment, the step "determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel" mentioned above specifically includes: determining the social semantic information of each target voxel according to the social attribute information of the target environment corresponding to each target voxel. For example, if the target environment corresponding to a target voxel is a designated bicycle parking area, the social attribute information it represents is the bicycle parking area, and the corresponding social semantic information is likewise the bicycle parking area. If the target environment corresponding to a target voxel is the reading room of a library, the social attribute information of the target environment is the reading room, and the corresponding social semantic information is likewise the reading room. In this way, by determining the social attribute information of the target environment corresponding to each target voxel, the social semantic information of each target voxel can be determined accordingly.
In one embodiment, the social attribute information includes region category information, and the social semantic information includes category semantic information. Specifically, the category semantic information of each target voxel may be determined according to the region category information of the target environment corresponding to each target voxel. The target environment may be divided into different regions according to different criteria. For example, classified by whether speaking is allowed, regions may be divided into a quiet area, a speaking-allowed area, and the like. Classified by pedestrian safety, regions may be divided into a walk-with-caution area, a no-walking area, a free-passage area, and the like. The classification criteria may be preset and are not limited here. The region category information is the category to which the target environment belongs, and the category semantic information may likewise be the category information of the target environment. It can be understood that when multiple classification criteria exist, the category semantic information of a single target voxel may contain category semantic information under multiple different criteria. In this way, based on the region category information of the target environment corresponding to each target voxel, the category semantic information of each target voxel can be determined accordingly.
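The multi-criteria classification described above can be sketched as one lookup per criterion, with a voxel holding the label from every criterion at once. The criteria names, region names, and labels below are illustrative assumptions.

```python
# Preset classification criteria: one label table per criterion.
CRITERIA = {
    "speaking": {
        "library": "quiet area",
        "plaza": "speaking-allowed area",
    },
    "pedestrian_safety": {
        "library": "free-passage area",
        "plaza": "walk-with-caution area",
    },
}

def category_semantics(region: str) -> dict:
    # One voxel's category semantic information may contain labels
    # under several different classification criteria simultaneously.
    return {criterion: labels[region] for criterion, labels in CRITERIA.items()}

library_semantics = category_semantics("library")
```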
In one embodiment, the step "determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel" mentioned above specifically includes: determining the object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel.
In this application, the object attribute information of a target environment may be regarded as information characterizing a target object present in the target environment. The target object may be any object in the target environment, and its object attribute information may be any information related to it. For example, if a car is present in the target environment, the information it characterizes may be the car's size, brand, model, and so on. The corresponding object semantic information may likewise be any information related to the object. In this way, by determining the object attribute information of the target environment corresponding to each target voxel, the object semantic information of each target voxel can be determined accordingly.
In one embodiment, the object attribute information includes object material information, and the object semantic information includes object material semantic information. Specifically, the object material semantic information of each target voxel may be determined according to the object material information of the target environment corresponding to each target voxel. The object material information includes the constituent material of the object, for example, the material of the object's surface. In one example, the target object is a large plush doll whose surface is plush; its object material information may be a plush surface, and the corresponding object material semantic information may likewise be a plush surface. As another example, if the target object is a sand pit whose surfaces are all sand, the object material information may be sand, and the corresponding object material semantic information is a sand surface. In this way, by determining the object material information of the target environment corresponding to each target voxel, the object material semantic information of each target voxel can be determined accordingly.
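The material-to-semantics mapping above can be sketched as a small lookup table. The table entries and the default label are illustrative assumptions, not from the patent.

```python
# Mapping from object material information to object material semantic information.
MATERIAL_SEMANTICS = {
    "plush": "plush surface",
    "sand": "sand surface",
    "ceramic tile": "tile surface",
}

def material_semantic(material_info: str, default: str = "unknown surface") -> str:
    # Fall back to a neutral label when the material is not in the table.
    return MATERIAL_SEMANTICS.get(material_info, default)
```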
Referring to fig. 3, fig. 3 is a schematic flowchart of another embodiment of an information management method of a target environment of the present application. After the step "determining semantic information of each target voxel according to attribute information of a target environment corresponding to each target voxel" mentioned in the above embodiment, the information management method of a target environment of the present application further includes the following steps S21 and S22.
Step S21: other semantic information related to the target environment is obtained.
The other semantic information may be semantic information related to the target environment as a whole. For example, if the target environment is an area of a tourist attraction, the other semantic information may be additional semantic information about the attraction. The other semantic information may be determined manually or matched automatically by the system; the manner of determination is not limited here.
Step S22: other semantic information is stored in a target voxel corresponding to the target environment.
Because the target voxels are derived from the target environment, the other semantic information may be stored in the target voxels corresponding to the target environment, so that the information stored by the target voxels includes the other semantic information associated with the target environment as a whole.
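Steps S21 and S22 can be sketched as writing one environment-level record into every voxel derived from that environment. The dictionary layout and keys are illustrative assumptions.

```python
def store_other_semantics(voxels, other_semantics):
    # Write environment-level ("other") semantic information into every
    # target voxel derived from the target environment, e.g.
    # {"scene": "tourist attraction"}.
    for v in voxels:
        v.setdefault("other", {}).update(other_semantics)
    return voxels

voxels = [{"id": 0}, {"id": 1}]
store_other_semantics(voxels, {"scene": "tourist attraction"})
```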
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an embodiment of an augmented reality display method according to the present application. In this embodiment, the augmented reality display method includes the following steps:
step S31: semantic information of a target voxel in a target environment is obtained.
In the present application, the device performing the steps of the augmented reality display method is, for example, a mobile phone, AR glasses, or the like.
In one embodiment, the semantic information of a target voxel in the target environment may be obtained by capturing an image of the surrounding environment and matching the environment in the image against the established target environment based on image recognition techniques. The semantic information of the target voxel may be the semantic information obtained by the embodiments of the information management method of a target environment described above.
Step S32: semantic information is displayed at corresponding locations of target voxels of the target environment.
Because the semantic information of the target voxel can reflect the attribute information of the target environment corresponding to the target voxel, the semantic information can be displayed at the corresponding position of the target voxel in the target environment.
Therefore, the semantic information is displayed at the corresponding position of the target voxel of the target environment, so that a user can conveniently and quickly know the target environment by looking up the semantic information.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating another embodiment of the augmented reality display method of the present application. In this embodiment, the augmented reality display method includes the following steps:
step S41: semantic information of a target voxel in an environment where the virtual object is located is obtained.
The virtual object may be a virtual object generated by electronic-device simulation, for example, a virtual character generated by simulation on a mobile phone that executes the augmented reality display method of the present application.
After the virtual object is generated, semantic information of a target voxel in the environment where the virtual object is located can be acquired, so that the environment where the virtual object is located can be determined through the acquired semantic information.
Step S42: based on the semantic information of the target voxel, the corresponding behavior of the virtual object is determined.
In one embodiment, based on the semantic information of the target voxel, a preset correspondence between the semantic information of the target voxel and reactions of the virtual object is used to determine and control the behavior of the virtual object, so that the virtual object reacts to the semantic information of the target voxel and appears more lifelike.
Step S43: displaying the corresponding behavior of the virtual object.
Since the corresponding behavior of the virtual object is a reaction to the target environment, the corresponding behavior can be perceived by the user by enabling the corresponding behavior of the virtual object to be displayed.
In one example, the virtual object is a virtual character, and the target environment includes a tiled floor and a gravel surface on which a sculpture stands; the semantic information of the target voxels in the target environment accordingly includes the tiled floor, the gravel surface, and the sculpture. After the virtual character obtains the semantic information of the target environment, when it walks onto the gravel surface, its behavior may be controlled, according to the semantic information of the gravel surface, to appear uncomfortable, as if the sand hurt its feet. When the virtual character walks up to the sculpture, it may be controlled to walk around the sculpture according to the sculpture's blocking semantic information. When the virtual character walks onto the tiled floor, it may be controlled to walk slowly because the tiled floor is slippery.
Thus, by controlling the behavior of the virtual object based on the semantic information of the target voxel, the behavior of the virtual object may be made more natural and realistic.
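The preset correspondence of step S42 can be sketched as a behavior lookup table keyed by voxel semantic labels. All labels and behaviors below are illustrative assumptions drawn from the example above, not from the patent.

```python
# Preset correspondence between voxel semantic information and reactions
# of the virtual character.
BEHAVIOR_TABLE = {
    "gravel surface": "walk gingerly",   # the sand hurts the character's feet
    "tile floor": "walk slowly",         # the tiled floor is slippery
    "sculpture": "walk around",          # blocking semantics: detour
}

def decide_behavior(voxel_semantic: str, table=BEHAVIOR_TABLE) -> str:
    # Fall back to a neutral behavior when no reaction is preset.
    return table.get(voxel_semantic, "walk normally")
```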
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of an information management apparatus in a target environment of the present application. The information management apparatus 60 includes an acquisition module 61, a division module 62, and a determination module 63. The obtaining module 61 is configured to obtain three-dimensional information of a target environment; the dividing module 62 is configured to perform voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to a target environment; the determining module 63 is configured to determine semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel.
The attribute information of the target environment comprises at least one of spatial attribute information, social attribute information and object attribute information; the semantic information of the target voxel corresponds to the attribute information of the target environment and comprises at least one of spatial semantic information, social semantic information and object semantic information. The determining module 63 being configured to determine the semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel includes: determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel; and/or determining the social semantic information of each target voxel according to the social attribute information of the target environment corresponding to each target voxel; and/or determining the object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel.
The spatial attribute information comprises real path information and object blocking information; the social attribute information comprises region category information, and the object attribute information comprises object material information; the space semantic information comprises path semantic information and blocking semantic information, the social semantic information comprises category semantic information, and the object semantic information comprises material semantic information; the determining module 63 is configured to determine the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel, and includes: determining path semantic information and blocking semantic information of each target voxel according to the real path information and the object blocking information of the target environment corresponding to each target voxel; the determining module 63 is configured to determine social semantic information of each target voxel according to social attribute information of a target environment corresponding to each target voxel, and includes: determining the category semantic information of each target voxel according to the region category information of the target environment corresponding to each target voxel; the determining module 63 is configured to determine the object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel, and includes: and determining the object material semantic information of each target voxel according to the object material information of the target environment corresponding to each target voxel.
The determining module 63 is configured to determine path semantic information of each target voxel according to the real path information of the target environment corresponding to each target voxel, and includes: determining intersection target voxels at different path intersections in a target environment, and determining path semantic information of the intersection target voxels based on real path information of different paths at the intersections.
The three-dimensional information comprises point cloud information or grid information; the dividing module 62 is configured to perform voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to a target environment, and includes: performing voxel division on the point cloud information or the grid information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment; determining whether each original voxel meets the point cloud partition requirement or the grid partition requirement; and taking the original voxels meeting the requirements as target voxels corresponding to the target environment.
The above-mentioned dividing module 62 being configured to use the original voxels that meet the requirement as target voxels corresponding to the target environment includes: determining, among the original voxels that meet the requirement, surrounded original voxels that are enclosed by other original voxels; and taking the original voxels other than the surrounded original voxels as the target voxels corresponding to the target environment.
The dividing module 62 being configured to perform voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment includes: performing voxel division on the point cloud information or the grid information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment; determining, among the original voxels, surrounded original voxels that are enclosed by other original voxels; and taking the original voxels other than the surrounded original voxels as the plurality of target voxels corresponding to the target environment.
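The two operations of the dividing module — voxelizing point cloud information under a point-count requirement, then excluding interior voxels enclosed on all sides — can be sketched as follows. The voxel size, the minimum-point threshold, and the six-face neighborhood are illustrative assumptions; the patent does not fix these parameters.

```python
import numpy as np

def voxelize(points, voxel_size=0.5, min_points=3):
    # Group points into axis-aligned voxels of edge length voxel_size and
    # keep the original voxels satisfying the point cloud partition
    # requirement (here: at least min_points points inside).
    keys, counts = np.unique(
        np.floor(points / voxel_size).astype(int), axis=0, return_counts=True)
    return {tuple(k) for k, c in zip(keys, counts) if c >= min_points}

def drop_enclosed(occupied):
    # A voxel whose six face neighbors are all occupied is an interior
    # (surrounded) voxel and is excluded from the target voxels.
    faces = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {v for v in occupied
            if any(tuple(a + d for a, d in zip(v, f)) not in occupied
                   for f in faces)}
```

For a solid 3x3x3 block of occupied voxels, `drop_enclosed` removes only the single center voxel, leaving the 26 surface voxels as target voxels.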
The information management apparatus 60 further includes a semantic information obtaining module. After the determining module 63 determines the semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel, the semantic information obtaining module is configured to obtain other semantic information related to the target environment and store the other semantic information in the target voxels corresponding to the target environment.
The application also discloses an augmented reality display device, which comprises a first acquisition module and a first display module. The first obtaining module is configured to obtain semantic information of a target voxel in a target environment, where the semantic information of the target voxel is obtained by the information management method embodiment of the target environment. The first display module is configured to display semantic information at a corresponding location of a target voxel of a target environment.
The application also discloses another augmented reality display device which comprises a second acquisition module, a determination module and a second display module. The second obtaining module is configured to obtain semantic information of a target voxel in an environment where the virtual object is located, where the semantic information of the target voxel is obtained by the information management method embodiment of the target environment. The determining module is used for determining corresponding behaviors of the virtual object based on the semantic information of the target voxel; the second display module is used for displaying the corresponding behavior of the virtual object.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an electronic device according to an embodiment of the present application. The electronic device 70 comprises a memory 71 and a processor 72 coupled to each other, and the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps of the information management method or the augmented reality display method embodiment of any one of the above target environments.
In one particular implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer or a server; the electronic device 70 may also include a mobile device such as a notebook computer or a tablet computer, which is not limited here.
Specifically, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above embodiments of the information management method of a target environment or of the augmented reality display method. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be jointly implemented by multiple integrated circuit chips.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 80 stores program instructions 81 executable by the processor, the program instructions 81 being for implementing the steps of any of the above-described information management method or augmented reality display method embodiments of the target environment.
According to the above scheme, target voxels are obtained by voxel division of the three-dimensional information of the target environment, and the semantic information of each target voxel is determined from the attribute information of the target environment corresponding to that target voxel, so that the semantic information of the target voxels can reflect the attribute information of the target environment, achieving the effect of integrating real-environment information into semantic information.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and detecting or identifying the relevant features, states and attributes of the target object by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched to a specific application, can be obtained. For example, the target object may relate to a face, limbs, gestures or actions associated with a human body, or to markers associated with objects, or to a sand table, display area or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may relate not only to interactive scenarios such as navigation, explanation, reconstruction and superimposed display of virtual effects associated with real scenes or articles, but also to special-effect processing related to people, such as interactive scenarios of makeup beautification, body beautification, special-effect display and virtual model display.
The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (12)

1. An information management method of a target environment, comprising:
acquiring three-dimensional information of the target environment;
carrying out voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment;
and determining semantic information of each target voxel according to the attribute information of the target environment corresponding to each target voxel.
2. The method of claim 1, wherein the attribute information of the target environment includes at least one of spatial attribute information, social attribute information, and object attribute information; the semantic information of the target voxel corresponds to the attribute information of the target environment and comprises at least one of space semantic information, social semantic information and object semantic information; determining semantic information of each target voxel according to attribute information of a target environment corresponding to each target voxel, including:
determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel; and/or,
determining social semantic information of each target voxel according to social attribute information of a target environment corresponding to each target voxel; and/or,
and determining object semantic information of each target voxel according to the object attribute information of the target environment corresponding to each target voxel.
3. The method of claim 2, wherein the spatial attribute information includes real path information and object blocking information; the social attribute information comprises region category information, and the object attribute information comprises object material information; the spatial semantic information comprises path semantic information and blocking semantic information, the social semantic information comprises category semantic information, and the object semantic information comprises material semantic information;
the determining the spatial semantic information of each target voxel according to the spatial attribute information of the target environment corresponding to each target voxel includes: determining path semantic information and blocking semantic information of each target voxel according to the real path information and object blocking information of the target environment corresponding to each target voxel;
determining social semantic information of each target voxel according to social attribute information of a target environment corresponding to each target voxel, including: determining the category semantic information of each target voxel according to the region category information of the target environment corresponding to each target voxel;
determining object semantic information of each target voxel according to object attribute information of a target environment corresponding to each target voxel, including: and determining the object material semantic information of each target voxel according to the object material information of the target environment corresponding to each target voxel.
4. The method of claim 3, wherein determining path semantic information for each of the target voxels from the real path information of the target environment corresponding to each of the target voxels comprises:
determining intersection target voxels at intersections of different paths in the target environment, and determining path semantic information of the intersection target voxels based on real path information of the different paths at the intersections.
5. The method according to any one of claims 1 to 4, wherein the three-dimensional information comprises point cloud information or mesh information; the voxel division of the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment includes:
carrying out voxel division on the point cloud information or the grid information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment;
determining whether each of the original voxels meets a point cloud partitioning requirement or a meshing requirement;
and taking the original voxel meeting the requirement as a target voxel corresponding to the target environment.
6. The method according to claim 5, wherein the taking the original voxels that meet the requirement as target voxels corresponding to the target environment comprises: determining, among the original voxels that meet the requirement, enclosed original voxels that are surrounded by other original voxels;
and taking the original voxels other than the enclosed original voxels as target voxels corresponding to the target environment.
7. The method according to claim 1, wherein the performing voxel division on the three-dimensional information to obtain a plurality of target voxels corresponding to the target environment comprises: performing voxel division on the point cloud information or the mesh information according to a preset division method to obtain a plurality of original voxels corresponding to the target environment;
determining, among the original voxels, enclosed original voxels that are surrounded by other original voxels;
and taking the original voxels other than the enclosed original voxels as the plurality of target voxels corresponding to the target environment.
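The filtering step in claims 6 and 7 can be sketched as removing interior voxels so only a "shell" of voxels remains. The 6-face-neighbour criterion for "surrounded" is an assumption; the claims do not define the neighbourhood.

```python
# Assumed 6-connected neighbourhood on the voxel grid.
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def remove_enclosed(occupied):
    """`occupied` is a set of (ix, iy, iz) voxel indices; returns the subset
    of voxels that have at least one free face (the visible shell)."""
    def is_enclosed(idx):
        x, y, z = idx
        return all((x + dx, y + dy, z + dz) in occupied
                   for dx, dy, dz in NEIGHBOURS)
    return {idx for idx in occupied if not is_enclosed(idx)}

# A 3x3x3 solid cube: only the centre voxel is enclosed, so 26 voxels remain.
cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
shell = remove_enclosed(cube)
print(len(shell))  # 26
```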
8. The method according to any one of claims 1-4, wherein after determining semantic information for each of the target voxels based on attribute information of a target environment corresponding to each of the target voxels, the method further comprises:
acquiring other semantic information related to the target environment;
storing the other semantic information in the target voxel corresponding to the target environment.
9. An augmented reality display method, comprising:
obtaining semantic information of a target voxel in a target environment, wherein the semantic information of the target voxel is obtained by the method of any one of claims 1-8;
and displaying the semantic information at a position corresponding to the target voxel in the target environment.
10. An augmented reality display method, comprising:
obtaining semantic information of a target voxel in an environment where a virtual object is located, wherein the semantic information of the target voxel is obtained by the method of any one of claims 1-8;
determining a corresponding behavior of the virtual object based on semantic information of the target voxel;
and displaying the corresponding behavior of the virtual object.
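Claim 10's step of determining a virtual object's behavior from voxel semantics might look like the following mapping. The semantic keys (`blocking`, `region_category`, `is_intersection`) and the behaviors are illustrative assumptions; the patent leaves both open.

```python
# Illustrative mapping from a voxel's semantic information to a virtual
# object's behaviour; keys and behaviours are assumed, not from the patent.
def behaviour_for(semantics):
    if semantics.get("blocking"):            # obstacle here: walk around it
        return "detour"
    if semantics.get("region_category") == "water":
        return "swim"
    if semantics.get("is_intersection"):     # several paths meet here
        return "choose_path"
    return "walk"                            # default free-space behaviour

print(behaviour_for({"blocking": True}))  # detour
```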
11. An electronic device, comprising a processor and a memory coupled to each other, wherein
the processor is configured to execute a computer program stored in the memory to perform the method of any one of claims 1 to 8, or the method of claim 10.
12. A computer-readable storage medium storing a computer program executable by a processor, wherein the computer program is configured to carry out the method of any one of claims 1 to 8, or the method of claim 9, or the method of claim 10.
CN202111056910.8A 2021-09-09 2021-09-09 Information management method of target environment and display method of related augmented reality Pending CN113838209A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111056910.8A CN113838209A (en) 2021-09-09 2021-09-09 Information management method of target environment and display method of related augmented reality
PCT/CN2022/074966 WO2023035548A1 (en) 2021-09-09 2022-01-29 Information management method for target environment and related augmented reality display method, electronic device, storage medium, computer program, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111056910.8A CN113838209A (en) 2021-09-09 2021-09-09 Information management method of target environment and display method of related augmented reality

Publications (1)

Publication Number Publication Date
CN113838209A true CN113838209A (en) 2021-12-24

Family

ID=78958836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111056910.8A Pending CN113838209A (en) 2021-09-09 2021-09-09 Information management method of target environment and display method of related augmented reality

Country Status (2)

Country Link
CN (1) CN113838209A (en)
WO (1) WO2023035548A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023035548A1 (en) * 2021-09-09 2023-03-16 上海商汤智能科技有限公司 Information management method for target environment and related augmented reality display method, electronic device, storage medium, computer program, and computer program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242963A (en) * 2018-09-29 2019-01-18 深圳阜时科技有限公司 A kind of three-dimensional scenic simulator and equipment
CN112581629A (en) * 2020-12-09 2021-03-30 中国科学院深圳先进技术研究院 Augmented reality display method and device, electronic equipment and storage medium
CN113018847A (en) * 2021-03-31 2021-06-25 广州虎牙科技有限公司 Voxel building generation method and device, electronic equipment and storage medium
CN113140032A (en) * 2020-01-17 2021-07-20 苹果公司 Floor plan generation based on room scanning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017117675A1 (en) * 2016-01-08 2017-07-13 Sulon Technologies Inc. Head mounted device for augmented reality
CN110827295A (en) * 2019-10-31 2020-02-21 北京航空航天大学青岛研究院 Three-dimensional semantic segmentation method based on coupling of voxel model and color information
CN113139992A (en) * 2020-01-17 2021-07-20 苹果公司 Multi-resolution voxel gridding
CN113160411A (en) * 2021-04-23 2021-07-23 杭州电子科技大学 Indoor three-dimensional reconstruction method based on RGB-D sensor
CN113838209A (en) * 2021-09-09 2021-12-24 深圳市慧鲤科技有限公司 Information management method of target environment and display method of related augmented reality


Also Published As

Publication number Publication date
WO2023035548A1 (en) 2023-03-16

Similar Documents

Publication Publication Date Title
US11315308B2 (en) Method for representing virtual information in a real environment
US11580704B2 (en) Blending virtual environments with situated physical reality
EP2936441B1 (en) Method for representing virtual information in a real environment
CN105637564B (en) Generate the Augmented Reality content of unknown object
Phan et al. Interior design in augmented reality environment
US11256958B1 (en) Training with simulated images
EP2973433A2 (en) Mapping augmented reality experience to various environments
CN105122304A (en) Real-time design of living spaces with augmented reality
Montero et al. Designing and implementing interactive and realistic augmented reality experiences
Kido et al. Diminished reality system with real-time object detection using deep learning for onsite landscape simulation during redevelopment
Bouvier et al. Crowd simulation in immersive space management
KR101507776B1 (en) methof for rendering outline in three dimesion map
Gutierrez et al. AI and virtual crowds: Populating the Colosseum
CN113838209A (en) Information management method of target environment and display method of related augmented reality
EP3594906B1 (en) Method and device for providing augmented reality, and computer program
Liu et al. Game engine-based point cloud visualization and perception for situation awareness of crisis indoor environments
Sivalaya et al. Implementation of augmented reality application using unity engine deprived of prefab
Santosa et al. 3D Spatial Development of Historic Urban Landscape to Promote a Historical Spatial Data System
Bågling Navigating to real life objects in indoor environments using an Augmented Reality headset
US20220335675A1 (en) Physical phenomena expressing method for expressing the physical phenomeana in mixed reality, and mixed reality apparatus that performs the method
Pike et al. Implementing Material Changes in Augmented Environments
Sherstyuk et al. Collision-free navigation with extended terrain maps
Morini et al. Visibility techniques applied to robotics
Lee et al. Reproducing works of Calder
Byszewski Real-Time Visualization of the Human Evacuation Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40058661

Country of ref document: HK

RJ01 Rejection of invention patent application after publication

Application publication date: 20211224