WO2022089061A1 - Method, apparatus, electronic device and storage medium for presenting object annotation information - Google Patents

Method, apparatus, electronic device and storage medium for presenting object annotation information

Info

Publication number
WO2022089061A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
visible
elevation
labeling
marked
Prior art date
Application number
PCT/CN2021/118121
Other languages
English (en)
French (fr)
Inventor
李威阳
杨浩
康泽慧
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP21884791.1A (published as EP4227907A4)
Publication of WO2022089061A1
Priority to US18/307,386 (published as US20230260218A1)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2219/004 Annotating, labelling

Definitions

  • the present application relates to the field of computer vision, and in particular, to a method, device, electronic device and storage medium for presenting object labeling information.
  • In an AR map, users can obtain information about surrounding buildings based on the location of the current scene, which helps them choose a direction.
  • In the related art, developers usually pre-set information labels corresponding to a building on the building's model in the AR map.
  • The static information on the building model is then marked and displayed on the AR map in the user terminal; the marking can be three-dimensional, and the user can clearly see the marked content corresponding to the building from a certain perspective.
  • The embodiments of the present application provide a method, apparatus, electronic device, and storage medium for presenting labeling information of objects, which can select the labeling facade according to the projections of the visible facades on the display interface of the specified scene, improving the display effect of the labeling information. The technical solution is as follows:
  • In one aspect, a method for presenting object annotation information is provided, comprising:
  • acquiring a target object in a specified scene, where the specified scene is a scene presented at a target position; and
  • presenting the labeling information of the target object on the labeling facade of the target object presented on the display interface, where the labeling facade is determined from at least two visible facades of the target object according to the projection areas that the visible facades respectively present on the display interface, and the visible facades are the outer facades of the target object that are visible from the target position.
  • The visible facade of the target object is a facade, among the outer facades of the target object, that is visible when the target object is presented in the specified scene at the target position.
  • the projection area refers to the area where the visible facade corresponding to the target object is presented on the display interface.
  • In the solution shown in this embodiment of the present application, the target object can be presented in the specified scene corresponding to the target position, the projection areas of the visible facades of the target object on the display interface can be obtained, one of the visible facades can be determined as the labeling facade according to the projection areas, and the labeling information of the target object can be presented on that labeling facade. That is to say, one of the visible facades corresponding to the target object is selected as the labeling facade according to the target position, so the orientation relationship between the target position and the facades of the target object is taken into account when presenting the labeling information, which improves the display effect of the annotation information.
  • the specified scene is an augmented reality scene or a virtual reality scene presented at the target location.
  • In the augmented reality scene, the target location may be the location of the augmented reality device, and the presented scene is the scene captured by the image acquisition component of the augmented reality device at its current location; in the virtual reality scene, the target position may be the position, within the three-dimensional virtual scene computed and modeled by the virtual reality device in the background, of the virtual character corresponding to the virtual reality device, and the presented scene is the three-dimensional virtual scene rendered by the virtual reality device from the perspective and position of that virtual character.
  • In a possible implementation, the labeling facade is the visible facade, among the at least two visible facades, whose projection area presented on the display interface is the largest.
  • Using the facade with the largest projection area on the display interface as the labeling facade means that, when the labeling information corresponding to the labeling facade is presented on the display interface, it can be displayed at the largest size, which improves the display effect of the annotation information.
  • the method further includes:
  • acquiring the to-be-projected areas of the at least two visible facades according to the visible areas of the at least two visible facades, where the visible area is the area of the corresponding visible facade that is visible to the target position in the specified scene; and projecting the to-be-projected areas onto the display interface of the specified scene to obtain the projection areas that the at least two visible facades respectively present on the display interface.
  • The visible area refers to the part of the visible facade of the target object that corresponds to the projection area presented on the display interface; that is to say, the visible area on the visible facade is an area on the visible facade of the target object in the three-dimensional scene presented in the virtual reality scene, or in the three-dimensional scene computed by the computer in the background of the augmented reality scene.
  • The visible area and the to-be-projected area are both areas on the visible facade, and the to-be-projected area can be the entire visible area of the facade or a part of it; that is to say, the visible area corresponding to a visible facade can be projected onto the display interface in its entirety or in part.
  • Since the to-be-projected area may have any shape, the projection area presented on the display interface may likewise have any shape; the labeling facade is then selected according to the projection areas, so an appropriate labeling facade can be selected for labeling information of any shape.
  • In a possible implementation, acquiring the to-be-projected areas of the at least two visible facades according to the visible areas of the at least two visible facades includes: acquiring the respective entire visible areas of the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  • In a possible implementation, presenting the labeling information of the target object on the labeling facade of the target object presented on the display interface includes: determining a labeling area from the visible area of the labeling facade, where the labeling area is the area with the largest area among the areas of the first shape included in the visible area of the labeling facade; and presenting the labeling information of the target object on the labeling area of the labeling facade presented on the display interface.
  • That is, the labeling area is the area with the largest area of the first shape included in the visible area of the labeling facade, so the labeling area is all or part of the visible area corresponding to the labeling facade.
  • In a possible implementation, determining the labeling area from the visible area of the labeling facade includes: acquiring occlusion information of the labeling facade, where the occlusion information is used to indicate the occluded vertices and occluded edges of the labeling facade; and determining the labeling area in the visible area of the labeling facade according to the occlusion information.
  • The occlusion information describes the part of the labeling facade that is not presented when the facade is projected onto the display interface, that is, the information corresponding to the area of the labeling facade outside the visible area; the occluded vertices are the vertices of the labeling facade that lie outside the visible area, and the occluded edges are the edges of the labeling facade that lie entirely outside the visible area.
  • In a possible implementation, the first shape is a rectangle, and determining the labeling area in the visible area of the labeling facade according to the occlusion information includes: in response to the occlusion information indicating that one vertex of the labeling facade is occluded, using the diagonal vertex of the occluded vertex as a first target point; determining a first end point on the non-adjacent edge corresponding to the first target point, such that the rectangle having the line segment between the first end point and the first target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade; and determining the area where that rectangle is located as the labeling area.
  • In this solution, the labeling area for presenting the labeling information is a rectangle. When the labeling facade determined according to the projection areas is presented on the display interface with one occluded vertex, the first end point of the rectangle corresponding to the labeling area is determined on a non-adjacent edge of the diagonal vertex of the occluded vertex, where a non-adjacent edge is an edge of the shape corresponding to the visible area of the labeling facade that is not directly connected to the first target point.
  • In a possible implementation, the first shape is a rectangle, and determining the labeling area in the visible area of the labeling facade according to the occlusion information includes: in response to the occlusion information indicating that there are two occluded vertices on the labeling facade and the edge between the two occluded vertices is completely occluded, obtaining, among the unoccluded vertices of the labeling facade, the vertex whose adjacent edges have the largest sum of lengths of unoccluded parts as a second target point; determining a second end point on the non-adjacent edge corresponding to the second target point, where the second end point is in the visible area of the labeling facade and the rectangle having the line segment between the second end point and the second target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade; and determining the area where that rectangle is located as the labeling area.
  • In a possible implementation, the first shape is a rectangle, and determining the labeling area in the visible area of the labeling facade according to the occlusion information includes: in response to the occlusion information indicating that there are two occluded vertices on the labeling facade and no edge is completely occluded, acquiring a target point set, where the target point set includes the unoccluded vertices of the labeling facade and the dividing points on the adjacent edges of the two occluded vertices, and a dividing point is used to distinguish the occluded area of the visible facade from the unoccluded area; determining a third end point in the visible area of the labeling facade, such that the rectangle having the line segment between the third end point and a third target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade, where the third target point is one of the points in the target point set; and determining the area where that rectangle is located as the labeling area.
  • In a possible implementation, the first shape is a rectangle, and determining the labeling area in the visible area of the labeling facade according to the occlusion information includes: in response to the occlusion information indicating that there are three occluded vertices on the labeling facade, acquiring the unoccluded vertex of the labeling facade as a fourth target point, and determining the labeling area according to the fourth target point.
  • In a possible implementation, presenting the labeling information of the target object on the labeling facade of the target object presented on the display interface includes: presenting a three-dimensional model of the labeling information on a parallel plane of the labeling area presented on the display interface, where the parallel plane is a plane located in front of the labeling facade and parallel to the labeling facade.
  • In this solution, the labeling information may be presented on the display interface as a three-dimensional model, and the size of the three-dimensional model of the labeling information may be determined by the size of the labeling area. Since the 3D model has depth information, the plane parallel to the labeling facade and located in front of it is obtained first, the labeling information is placed on that parallel plane, and the three-dimensional model of the labeling information is then presented on the display interface. The labeling information presented in this way also has three-dimensional characteristics, which improves its display effect.
  • In a possible implementation, acquiring the to-be-projected areas of the at least two visible facades according to the visible areas of the at least two visible facades includes: determining candidate labeling areas from the visible areas of the at least two visible facades respectively, where a candidate labeling area is the area with the largest area among the areas of the second shape included in the visible area of the corresponding facade; and acquiring the candidate labeling areas corresponding to the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  • Presenting the labeling information of the target object on the labeling facade of the target object presented on the display interface then includes: presenting the labeling information of the target object on the candidate labeling area, corresponding to the labeling facade, that is displayed on the display interface.
  • In this solution, the area with the largest area of the second shape contained in the visible area is obtained as the to-be-projected area; that is, part of the visible area is used as the to-be-projected area and projected onto the display interface, and the candidate labeling area whose projection area on the display interface is the largest is used as the area for presenting the labeling information. In other words, the maximum area of the specified shape of each visible facade is first projected onto the display interface, and the candidate labeling area of the visible facade whose projection area is the largest is used to present the labeling information. Because the maximum area of the specified shape that each visible facade projects onto the display interface is considered, the information label of the specified shape can be displayed at the largest possible size on the display interface, which improves the display effect of the labeling information.
  • In a possible implementation, presenting the labeling information of the target object on the labeling facade of the target object presented on the display interface includes: presenting, according to depth information, the labeling information of the target object on the labeling facade presented on the display interface.
  • an apparatus for presenting object annotation information comprising:
  • a target object acquisition unit used to acquire a target object in a specified scene, where the specified scene is a scene presented at a target position;
  • an annotation information presentation unit, configured to present the annotation information of the target object on the labeling facade of the target object presented on the display interface, where the labeling facade is determined from at least two visible facades of the target object according to the projection areas that the visible facades respectively present on the display interface, and the visible facades are the outer facades of the target object that are visible from the target position.
  • In yet another aspect, an electronic device is provided, including a processor and a memory, where the memory stores computer instructions, and the computer instructions are loaded and executed by the processor to implement the above method for presenting object annotation information.
  • In yet another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the above method for presenting object annotation information.
  • a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the terminal reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal executes the above method for presenting object annotation information.
  • In the solutions provided by this application, the visible facades among the outer facades of the target object are obtained, the labeling facade is determined from the visible facades according to their projections on the display interface, and the labeling information of the target object is presented in the area corresponding to the visible area of the labeling facade.
  • FIG. 1 is a schematic structural diagram of a system for presenting object annotation information according to an exemplary embodiment;
  • FIG. 2 is a schematic flowchart of a method for presenting object annotation information according to an exemplary embodiment;
  • FIG. 3 is a method flowchart of a method for presenting object annotation information according to an exemplary embodiment;
  • FIG. 4 shows a facade vertex occlusion classification diagram involved in the embodiment shown in FIG. 3;
  • FIG. 5 shows a schematic diagram of obtaining the visible area of a facade with a single occluded vertex, involved in the embodiment shown in FIG. 3;
  • FIG. 6 shows a schematic diagram of a method for calculating a labeling range involved in the embodiment shown in FIG. 3;
  • FIG. 7 shows a schematic diagram of a method for calculating a labeling range involved in the embodiment shown in FIG. 3;
  • FIG. 8 shows a schematic diagram of a method for calculating a labeling range involved in the embodiment shown in FIG. 3;
  • FIG. 9 shows a data resource flowchart involved in the embodiment shown in FIG. 3;
  • FIG. 10 shows a structural diagram of an annotation presentation method involved in the embodiment shown in FIG. 3;
  • FIG. 11 shows a flowchart for calculating the visible facades of a building involved in the embodiment shown in FIG. 3;
  • FIG. 12 shows a flowchart of a method corresponding to the calculation of a visible area involved in the embodiment shown in FIG. 3;
  • FIG. 13 shows a calculation flowchart corresponding to a text labeling range involved in the embodiment shown in FIG. 3;
  • FIG. 14 shows a flowchart of real-time pose simulation of the camera involved in the embodiment shown in FIG. 3;
  • FIG. 15 shows a schematic diagram of a comparison between the embodiment shown in FIG. 3 and an AR map technology;
  • FIG. 16 is a method flowchart of a method for presenting object annotation information according to an exemplary embodiment;
  • FIG. 17 shows a schematic flowchart of a method for presenting object annotation information;
  • FIG. 18 is a structural block diagram of an apparatus for presenting object annotation information according to an exemplary embodiment;
  • FIG. 19 is a schematic block diagram of an electronic device provided by an exemplary embodiment;
  • FIG. 20 is a schematic structural diagram of an electronic device provided by an exemplary embodiment.
  • Computer vision is a science that studies how to make machines "see"; it refers to using cameras and computers instead of human eyes to identify, track, and measure targets, and to further perform graphics processing so that the processed images are more suitable for human eyes to observe.
  • Vision is an integral part of various intelligent/autonomous systems in various application fields such as manufacturing, inspection, document analysis, medical diagnosis, and military.
  • the challenge of computer vision is to develop human-level visual abilities for computers and robots.
  • Machine vision requires image signals, texture and color modeling, geometric processing and reasoning, and object modeling.
  • Augmented reality technology is a technology that ingeniously integrates virtual information with the real world. After 3D model, music, video and other virtual information is simulated and applied to the real world, the two kinds of information complement each other, thereby realizing the "enhancement" of the real world.
  • Augmented reality (AR) technology is a relatively new technology that promotes the integration of real-world information and virtual-world information content: virtual information content is simulated and effectively applied in the real world, where it can be perceived by human senses, so as to achieve a sensory experience beyond reality.
  • Virtual reality technology is a practical technology that emerged in the 20th century.
  • Virtual reality technology includes computer, electronic information, and simulation technology.
  • Virtual Reality (VR) is a computer simulation system that can create and experience virtual worlds. It uses computers to generate a simulated environment and immerse users in the environment.
  • Virtual reality technology takes data from real life, turns it into electronic signals through computer technology, and combines them with various output devices to transform them into phenomena that people can perceive. These phenomena can be real objects in reality, or substances that cannot be seen by the naked eye, expressed through three-dimensional models.
  • FIG. 1 is a schematic structural diagram of a system for presenting object annotation information according to an exemplary embodiment.
  • the system includes: a server 120 and a user terminal 140 .
  • The server 120 may be a single server, several servers, a virtualization platform, or a cloud computing service center, which is not limited in this application.
  • the user terminal 140 may be a terminal device with a display function, or a terminal device with a VR or AR function.
  • For example, the user terminal may be a wearable device (such as VR glasses, AR glasses, or smart glasses), a mobile phone, a tablet computer, or an e-book reader.
  • the number of user terminals 140 is not limited.
  • the user terminal 140 may have a client installed therein, and the client may be a three-dimensional map client, an instant messaging client, a browser client, or the like. This embodiment of the present application does not limit the software type of the client.
  • the user terminal 140 and the server 120 are connected through a communication network.
  • the communication network is a wired network or a wireless network.
  • In a possible implementation, the server 120 may send the 3D modeling data of the target object to the user terminal 140, and the user terminal 140 can perform 3D modeling of the target object in the VR scene according to the 3D modeling data, or perform 3D modeling of the target object in the computer background corresponding to the AR scene according to the 3D modeling data.
  • the above-mentioned wireless network or wired network uses standard communication technologies and/or protocols.
  • The network is usually the Internet, but it can be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired, or wireless network, a private network, a virtual private network, or any combination thereof.
  • data exchanged over a network is represented using technologies and/or formats including Hyper Text Mark-up Language (HTML), Extensible Markup Language (XML), and the like.
  • In addition, conventional encryption technologies such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec) can be used to encrypt all or some of the links.
  • custom and/or dedicated data communication techniques may also be used in place of or in addition to the data communication techniques described above.
  • FIG. 2 is a schematic flowchart of a method for presenting object annotation information according to an exemplary embodiment.
  • the method may be performed by an electronic device, where the electronic device may be a terminal or a server, or the electronic device may include a terminal and a server, where the terminal may be the user in the embodiment shown in FIG. 1 above
  • the terminal 140 and the server may be the server 120 in the embodiment shown in FIG. 1 above.
  • the process of the method for presenting object annotation information may include the following steps:
  • Step 21 Acquire a target object in a specified scene, where the specified scene is a scene presented at the target position.
  • the specified scene is an augmented reality scene or a virtual reality scene presented at the target location.
  • the augmented reality scene or the virtual reality scene may be a scene presented on a display device corresponding to the user terminal.
  • Step 22 Present the labeling information of the target object on the labeling facade of the target object presented on the display interface, where the labeling facade is determined from at least two visible facades of the target object according to the projection areas that the visible facades respectively present on the display interface, and the visible facades are the outer facades of the target object that are visible from the target position.
  • the above-mentioned scene presented at the target location may refer to a scene presented corresponding to the above-mentioned target location. That is to say, the scene picture presented on the display interface is a picture obtained by observing the above-mentioned specified scene from the above-mentioned target position as a viewpoint.
  • the labeling elevation is an elevation that is used to display labeling information of the target object among the visible elevations corresponding to the target object.
  • the annotation information may be text annotation information, image annotation information or video annotation information, or any form of annotation information that can be used to display information.
  • the annotation information may be two-dimensional annotation information or three-dimensional annotation information. This application does not limit this.
  • To sum up, in the solution shown in this embodiment of the present application, the visible facades among the outer facades of the target object are obtained; the labeling facade is determined from the visible facades of the target object according to the projections of the visible facades on the display interface; and the labeling information of the target object is presented in the area corresponding to the visible area of the labeling facade.
  • FIG. 3 is a method flowchart of a method for presenting object annotation information according to an exemplary embodiment.
  • the method may be performed by an electronic device, where the electronic device may be a terminal or a server, or the electronic device may include a terminal and a server, where the terminal may be the user in the embodiment shown in FIG. 1 above
  • the terminal 140 and the server may be the server 120 in the embodiment shown in FIG. 1 above.
  • the process of the object labeling information presentation method may include the following steps:
  • Step 301 acquiring a target object in a specified scene.
  • the specified scene is the scene presented at the target location.
  • the specified scene is an augmented reality scene or a virtual reality scene presented at the target location.
  • In a possible implementation, the target position may be the position where the AR device is located. The position may be an absolute position; for example, when the AR device is used with an AR map, the AR device can obtain its positioning information at that time to determine the absolute position corresponding to the AR device. The position may also be a relative position; for example, when the AR device is used indoors, the relative position information between the AR device and a certain point in the indoor environment can be obtained to determine the relative position of the AR device with respect to that point.
  • In a possible implementation, the target position may also be the position of the virtual character corresponding to the VR device, or of a virtual camera, in the virtual three-dimensional scene constructed by the VR device.
  • Each point in the virtual three-dimensional scene can have its own coordinates, and according to the coordinate data, information corresponding to the target position can be obtained.
  • the visible façade of the target object is the façade visible in the outer façade of the target object when the target object is presented at the target position.
  • the visible area refers to the area of the visible portion on the visible elevation of the target object that is presented on the visible elevation corresponding to the projection area of the display interface.
  • In a possible implementation, a three-dimensional model corresponding to the target object is constructed according to the three-dimensional model data of the target object. When the specified scene is a VR scene, the 3D model corresponding to the object is constructed in the VR scene according to the 3D model data corresponding to the target object; when the specified scene is an AR scene, the 3D model corresponding to the object is constructed in the computer background corresponding to the AR scene according to the 3D model data corresponding to the target object. The three-dimensional data corresponding to the target object may include size information, coordinate information, and the like of the target object.
  • the visible areas of at least two visible facades of the target object in the specified scene are acquired.
  • In a possible implementation, a minimum bounding box corresponding to the target object is obtained, where the minimum bounding box is the smallest circumscribing cuboid of the target object, and the minimum bounding box corresponding to the target object is then used as the approximate model of the target object. The calculations concerning the target object can be performed on this approximate model, which effectively reduces the amount of computation. The outer surfaces of the minimum bounding box of the target object are obtained as the outer facades corresponding to the target object.
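  • Purely as an illustration of this approximation step (the data layout and function name below are assumptions of this sketch, not part of the claimed method), obtaining an axis-aligned circumscribing cuboid from the vertex coordinates of the target object's 3D model might look like:

```python
from typing import Iterable, Tuple

Point3 = Tuple[float, float, float]

def min_bounding_box(vertices: Iterable[Point3]) -> Tuple[Point3, Point3]:
    """Return (min_corner, max_corner) of the axis-aligned cuboid that
    encloses every vertex of the target object's 3D model; this cuboid is
    then used as the approximate model of the object."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Example: a toy building reduced to its bounding cuboid.
lo, hi = min_bounding_box([(0, 0, 0), (4, 0, 0), (4, 6, 0), (0, 6, 0),
                           (1, 1, 10), (3, 1, 10), (3, 5, 10), (1, 5, 10)])
print(lo, hi)  # (0, 0, 0) (4, 6, 10)
```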
  • In a possible implementation, the normal vector of each outer facade of the target object in the specified scene is obtained, and each facade whose normal vector has a positive inner product with the vector corresponding to the display interface of the specified scene is obtained as a visible facade.
  • the vector corresponding to the display interface of the specified scene may be a normal vector corresponding to the display interface of the specified scene.
  • The display interface of the specified scene may be the user's observation interface simulated by a virtual camera in the specified scene; that is to say, the display interface of the specified scene corresponds to the scene observed by the user through the terminal. Taking the specified scene being an AR map as an example, the user terminal obtains real-time location information and, through the AR map client, obtains the 3D building model information of the surrounding environment corresponding to the target location; the display interface of the specified scene is then the interface on the terminal that displays the buildings in the direction the user terminal faces, and the vector corresponding to the display interface of the specified scene is the vector of the orientation of the user terminal.
  • That is, the terminal can display the building models corresponding to its current orientation, and the inner product of the normal vector of each facade of a building model with the direction vector corresponding to the display interface of the specified scene determines the orientation relationship between that facade and the user, so as to determine whether the facade is visible from the user's perspective.
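  • A minimal sketch of this visibility test, assuming each outer facade is described by its outward normal and the display interface by a vector pointing from the object toward the viewer (the names and the exact sign convention are illustrative assumptions):

```python
import numpy as np

def visible_facades(facade_normals, view_vector):
    """Keep the facades whose outward normal has a positive inner product
    with the vector corresponding to the display interface of the scene
    (here taken as pointing from the object toward the viewer)."""
    view = np.asarray(view_vector, dtype=float)
    return [i for i, n in enumerate(facade_normals)
            if float(np.dot(np.asarray(n, dtype=float), view)) > 0.0]

# A box-shaped building seen from the +x side: only the +x facade survives.
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(visible_facades(normals, (1, 0, 0)))  # -> [0]
```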
  • In a possible implementation, the visible area of the visible facade may be determined according to the connecting lines between the target position and points on the visible facade.
  • Based on the three-dimensional model corresponding to the target object, several points on the visible facade can be obtained and respectively connected to the target position. When the connecting line between a point and the target position does not pass through the 3D model of any other building, the position corresponding to the point is determined to belong to the visible area; when the connecting line between a point and the target position passes through the 3D model of another building, the connecting line is blocked by that building, so the point is determined to belong to the area that is invisible from the target position. From these visible and invisible points, the visible area of the visible facade can be obtained.
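  • A rough sketch of such a point-visibility test, assuming the surrounding buildings are approximated by their minimum bounding boxes as described above; the segment/box slab test used here is a standard geometric routine, not text taken from the patent:

```python
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max, eps=1e-9):
    """Slab test: does the segment p0 -> p1 pass through the axis-aligned box?"""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        if abs(d[axis]) < eps:                    # segment parallel to this slab
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
        else:
            t0 = (box_min[axis] - p0[axis]) / d[axis]
            t1 = (box_max[axis] - p0[axis]) / d[axis]
            t0, t1 = min(t0, t1), max(t0, t1)
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def point_visible(target_pos, facade_point, occluder_boxes):
    """A sample point on a facade is visible from the target position when the
    connecting line is not blocked by any other building's bounding box
    (occluder_boxes must exclude the target object's own box)."""
    return not any(segment_hits_aabb(target_pos, facade_point, lo, hi)
                   for lo, hi in occluder_boxes)
```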
  • In another possible implementation, the visible area of the visible facade may be determined according to the dividing points determined between the target position and the visible facade, where a dividing point is a point on an edge of the visible facade that separates the visible area from the invisible area.
  • Step 302 Acquire the to-be-projected areas of the at least two visible elevations according to the visible areas of the at least two visible elevations.
  • the visible area is an area of the corresponding visible facade visible to the target position in the specified scene.
  • the respective entire areas of the visible areas of the at least two visible facades are acquired as the to-be-projected areas of the at least two visible facades.
  • Step 303 Project the to-be-projected areas of the at least two visible facades onto the display interface of the specified scene, and obtain the projection areas that the at least two visible facades respectively present on the display interface of the specified scene.
  • In a possible implementation, the normal vector of the display interface of the specified scene is obtained, and the projection areas of the visible areas of the at least two visible facades on the display interface of the specified scene are obtained according to the angle between the normal vector of each visible facade and the normal vector of the display interface.
  • That is, the visible areas corresponding to the at least two visible facades are projected in the direction corresponding to the display interface of the specified scene. When a visible facade is not directly facing the display interface, that is, when there is an angle between the visible facade and the display interface, the projection of the facade's visible area in the direction of the display interface directly observed by the user can be determined according to the angle between the normal vector of the visible facade and the normal vector of the display interface.
  • Step 304 Determine a labeling facade from the at least two visible facades according to the projection areas that the visible areas of the at least two visible facades respectively present on the display interface.
  • In a possible implementation, the labeling facade is the visible facade, among the at least two visible facades, whose projection area presented on the display interface is the largest; that is, the facade whose projection area is the largest can be set as the labeling facade.
  • In a possible implementation, the visible-area sizes of the at least two visible facades are obtained, and the sizes of the projection areas that the visible areas of the at least two visible facades present on the display interface of the specified scene are obtained according to those visible-area sizes and the orientation relationship between the at least two visible facades and the display interface of the specified scene.
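  • As a simplified sketch of this selection step (scaling a planar region's area by the cosine of the angle between the facade normal and the display-interface normal is one common approximation of its projected size; the data layout is an assumption of this sketch):

```python
import numpy as np

def projected_area(visible_area, facade_normal, interface_normal):
    """Approximate the size of the projection of a facade's visible area on
    the display interface by scaling it with the cosine of the angle between
    the facade normal and the display-interface normal."""
    n1 = np.asarray(facade_normal, float)
    n2 = np.asarray(interface_normal, float)
    cos_theta = abs(float(np.dot(n1, n2))) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return visible_area * cos_theta

def pick_labeling_facade(facades, interface_normal):
    """facades: iterable of (facade_id, visible_area, facade_normal).
    Return the id of the facade whose projection area is the largest."""
    return max(facades,
               key=lambda f: projected_area(f[1], f[2], interface_normal))[0]

# A large facade seen at a grazing angle vs. a smaller facade facing the
# viewer; the front-facing one wins here.
facades = [("north", 800.0, (0, 1, 0)), ("east", 500.0, (1, 0, 0))]
print(pick_labeling_facade(facades, (0.95, 0.3, 0.0)))  # -> "east"
```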
  • In a possible implementation, the occlusion information of the at least two visible facades is obtained, where the occlusion information is used to indicate the occluded vertices and the occluded edges of the at least two visible facades, and the respective visible-area sizes of the at least two visible facades are obtained according to the occlusion information. That is, an approximate visible area of a visible facade can be obtained according to the number of occluded vertices and the number of occluded edges of the visible facade, and the size of the visible area of the visible facade is obtained from it.
  • In a possible implementation, when the visible facade is not occluded, the area of the visible facade is obtained as the size of the visible area of the visible facade.
  • In a possible implementation, when a vertex of the visible facade is occluded, a dividing point on each adjacent edge of the occluded vertex is obtained, where the dividing point is used to distinguish the occluded area of the visible facade from the unoccluded area, and the size of the visible area of the visible facade is obtained according to the dividing points and the unoccluded vertices of the visible facade.
  • FIG. 4 shows a classification diagram of facade vertex occlusion involved in an embodiment of the present application. As shown in FIG. 4, there are at least seven visible-facade cases when vertices of the facade are occluded.
  • FIG. 5 shows a schematic diagram of obtaining a visible area of a facade corresponding to a single vertex being occluded according to an embodiment of the present application.
  • the electronic device acquires a façade viewable area 502 corresponding to one of the façades of the object according to the occlusion situation 501 of the façade of the object. It can be seen from the façade visible area 502 that after point c in the façade is occluded, points a, b and d are visible.
  • The dividing points c1 and c2 on the adjacent edges of the occluded vertex are taken as visible end points and, together with the unoccluded visible vertices a, b, and d, form a new polygon; this polygon is used as the visible area corresponding to the facade, and the area of the polygon is taken as the size of the facade's visible area.
  • In a possible implementation, when two vertices of the facade are occluded, the dividing points corresponding to the two occluded vertices are obtained on their adjacent edges in the same way as shown in FIG. 5, that is, they are recursively obtained through the bisection method as visible end points. The visible end points and the unoccluded visible vertices form a new polygon, which is used as the visible area corresponding to the facade, and the area of the polygon is used as the size of the facade's visible area. It should be noted that, in the case of visible facade 403, when the distances between the two dividing points and their corresponding occluded vertices are equal, the visible area may be a rectangle.
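  • A minimal sketch of obtaining such a dividing point by bisection along a facade edge, assuming a visibility predicate such as the point_visible sketch above and assuming visibility changes only once along the edge:

```python
def dividing_point(occluded_end, visible_end, is_visible, iters=20):
    """Bisect the edge between an occluded vertex and a visible vertex to
    find the point separating the occluded part from the visible part.
    is_visible(point) -> bool is a visibility predicate, e.g. built on the
    point_visible sketch above; visibility is assumed monotone along the edge."""
    lerp = lambda t: tuple(a + t * (b - a)
                           for a, b in zip(occluded_end, visible_end))
    lo, hi = 0.0, 1.0          # lo: occluded side, hi: visible side
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if is_visible(lerp(mid)):
            hi = mid            # boundary lies between lo and mid
        else:
            lo = mid
    return lerp((lo + hi) / 2.0)
```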
  • In a possible implementation, when the two occluded vertices are non-diagonal vertices, that is, the two occluded vertices share an adjacent edge and that shared edge is not completely occluded, the adjacent edges corresponding to each occluded vertex respectively have dividing points corresponding to that occluded vertex. The dividing points corresponding to the two occluded vertices are obtained as visible end points and, together with the unoccluded vertices, form a new polygon; this polygon is used as the visible area corresponding to the facade, and its area is used as the size of the facade's visible area.
  • Similarly, when the two occluded vertices are diagonal vertices, the adjacent edges corresponding to each occluded vertex also have dividing points corresponding to that occluded vertex; these dividing points are obtained as visible end points by the bisection method and, together with the unoccluded vertices, form the polygon that is used as the visible area of the facade, whose area is used as the size of the facade's visible area.
  • Step 305 Determine a labeling area from the visible area of the labeling facade, where the labeling area is the area with the largest area among the areas of the first shape included in the visible area of the labeling facade.
  • In a possible implementation, the occlusion information of the labeling facade is obtained, where the occlusion information is used to indicate the occluded vertices and the occluded edges of the labeling facade, and the labeling area is determined in the visible area of the labeling facade according to the occlusion information.
  • In a possible implementation, the first shape is a rectangle. In response to the occlusion information indicating that one vertex of the labeling facade is occluded, the diagonal vertex of the occluded vertex is used as the first target point; a first end point is determined on the non-adjacent edge corresponding to the first target point, so that the rectangle having the line segment between the first end point and the first target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade; and the area where that rectangle is located is determined as the labeling area.
  • FIG. 6 shows a schematic diagram of a method for calculating a marked range involved in an embodiment of the present application.
  • As shown in FIG. 6, when one vertex of the facade is occluded, its diagonal vertex a is used as the first target point, and another point is determined on a non-adjacent line segment as the first end point, where a non-adjacent line segment is a line segment of the polygon constituting the visible area of the facade that is not directly connected to a.
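  • Purely as an illustration, a brute-force sketch, in the facade's own 2D plane coordinates, of searching for the largest rectangle that keeps a given target point as one corner while staying inside the visible-area polygon; the edge-sampling search and the corner-only containment check are simplifying assumptions of this sketch, not steps stated in the patent:

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices given in
    the 2D coordinates of the facade plane."""
    x, y = pt
    inside = False
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def max_rect_from_base(base, poly, samples=200):
    """Coarse search: the opposite corner of the rectangle is sampled along
    every edge of the visible-area polygon, and a candidate is kept when all
    four (slightly inset) corners of the rectangle lie inside the polygon."""
    bx, by = base
    best_corner, best_area = None, 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        for i in range(samples + 1):
            t = i / samples
            cx, cy = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            area = abs(cx - bx) * abs(cy - by)
            if area <= best_area:
                continue
            mx, my = (bx + cx) / 2.0, (by + cy) / 2.0   # rectangle centre
            corners = [(bx, by), (cx, by), (cx, cy), (bx, cy)]
            if all(point_in_polygon((0.99 * px + 0.01 * mx,
                                     0.99 * py + 0.01 * my), poly)
                   for px, py in corners):
                best_corner, best_area = (cx, cy), area
    return best_corner, best_area
```

  • In the FIG. 6 situation, for instance, poly would be the facade rectangle with the occluded corner replaced by its two dividing points, and base would be the diagonal vertex a.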
  • In a possible implementation, the first shape is a rectangle. When the occlusion information indicates that there are two occluded vertices on the labeling facade and the edge between the two occluded vertices is completely occluded, the unoccluded vertex whose adjacent edges have the largest sum of lengths of unoccluded parts is obtained as the second target point; a second end point is determined on the non-adjacent edge corresponding to the second target point, where the second end point is in the visible area of the labeling facade and the rectangle having the line segment between the second end point and the second target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade; and the area where that rectangle is located is determined as the labeling area.
  • In a possible implementation, the first shape is a rectangle. When the occlusion information indicates that there are two occluded vertices on the labeling facade and no edge is completely occluded, a target point set is obtained, where the target point set includes the unoccluded vertices of the labeling facade and the dividing points on the adjacent edges of the two occluded vertices, and a dividing point is used to distinguish the occluded area of the visible facade from the unoccluded area; a third end point is determined in the visible area of the labeling facade, so that the rectangle having the line segment between the third end point and a third target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade, where the third target point is one of the points in the target point set; and the area where that rectangle is located is determined as the labeling area.
  • FIG. 7 shows a schematic diagram of a method for calculating a marked range involved in an embodiment of the present application.
  • As shown in FIG. 7, when two vertices of the visible facade are occluded, the situation can be divided into three cases for analysis.
  • In the first case 701, the occluded area extends past an edge, that is, the edge between the two occluded vertices is completely blocked, and the visible area is a quadrilateral in which point a and point b are the two unoccluded vertices of the labeling facade. It can be seen from 701 that the sum of the lengths of the unoccluded parts of the two edges adjacent to a is greater than that of the two edges adjacent to b. Therefore, a is used as the second target vertex (base point), and a point on a non-adjacent line segment of a is found as the second end point of the largest rectangle formed with a, namely point c1 in 701. The rectangle formed with a and c1 as diagonal points is the rectangle with the largest area in the visible area corresponding to 701.
  • In the second case 702, when the occluded area does not extend past an edge and the two unoccluded vertices of the visible facade are diagonal points, the visible area may form an octagon as shown in 702; in the third case 703, when the occluded area does not extend past an edge and the two occluded vertices of the visible facade share an adjacent edge, the visible area may form an octagon as shown in 703.
  • In these two cases, the target point set of the labeling facade is obtained, that is, the unoccluded vertices of the labeling facade and the corresponding dividing points on the adjacent edges of the two occluded vertices; each point in the target point set is taken in turn as a base point, and the rectangle with the largest area in the visible area of the labeling facade corresponding to that point is obtained.
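  • Reusing the hypothetical max_rect_from_base helper sketched for FIG. 6, the base-point search described for these two cases could be expressed as follows (again only an illustrative sketch):

```python
def max_rect_over_base_points(target_points, visible_poly):
    """Try every candidate base point (the unoccluded vertices plus the
    dividing points on the adjacent edges of the occluded vertices) and keep
    the rectangle with the largest area in the visible-area polygon."""
    best_base, best_corner, best_area = None, None, 0.0
    for base in target_points:
        corner, area = max_rect_from_base(base, visible_poly)
        if area > best_area:
            best_base, best_corner, best_area = base, corner, area
    return best_base, best_corner, best_area
```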
  • In a possible implementation, the first shape is a rectangle. When the occlusion information indicates that there are three occluded vertices on the labeling facade, the unoccluded vertex of the labeling facade is obtained as the fourth target point, and the area where the rectangle formed by the fourth target point and the dividing points on its two adjacent edges is located is determined as the labeling area, where a dividing point is used to distinguish the occluded area of the visible facade from the unoccluded area.
  • FIG. 8 shows a schematic diagram of a method for calculating a marked range involved in an embodiment of the present application.
  • As shown in FIG. 8, when three vertices of the facade are occluded, the unoccluded vertex, i.e., point a, and the corresponding dividing points c1 and c2 on the edges adjacent to point a form the rectangle with the largest area, and the area corresponding to that rectangle is obtained as the labeling area.
  • In a possible implementation, the first shape is a rectangle. When the occlusion information indicates that there are three occluded vertices on the labeling facade, the unoccluded vertex of the labeling facade is obtained as the fourth target point; a fourth end point is determined in the visible area of the labeling facade, so that the rectangle having the line segment between the fourth end point and the fourth target point as its diagonal is the rectangle with the largest area in the visible area of the labeling facade; and the area where that rectangle is located is determined as the labeling area.
  • That is, the fourth end point in the visible area can also be obtained directly according to the unoccluded vertex of the labeling facade, and the area corresponding to the largest rectangle formed by the fourth end point and the unoccluded vertex is obtained as the labeling area. The rectangle formed in this way is the same as the largest rectangle corresponding to FIG. 8.
  • The labeling area is an area of a specified shape in the visible facade. In a possible implementation, the labeling area is a rectangular area in the visible facade; the labeling area may also be an area with another specified shape, such as a circle or a triangle, which is not limited in this application.
  • Step 306 Present the labeling information of the target object on the labeling elevation of the target object presented on the display interface.
  • In a possible implementation, the labeling information of the target object may be presented on the labeling facade of the target object presented on the display interface according to depth information. The model corresponding to the target object is constructed based on 3D data, so the labeling information may have a depth attribute within the labeling area of the labeling facade; that is, the labeling information presented for the model corresponding to the target object may be labeling information with three-dimensional attributes.
  • In a possible implementation, a three-dimensional model of the labeling information is generated based on the size of the labeling area, and the three-dimensional model of the labeling information is presented on a parallel plane of the labeling area presented on the display interface, where the parallel plane is a plane located in front of the labeling facade and parallel to the labeling facade. That is, the size of the labeling information is generated based on the size of the labeling area, and based on that size a three-dimensional model of the labeling information is displayed for the labeling area on the parallel plane of the labeling facade and presented on the display interface corresponding to the target position. In other words, the size of the labeling information is related to the size of the labeling area: the larger the labeling area, the larger the labeling information, and the larger the 3D model of the labeling information displayed on the display interface.
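  • A small sketch of sizing and placing the label's 3D model from the labeling area, assuming the labeling area is described by its centre, width, and height in scene coordinates; the offset value and the aspect-ratio fitting rule are assumptions of this sketch:

```python
import numpy as np

def place_label_model(area_center, area_width, area_height,
                      facade_normal, label_aspect, offset=0.2):
    """Scale the label's 3D model so it fits inside the labeling area while
    keeping its width/height ratio, and place it on a plane parallel to the
    labeling facade, slightly in front of it along the facade normal."""
    n = np.asarray(facade_normal, float)
    n = n / np.linalg.norm(n)
    height = min(area_height, area_width / label_aspect)
    width = height * label_aspect
    position = np.asarray(area_center, float) + offset * n
    return position, (width, height)
```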
  • In a possible implementation, the display direction of the labeling information is determined according to the horizontal length and the vertical length of the labeling area of the labeling facade, and the labeling information of the target object is presented on the labeling facade of the target object presented on the display interface according to that display direction. For example, when the horizontal length of the labeling area is greater than its vertical length, the labeling information may be presented on the labeling area in the horizontal direction of the labeling area; when the vertical length of the labeling area is greater than its horizontal length, the labeling information may be presented on the labeling area in the vertical direction of the labeling area.
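  • For example, the rule described above can be expressed as a trivial sketch:

```python
def label_direction(area_width, area_height):
    """Lay the label out horizontally when the labeling area is wider than it
    is tall, and vertically otherwise."""
    return "horizontal" if area_width > area_height else "vertical"
```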
  • In a possible implementation, the distance information between the target object and the display interface of the specified scene is obtained; when the distance is less than a threshold, the labeling information corresponding to the target object is presented on the labeling area corresponding to the labeling facade displayed on the display interface; when the distance is not less than the threshold, the labeling information corresponding to the target object is not presented on the labeling facade displayed on the display interface.
  • In a possible implementation, the labeling information of the target object is displayed on the display interface in the area corresponding to the visible area of the labeling facade. When the labeling area is small, the labeling information displayed in the labeling area on the display interface may be difficult to identify; in this case, the labeling information is not displayed on the labeling facade of the target object presented on the display interface.
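  • These display conditions could be combined as in the following sketch; the threshold values are placeholders, not values given by the patent:

```python
def should_show_label(distance_to_interface, projected_label_area,
                      max_distance=300.0, min_area=0.5):
    """Hide the label when the target object is too far from the display
    interface or when the projected labeling area is too small to read."""
    return (distance_to_interface < max_distance
            and projected_label_area >= min_area)
```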
  • FIG. 9 shows a flowchart of a data resource involved in an embodiment of the present application.
  • The solutions shown in the embodiments of the present application can be implemented by a terminal on which AR SDK (Software Development Kit) platform software and a three-dimensional building data model are deployed. The program code runs in the host memory and/or GPU (Graphics Processing Unit) memory of the terminal device and loads the building models around the current location from the server. The terminal can then perform the calculation and rendering of text annotations, including calculating the visible areas of buildings, the areas of the projected visible areas, and the display ranges of text annotations, as well as displaying text annotations and resolving display conflicts.
  • FIG. 10 shows a structural diagram of an annotation presentation method involved in an embodiment of the present application.
  • The method structure consists of bottom-layer support 1001, annotation text calculation 1002, and rendering display 1003, and the method is executed by an electronic device. The method shown in this application can be developed on the Unity 3D rendering engine, which provides an encapsulated 3D rendering pipeline and functions such as collision detection and 3D rendering, and loads the 3D models of city buildings according to the corresponding 3D building data, thereby providing the underlying support for this method.
  • The electronic device determines the visible facades of the building according to its minimum bounding box; then, according to the minimum bounding box of the building, it determines the visible vertices corresponding to each visible facade and the extent of the facade's visible area; it then projects the visible areas in the direction corresponding to the user and selects the facade with the largest projected area as the display facade. The largest text annotation display range on the display facade is then calculated, and the text annotation is displayed in the corresponding text annotation display area on the three-dimensional model of the building.
  • When the content of the text annotation is rendered and displayed on the facade, occlusion may occur when the projection is displayed on the screen: a text annotation that is closer may block a text annotation behind it, forming a conflict, so conflicting text annotation displays need to be handled.
  • the camera starts to simulate the real-time pose, simulates the real-time position and angle of the user through the electronic device, and verifies the correctness of the display of the text annotation.
  • FIG. 11 shows a flow chart of calculating a visible facade of a building involved in an embodiment of the present application. As shown in Figure 11:
  • S1101: The electronic device determines the building to be annotated.
  • S1102: According to the three-dimensional model of the building, the triangle mesh of the model is obtained and the vertex coordinates of the building are derived; the minimum bounding box (a cuboid) of the building is computed from these vertex coordinates and taken as the target model of the building.
  • S1103: The vector Vcam from the center of the building to the camera is obtained, the normal vector Vnor of each facade of the target model is calculated, and the projection length L of Vnor onto Vcam is computed.
  • S1104: When L is greater than 0, the projection of Vnor onto Vcam points in the same direction as Vcam, so the facade corresponding to Vnor is visible in the Vcam direction; when L is less than 0, the projection points opposite to Vcam and the facade is not visible in the Vcam direction. S1105: The visible facades are added to the set of visible facades.
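  • As a minimal sketch of S1103–S1105, the visibility test for each facade of the bounding-box model reduces to the sign of the dot product between the facade normal Vnor and the building-center-to-camera vector Vcam. The axis-aligned box and the helper names below are assumptions made for illustration.

```python
import numpy as np

def visible_facades(box_min, box_max, camera_pos):
    """Return the facades of an axis-aligned bounding box that face the camera.

    A facade is visible when the projection of its outward normal Vnor onto
    the centre-to-camera vector Vcam is positive (S1103/S1104).
    """
    box_min, box_max = np.asarray(box_min, float), np.asarray(box_max, float)
    center = (box_min + box_max) / 2.0
    v_cam = np.asarray(camera_pos, float) - center            # Vcam

    # outward normals of the six facades of the box
    normals = {
        "+x": np.array([1, 0, 0]), "-x": np.array([-1, 0, 0]),
        "+y": np.array([0, 1, 0]), "-y": np.array([0, -1, 0]),
        "+z": np.array([0, 0, 1]), "-z": np.array([0, 0, -1]),
    }

    visible = []
    for name, v_nor in normals.items():
        l = np.dot(v_nor, v_cam) / np.linalg.norm(v_cam)       # projection length L
        if l > 0:                                              # S1104: facing the camera
            visible.append(name)
    return visible                                             # S1105: set of visible facades


if __name__ == "__main__":
    # camera to the +x side of and above a 10 x 10 x 30 building:
    # the +x and +z facades are reported as visible
    print(visible_facades([0, 0, 0], [10, 10, 30], camera_pos=[50, 5, 60]))
```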
  • FIG. 12 shows a flowchart of a method corresponding to the calculation of a viewable area involved in an embodiment of the present application.
  • S1201: The electronic device determines the building to be annotated. S1202: As in S1102, the minimum bounding box of the building is computed from the triangle mesh of its three-dimensional model and taken as the target model.
  • S1203: The visible facades in the set are traversed; according to the positional relationship between the camera and the facade vertices, it is determined whether the line connecting the camera and each vertex of a visible facade is occluded by a building, that is, whether the facade vertex is visible in the camera's field of view.
  • S1204: When a vertex is visible, it is added directly to the set of visible vertices; when a vertex is occluded, the dividing point between the occluded region and the unoccluded region on its adjacent edges is determined by bisection. S1205: The dividing point is added to the set of visible vertices as a visible endpoint. S1206: After all visible vertices of the building have been identified, the visible region of the building model is formed from them, and the area of the visible region is computed according to the occlusion of the vertices.
  • S1207: According to the projection of the visible region along the camera's line of sight, the extent of the visible region on the screen is obtained, that is, the size of the range that the user can actually observe on the screen.
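  • The occlusion test in S1203–S1205 can be sketched with a generic ray-casting predicate plus a bisection search along an edge for the dividing point between the occluded and unoccluded parts. The `is_occluded` callback below stands in for the scene's actual intersection test and is an assumption of this sketch.

```python
import numpy as np

def dividing_point(visible_pt, occluded_pt, is_occluded, camera, tol=1e-3):
    """Bisection search (S1204) along the edge from a visible vertex to an
    occluded vertex for the point separating the unoccluded and occluded parts.

    `is_occluded(point, camera)` must return True when the segment from the
    camera to `point` is blocked by another building (scene-specific test).
    """
    a = np.asarray(visible_pt, float)      # known visible end
    b = np.asarray(occluded_pt, float)     # known occluded end
    while np.linalg.norm(b - a) > tol:
        mid = (a + b) / 2.0
        if is_occluded(mid, camera):
            b = mid                        # dividing point lies closer to the visible end
        else:
            a = mid
    return (a + b) / 2.0


def visible_vertex_set(facade_vertices, is_occluded, camera):
    """Collect visible vertices and dividing points of one facade (S1203-S1205)."""
    flags = [not is_occluded(v, camera) for v in facade_vertices]
    result = []
    n = len(facade_vertices)
    for i, (v, vis) in enumerate(zip(facade_vertices, flags)):
        if vis:
            result.append(np.asarray(v, float))        # directly visible vertex
        else:
            # replace the occluded vertex by dividing points on its adjacent edges
            for j in (i - 1, (i + 1) % n):
                if flags[j]:
                    result.append(dividing_point(facade_vertices[j], v,
                                                 is_occluded, camera))
    return result


if __name__ == "__main__":
    # toy occluder: every point with y > 6 is hidden from the camera
    occ = lambda p, cam: p[1] > 6
    quad = [np.array([0, 0, 0]), np.array([10, 0, 0]),
            np.array([10, 10, 0]), np.array([0, 10, 0])]
    for p in visible_vertex_set(quad, occ, camera=np.array([5, -20, 0])):
        print(np.round(p, 2))
```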
  • FIG. 13 shows a calculation flowchart corresponding to a text labeling range involved in an embodiment of the present application.
  • The above yields, for each visible facade, the extent of its visible region as observed on the screen, that is, the projected area of the facade's visible region.
  • S1301: The projected visible areas are compared, and the facade whose visible region has the largest area is taken as the annotation facade. S1302: According to the occlusion of the vertices, the rectangle with the largest area within the visible region is computed, and the region of that rectangle is taken as the annotation range, on which the annotation information of the building can be displayed.
  • S1303: Before the annotation information is displayed on the annotation range, it must be decided whether to display it at all, by jointly evaluating the area of the annotation range and the distance between the building and the camera. For example, when the area of the annotation range is small, the annotation displayed in it is correspondingly small and may be difficult for the user to read, so the annotation range may not be displayed; when the building is far from the camera, its display area on the terminal may likewise be small and the user may again be unable to read the annotation, so the annotation range may not be displayed.
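  • A sketch of the projected-area comparison in S1301: each facade's visible polygon is projected onto the plane perpendicular to the viewing direction and its area is computed with the shoelace formula, and the facade with the largest projected area becomes the annotation facade. The simple orthographic projection used here is an assumption of this sketch; a real renderer would use its own camera projection.

```python
import numpy as np

def projected_area(polygon_3d, view_dir):
    """Area of a planar 3D polygon projected onto the plane perpendicular to view_dir."""
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # build an orthonormal basis (u, v) of the projection plane
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, view_dir)) > 0.99:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(view_dir, helper); u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)
    pts2d = [(np.dot(np.asarray(p, float), u), np.dot(np.asarray(p, float), v))
             for p in polygon_3d]
    # shoelace formula on the projected 2D polygon
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts2d, pts2d[1:] + pts2d[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0


def choose_annotation_facade(visible_regions, view_dir):
    """S1301: pick the facade whose visible region has the largest projected area."""
    return max(visible_regions.items(),
               key=lambda item: projected_area(item[1], view_dir))[0]


if __name__ == "__main__":
    regions = {
        "south": [(0, 0, 0), (10, 0, 0), (10, 0, 30), (0, 0, 30)],   # facing the viewer
        "east":  [(10, 0, 0), (10, 10, 0), (10, 10, 30), (10, 0, 30)],
    }
    print(choose_annotation_facade(regions, view_dir=(0, 1, 0.2)))   # -> "south"
```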
  • FIG. 14 shows a flowchart of a real-time camera pose simulation involved in an embodiment of the present application.
  • The steps shown in FIG. 14 can be executed in real time during the display of the above annotations: corresponding commands are input to the terminal to control camera movement, view-angle rotation and other response events, simulating how a person browses the scene, in order to verify the building information annotation method shown in the embodiment of the present application. The following takes a terminal implemented as a personal computer as an example.
  • S1401: The user inputs corresponding instructions through external devices such as a keyboard and a mouse to control the camera object in the simulated three-dimensional scene.
  • S1402: The electronic device makes the camera respond in real time to translation and view-angle rotation events according to the keyboard and mouse input.
  • S1403: At run time, the external input drives the camera to view the city from a human perspective, the annotation information of the buildings is updated in real time, and, according to how the building annotations are displayed, it is judged whether the information annotation method of this embodiment displays correctly.
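  • A minimal sketch of the simulated camera controller in S1401–S1402, assuming a hypothetical per-frame key/mouse state rather than any particular engine API:

```python
import math

class SimulatedCamera:
    """Camera object that responds to translation and view-rotation events (S1402)."""

    def __init__(self, position=(0.0, 0.0, 1.7), yaw=0.0, pitch=0.0):
        self.position = list(position)
        self.yaw = yaw        # rotation around the vertical axis, radians
        self.pitch = pitch    # up/down view angle, radians

    def update(self, keys, mouse_dx, mouse_dy, dt,
               move_speed=5.0, turn_speed=0.002):
        """Apply one frame of keyboard translation and mouse view rotation."""
        # view rotation from mouse movement
        self.yaw += mouse_dx * turn_speed
        self.pitch = max(-1.5, min(1.5, self.pitch - mouse_dy * turn_speed))

        # translation in the horizontal plane from WASD-style keys
        forward = (math.cos(self.yaw), math.sin(self.yaw))
        right = (-forward[1], forward[0])
        dx = dy = 0.0
        if "w" in keys: dx += forward[0]; dy += forward[1]
        if "s" in keys: dx -= forward[0]; dy -= forward[1]
        if "d" in keys: dx += right[0];   dy += right[1]
        if "a" in keys: dx -= right[0];   dy -= right[1]
        self.position[0] += dx * move_speed * dt
        self.position[1] += dy * move_speed * dt


if __name__ == "__main__":
    cam = SimulatedCamera()
    cam.update(keys={"w"}, mouse_dx=30, mouse_dy=0, dt=0.016)
    print(cam.position, cam.yaw)   # the camera has moved forward and turned slightly
```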
  • In addition to text annotation of buildings, this application can also perform information annotation in various forms such as pictures and videos.
  • For example, in an AR scene, video advertisements can be played on building facades, new car models can be placed for exhibition, store information can be introduced on shopping mall buildings, and the history of museum collections can be introduced, among other application scenarios.
  • Once the visible plane of a building is recognized in the AR scene, multimedia information can be displayed in the scene, and the user can receive the information of the AR scene from different angles, enriching the AR experience.
  • The embodiment of the present application further proposes calculating the visible region of the building model in the three-dimensional scene and the projected area of that visible region along the direction of the human line of sight, so as to determine the facade direction in which the building annotation is displayed.
  • First, the visible region of the building model in the three-dimensional scene is calculated. The occlusion of the facade vertices is classified and computed, the area within the visible region is calculated, and the visible region is projected along the direction of the human line of sight to obtain the extent of the facade's visible region on the screen display; then the visible region of the building facade is obtained and the text annotation is displayed within it.
  • The position of the text annotation is thus within the visible region of the building, and the direction of the text annotation follows the facade direction of the building.
  • The text annotation is three-dimensional and is not occluded by the buildings of the current frame. A three-dimensional text annotation varies in size with distance on the screen: text closer to the user is displayed larger, and the text direction is consistent with the direction of the building, which better expresses the affiliation between the text annotation and the annotated building.
  • Moreover, by calculating the display range of the text annotation within the visible region, the annotation text is adaptively displayed for different viewing angles.
  • The solution shown in the embodiment of the present application performs this calculation dynamically according to the range of the visible region, and computes the rectangle with the largest area within it.
  • The visible region is a two-dimensional plane whose shape changes with the user's viewing angle. The base point of the rectangle must be determined according to the visibility of the facade vertices and the relationships between the vertices, and the position of the rectangle's diagonally opposite point must be determined to find the rectangle with the largest area.
  • When the user browses with an AR application, the building content in the scene changes, and the text annotations of the buildings change with the scene, presenting the annotations to the user from a better perspective.
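  • For the single-occluded-vertex case (one corner of the facade is hidden and the unoccluded diagonal vertex is used as the base point), the largest axis-aligned rectangle can be found by comparing the candidates bounded by the dividing points. The sketch below assumes the facade has been unrolled into 2D facade coordinates with the occluded corner at the top-right and the occluded part approximated as an axis-aligned region; it illustrates the idea rather than the exact procedure of this application.

```python
def largest_rect_one_occluded_corner(width, height, cx, cy):
    """Largest axis-aligned rectangle with one corner at the unoccluded
    diagonal vertex (0, 0), when the corner (width, height) is occluded.

    cx and cy are the dividing points on the top and right edges: the region
    [cx, width] x [cy, height] is treated as occluded.
    """
    assert 0.0 <= cx <= width and 0.0 <= cy <= height
    candidates = [
        (cx * height, (0.0, 0.0, cx, height)),     # stop before the occluder in x
        (width * cy,  (0.0, 0.0, width, cy)),      # stop before the occluder in y
    ]
    area, rect = max(candidates)
    return area, rect   # rect = (x0, y0, x1, y1), the annotation region


if __name__ == "__main__":
    # 20 m x 10 m facade, top-right corner hidden beyond x = 14 and above y = 6
    print(largest_rect_one_occluded_corner(20.0, 10.0, cx=14.0, cy=6.0))
    # -> (140.0, (0.0, 0.0, 14.0, 10.0))
```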
  • FIG. 15 is a schematic diagram illustrating a comparison between an embodiment of the present application and an AR map technology.
  • As shown in FIG. 15, in the solution 1501 shown in the embodiment of the present application, the annotations of buildings change under different viewing angles.
  • The buildings are annotated within the visible range of the scene, and the annotation direction follows the direction of the building facade.
  • The user can therefore easily associate the annotation with the annotated building, and the annotation exhibits near-and-far changes for the user.
  • The annotation can provide a certain degree of position and direction guidance, bringing the user a better AR experience.
  • In the AR map technology 1502, by contrast, the annotation is not strongly associated with the annotated building.
  • Annotation in an AR scene has the following characteristics (see the sketch below). 1: When the user is browsing, the building annotation information of the current frame cannot be excessive; priority is sorted according to the distance to the user and the size of the visible region, and the closer to the user and the larger the visible region, the higher the priority. 2: When the vertical axis of the visible region is longer than the horizontal axis, the building information can be displayed in a vertical arrangement. 3: When the text annotation range is too small to effectively display the building information, it can be ignored.
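  • The three characteristics above (limit the number of annotations per frame by a distance/area priority, use a vertical layout when the visible region is taller than wide, and skip regions that are too small) can be sketched as follows; the scoring weights and the per-frame limit are illustrative assumptions.

```python
def select_annotations(buildings, max_per_frame=5, min_area=2.0):
    """Rank candidate building annotations for the current frame.

    Each entry in `buildings` is a dict with the keys
    'name', 'distance', 'visible_area', 'region_w', 'region_h'.
    """
    ranked = []
    for b in buildings:
        if b["visible_area"] < min_area:
            continue                                            # too small to be legible: ignore
        priority = b["visible_area"] / (1.0 + b["distance"])    # nearer and larger -> higher
        layout = "vertical" if b["region_h"] > b["region_w"] else "horizontal"
        ranked.append((priority, b["name"], layout))
    ranked.sort(reverse=True)
    return ranked[:max_per_frame]


if __name__ == "__main__":
    demo = [
        {"name": "Mall A",  "distance": 40, "visible_area": 120, "region_w": 12, "region_h": 5},
        {"name": "Tower B", "distance": 80, "visible_area": 200, "region_w": 4,  "region_h": 18},
        {"name": "Kiosk C", "distance": 15, "visible_area": 1.2, "region_w": 1,  "region_h": 1},
    ]
    for priority, name, layout in select_annotations(demo):
        print(f"{name}: priority={priority:.2f}, layout={layout}")
```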
  • The information annotation method adopted by the AR map technology 1502 in AR scenes is to identify the extent of a building on the screen and tile the text annotation over that screen extent.
  • The information annotation is still a two-dimensional plane, the depth information of the AR scene is not used, the tiled annotation is not strongly related to the building to which it belongs, and it occludes other content of the scene.
  • The difference of the embodiment 1501 of the present application from the AR map technology 1502 is that three-dimensional information annotations are used: the annotation is displayed in the visible region of the building with its normal direction consistent with the normal of the facade, so the annotation is more closely related to the building to which it belongs, becomes part of the AR scene, occludes other content of the scene less, and gives the user a better feeling in the AR experience.
  • In the AR map technology 1502, the annotations of buildings in the three-dimensional scene are static, so the user may encounter blind spots or reversed annotation text when viewing from different angles. By calculating the visible region of the building from the user's viewing angle in the current frame, the text content is displayed within the visible region; in the solution shown in the embodiment of the present application, when the user browses a building from different viewing angles, the annotation information of the building is automatically adjusted and changes with the viewing angle.
  • This method enables the annotated information to be presented dynamically, making the information annotation in the 3D scene more flexible and intelligent.
  • the annotation of buildings can also be expressed in multimedia methods such as pictures and videos.
  • Annotation information is an important interactive entrance in AR applications.
  • In the AR map technology 1502, the annotations of buildings in the three-dimensional scene are tiled on the screen, are not closely related to the buildings to which they belong, and occlude other scene content; in the solution shown in the embodiment of the present application, three-dimensional annotations are computed in the three-dimensional scene so that the annotation direction is consistent with the facade direction of the building, and the near-and-far changes of the annotation information are presented on the screen.
  • In this way the association between the annotation and the annotated building is enhanced; since the text is a linear expression, it can indicate the orientation of the building to a certain extent, and the user's three-dimensional experience in the AR scene is further enriched.
  • To sum up, in the solution shown in the embodiment of the present application, the visible facades among the outer facades of the target object are obtained, and, according to the projection of the visible facades on the display interface, the annotation facade is determined from the visible facades of the target object, and the annotation information of the target object is presented in the region corresponding to the visible area of the annotation facade.
  • In this way, when displaying the annotation of the target object, the annotation facade can be selected according to the target position and the projection of the visible facades on the display interface of the specified scene, so that a facade with a larger visible area can be dynamically selected to display the annotation information, which improves the display effect of the annotation information.
  • FIG. 16 is a method flowchart of a method for presenting object annotation information according to an exemplary embodiment.
  • The method may be performed by an electronic device, where the electronic device may be a terminal or a server, or may include a terminal and a server; the terminal may be the user terminal 140 in the embodiment shown in FIG. 1 above, and the server may be the server 120 in the embodiment shown in FIG. 1 above.
  • the process of the object labeling information presentation method may include the following steps:
  • Step 1601 acquiring a target object in a specified scene.
  • When the specified scene is a VR scene, the target object may be an object constructed by the VR device according to the three-dimensional model data; when the specified scene is an AR scene, the target object may be an object photographed by the AR device through a camera component.
  • Step 1602 Determine candidate labeling regions respectively from the visible regions of the at least two visible elevations.
  • the candidate labeling area is the area with the largest area in the area of the second shape included in the visible area of the corresponding facade.
  • the method for determining the candidate labeling area is similar to the method for confirming the labeling area in the labeling elevation shown in step 305 corresponding to FIG. 3 , and details are not repeated here.
  • Step 1603 Obtain the corresponding candidate marked areas of the at least two visible elevations as the to-be-projected areas of the at least two visible elevations.
  • The candidate annotation area is the region of the second shape with the largest area within the visible area of the corresponding facade; that is to say, in the embodiment of the present application, a part of the visible area of a visible facade may be used as the to-be-projected area, i.e. the candidate annotation area can be used as the to-be-projected area.
  • In a possible implementation, the candidate annotation area is used as the to-be-projected area and projected onto the display interface of the specified scene, and the projection regions of the candidate annotation areas of the at least two visible facades on the display interface of the specified scene are respectively obtained.
  • Step 1604 according to the respective projection areas of the visible areas of the at least two visible elevations on the display interface of the designated scene, determine a labeling elevation from the at least two visible elevations.
  • In a possible implementation, the annotation facade is determined from the at least two visible facades according to the respective projection regions of their candidate annotation areas on the display interface of the specified scene.
  • Before determining the annotation facade of the target object, the candidate annotation area of the specified shape can first be obtained for each visible facade, each candidate annotation area is used as the to-be-projected area and projected onto the display interface of the specified scene, and the annotation facade is then determined according to the projected area of each to-be-projected area on the display interface. Because the angle formed between the target position and different visible facades differs, the projected area of the visible region of a certain visible facade may be larger while the projected area of its candidate annotation area is smaller; therefore the projected areas of the candidate annotation areas of the facades can be compared first, and the annotation facade determined according to the comparison result.
  • Step 1605: Corresponding to the candidate annotation area of the annotation facade, present the annotation information of the target object on the annotation facade presented on the display interface.
  • After the annotation facade is determined according to the candidate annotation areas of the facades, the annotation information of the target object may be presented in the region in which the candidate annotation area of the annotation facade is presented on the display interface. That is, the projected areas on the display interface of the largest regions of the specified shape of the visible facades are compared first, and the candidate annotation area of the visible facade whose projection has the largest area is used to present the annotation information (see the sketch below).
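  • Steps 1602–1605 differ from the earlier flow in that the largest candidate annotation rectangle of each visible facade is projected first, and the facade whose candidate rectangle has the largest projected area is chosen. A minimal sketch follows, reusing the `projected_area` helper from the earlier example and assuming each facade already carries its candidate rectangle as a 3D polygon.

```python
def choose_facade_by_candidate_region(candidate_regions, view_dir, projected_area):
    """Steps 1603-1604: project each facade's candidate annotation rectangle and
    keep the facade whose candidate rectangle has the largest projected area.

    `candidate_regions` maps facade name -> list of 3D corner points of the
    candidate rectangle; `projected_area` is the helper defined earlier.
    """
    best_facade, best_area = None, -1.0
    for facade, rect_3d in candidate_regions.items():
        area = projected_area(rect_3d, view_dir)   # area on the display interface
        if area > best_area:
            best_facade, best_area = facade, area
    return best_facade, best_area


# usage sketch (with the projected_area function from the earlier example):
# facade, area = choose_facade_by_candidate_region(
#     {"south": south_candidate_rect, "east": east_candidate_rect},
#     view_dir=(0, 1, 0.2),
#     projected_area=projected_area)
# the annotation information is then rendered inside that facade's candidate rectangle
```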
  • To sum up, in the solution shown in the embodiment of the present application, the visible facades among the outer facades of the target object are obtained in the specified scene, the annotation facade is determined from the visible facades of the target object according to the projection of the visible facades on the display interface, and the annotation information of the target object is presented in the region corresponding to the visible area of the annotation facade.
  • Through the above solution, when displaying the annotation of the target object, the annotation facade can be selected according to the target position and the projection of the visible facades on the display interface of the specified scene, which improves the display effect of the annotation information.
  • FIG. 17 shows a schematic flowchart of a method for presenting object annotation information.
  • As shown in FIG. 17, taking the method running on a user terminal 1700 in an AR map application scenario as an example, the user terminal 1700 constructs several three-dimensional building models 1702 in the direction the terminal is facing, according to the three-dimensional model data 1701 and the current position information and orientation information of the user terminal.
  • When a target building model among them is recognized, the visible facades 1703 of the model with respect to the direction of the user terminal are obtained, and according to these visible facades the visible region 1704 of each visible facade and the area of each visible region 1704 are obtained.
  • According to the projection 1705 of the visible regions in the direction of the user terminal, the visible facade whose visible region has the largest projected area in that direction is obtained as the annotation facade 1706.
  • Within the annotation facade, the largest region of the specified shape is obtained as the annotation region 1707 corresponding to the annotation facade; according to the annotation region 1707, the annotation information 1708 of the building is displayed on the corresponding three-dimensional building model and presented on the display interface.
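  • The flow of FIG. 17 can be summarised as a short pipeline. The function names below correspond to the numbered stages of the figure but are assumptions of this sketch, not APIs defined by this application; the earlier examples give possible implementations of the individual stages.

```python
def present_building_annotation(model_data, terminal_pose, scene,
                                build_models, get_visible_facades,
                                get_visible_region, projected_area,
                                largest_rectangle, render_annotation):
    """End-to-end sketch of FIG. 17, with each stage injected as a callable."""
    # 1701/1702: build the 3D building models in the terminal's viewing direction
    buildings = build_models(model_data, terminal_pose)

    for building in buildings:
        # 1703: visible facades of this building with respect to the terminal
        facades = get_visible_facades(building, terminal_pose)

        # 1704/1705: visible region of each facade and its projected area
        regions = {f: get_visible_region(f, scene, terminal_pose) for f in facades}
        areas = {f: projected_area(r, terminal_pose.view_dir) for f, r in regions.items()}
        if not areas:
            continue

        # 1706: the facade with the largest projected visible area is the annotation facade
        annotation_facade = max(areas, key=areas.get)

        # 1707: largest rectangle of the specified shape inside its visible region
        annotation_region = largest_rectangle(regions[annotation_facade])

        # 1708: render the building's annotation information in that region
        render_annotation(building, annotation_facade, annotation_region)
```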
  • Fig. 18 is a block diagram showing the structure of an apparatus for presenting object annotation information according to an exemplary embodiment.
  • the apparatus for presenting object annotation information may implement all or part of the steps in the method provided by the embodiment shown in FIG. 2 , FIG. 3 or FIG. 16 .
  • the device for presenting object annotation information may include:
  • a target object acquisition unit 1801 configured to acquire a target object in a specified scene, where the specified scene is a scene presented at the target position;
  • An annotation information presentation unit 1802, configured to present the annotation information of the target object on the annotation facade of the target object presented on the display interface, where the annotation facade is determined from at least two visible facades of the target object according to the respective projection regions that the visible facades present on the display interface, and a visible facade is a facade, among the outer facades of the target object, that is visible to the target position.
  • the specified scene is an augmented reality scene or a virtual reality scene presented at the target location.
  • the marked facade is the one with the largest area of the projection area presented on the display interface among the at least two visible facades.
  • the apparatus further includes:
  • a to-be-projected area acquisition unit, configured to acquire the to-be-projected areas of the at least two visible facades according to the visible areas of the at least two visible facades, where a visible area is the area of the corresponding visible facade that is visible to the target position in the specified scene;
  • a projection area acquisition unit, configured to project the to-be-projected areas of the at least two visible facades onto the display interface to obtain the projection regions that the visible areas of the at least two visible facades respectively present on the display interface.
  • In a possible implementation, the to-be-projected area acquisition unit is configured to acquire the entire visible area of each of the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  • the annotation information presentation unit 1802 includes:
  • an area determination subunit configured to determine a marked area from the visible area of the marked facade; the marked area is the area with the largest area in the first shape area included in the visible area of the marked facade;
  • the labeling information presentation subunit is configured to present labeling information of the target object on the labeling elevation of the target object presented on the display interface.
  • the area determination subunit includes:
  • an occlusion information acquisition subunit configured to acquire occlusion information of the labeled facade, where the occlusion information is used to indicate occluded vertices and occluded edges of the labeled facade;
  • the labeling area determining subunit is configured to determine the labeling area in the visible area of the labeling facade according to the occlusion information.
  • In a possible implementation, the first shape is a rectangle, and the annotation area determination subunit is configured to:
  • when the occlusion information indicates that there is one occluded vertex on the annotation facade, take the diagonally opposite vertex of the occluded vertex as the first target point;
  • determine a first endpoint on the non-adjacent edges corresponding to the first target point, such that the rectangle whose diagonal is the line segment between the first endpoint and the first target point is the rectangle with the largest area within the visible region of the annotation facade;
  • determine the region occupied by the rectangle whose diagonal is the line segment between the first endpoint and the first target point as the annotation region.
  • In a possible implementation, the first shape is a rectangle, and the annotation area determination subunit is configured to:
  • when the occlusion information indicates that there are two occluded vertices on the annotation facade and the edge between the two occluded vertices is completely occluded, obtain, among the unoccluded vertices of the annotation facade, the vertex whose adjacent edges have the largest sum of unoccluded lengths as the second target point;
  • determine a second endpoint on the non-adjacent edges corresponding to the second target point, where the second endpoint lies in the visible region of the annotation facade, and the rectangle whose diagonal is the line segment between the second endpoint and the second target point is the rectangle with the largest area within the visible region of the annotation facade;
  • determine the region occupied by the rectangle whose diagonal is the line segment between the second endpoint and the second target point as the annotation region.
  • In a possible implementation, the first shape is a rectangle, and the annotation area determination subunit is configured to:
  • when the occlusion information indicates that there are two occluded vertices on the annotation facade and no edge is completely occluded, acquire a target point set, where the target point set includes the unoccluded vertices of the annotation facade and the dividing points on the adjacent edges of the two occluded vertices, the dividing points being used to distinguish the occluded region and the unoccluded region of the visible facade;
  • determine a third endpoint within the visible region of the annotation facade, such that the rectangle whose diagonal is the line segment between the third endpoint and a third target point is the rectangle with the largest area within the visible region of the annotation facade, where the third target point is one point of the target point set;
  • determine the region occupied by the rectangle whose diagonal is the line segment between the third endpoint and the third target point as the annotation region.
  • In a possible implementation, the first shape is a rectangle, and the annotation area determination subunit is configured to:
  • when the occlusion information indicates that there are three occluded vertices on the annotation facade, acquire the unoccluded vertex of the annotation facade as the fourth target point;
  • determine the region occupied by the rectangle formed by the fourth target point and the dividing points on its two adjacent edges as the annotation region, where the dividing points are used to distinguish the occluded region and the unoccluded region of the visible facade.
  • the label information presentation subunit includes:
  • an annotation information model generation subunit, configured to generate a three-dimensional model of the annotation information based on the size of the annotation region;
  • an annotation information model presentation subunit, configured to present the three-dimensional model of the annotation information on a parallel plane of the annotation region presented on the display interface, where the parallel plane is located in front of the annotation facade and parallel to the annotation facade.
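  • A minimal sketch of the two subunits above: the size of the 3D annotation model is derived from the annotation region, and the model is placed on a plane offset a small distance in front of the annotation facade along its outward normal. The offset value and scaling rule below are illustrative assumptions.

```python
import numpy as np

def place_annotation_model(region_center, region_width, region_height,
                           facade_normal, text_aspect_ratio,
                           offset=0.2, fill_ratio=0.9):
    """Return the centre, width and height of the 3D annotation model.

    The model is scaled to fit inside the annotation region (keeping the
    text aspect ratio) and placed on a plane parallel to the annotation
    facade, `offset` metres in front of it along the outward normal.
    """
    normal = np.asarray(facade_normal, float)
    normal = normal / np.linalg.norm(normal)

    # scale the annotation so that it fits the region while keeping its aspect ratio
    width = region_width * fill_ratio
    height = width / text_aspect_ratio
    if height > region_height * fill_ratio:
        height = region_height * fill_ratio
        width = height * text_aspect_ratio

    center = np.asarray(region_center, float) + offset * normal
    return center, width, height


if __name__ == "__main__":
    c, w, h = place_annotation_model(region_center=(5.0, 0.0, 12.0),
                                     region_width=8.0, region_height=3.0,
                                     facade_normal=(0.0, -1.0, 0.0),
                                     text_aspect_ratio=6.0)
    print(c, round(w, 2), round(h, 2))   # centred 0.2 m in front of the facade
```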
  • the to-be-projected area acquisition unit includes:
  • a candidate annotation area determination subunit, configured to respectively determine candidate annotation areas from the visible areas of the at least two visible facades, where a candidate annotation area is the region with the largest area among the regions of the second shape contained in the visible area of the corresponding facade;
  • the to-be-projected area acquisition subunit is configured to acquire the corresponding candidate marked areas of the at least two visible elevations as the to-be-projected areas of the at least two visible elevations.
  • annotation information presentation unit 1802 is further configured to:
  • the labeling information of the target object is presented on the labeling facade displayed on the display interface corresponding to the candidate labeling area corresponding to the labeling facade.
  • In a possible implementation, the annotation information presentation unit 1802 is configured to present the annotation information of the target object on the annotation facade presented on the display interface when the area of the projection region of the visible area of the annotation facade on the display interface is greater than a specified area threshold.
  • To sum up, in the solution shown in the embodiment of the present application, the visible facades among the outer facades of the target object are obtained, the annotation facade is determined from the visible facades of the target object according to the projection of the visible facades on the display interface, and the annotation information of the target object is presented in the region corresponding to the visible area of the annotation facade, which improves the display effect of the annotation information.
  • the electronic device includes corresponding hardware structures and/or software modules (or units) for performing each function.
  • the embodiments of this application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of the technical solutions of the embodiments of the present application.
  • the electronic device may be divided into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and other division methods may be used in actual implementation.
  • FIG. 19 shows a possible schematic structural diagram of the electronic device involved in the above embodiment.
  • the electronic device 1900 includes: a processing unit 1902 and a communication unit 1903 .
  • the processing unit 1902 is used to control and manage the actions of the electronic device 1900.
  • For example, when the electronic device 1900 is a user terminal, the processing unit 1902 is configured to support the electronic device 1900 in performing steps 21 to 22 in the embodiment shown in FIG. 2, steps 301 to 306 in the embodiment shown in FIG. 3, steps 1601 to 1605 in the embodiment shown in FIG. 16, and/or other steps for performing the techniques described herein.
  • the electronic device 1900 may further include a storage unit 1901 for storing program codes and data of the electronic device 1900 .
  • the storage unit 1901 stores the three-dimensional model data described above.
  • The processing unit 1902 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the communication unit 1903 may be a communication interface, a transceiver, a transceiver circuit, etc., where the communication interface is a general term and may include one or more interfaces.
  • the storage unit 1901 may be a memory.
  • When the processing unit 1902 is a processor, the communication unit 1903 is a communication interface, and the storage unit 1901 is a memory, the electronic device involved in this embodiment of the present application may be the electronic device shown in FIG. 20.
  • the electronic device 2010 includes: a processor 2012 , a communication interface 2013 , and a memory 2011 .
  • the electronic device 2010 may also include a bus 2014 .
  • the communication interface 2013, the processor 2012 and the memory 2011 can be connected to each other through the bus 2014;
  • the bus 2014 can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 2014 can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is shown in FIG. 20, but it does not mean that there is only one bus or one type of bus.
  • the electronic device shown in FIG. 19 or FIG. 20 may be a user terminal or a server.
  • the steps of the method or algorithm described in conjunction with the disclosure of the embodiments of this application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules (or units), and the software modules (or units) can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage medium may reside in an ASIC.
  • the ASIC may be located in an electronic device.
  • the processor and storage medium may also exist in the electronic device as discrete components.
  • the application also provides a computer program product or computer program, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the above method for presenting object annotation information.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to a method, apparatus, electronic device and storage medium for presenting object annotation information. The method relates to the field of computer vision. The method includes: acquiring a target object in a specified scene, the specified scene being a scene presented at a target position; and presenting annotation information of the target object on an annotation facade of the target object presented on a display interface, the annotation facade being determined from at least two visible facades of the target object according to the respective projection regions that the visible facades present on the display interface, a visible facade being a facade, among the outer facades of the target object, that is visible to the target position. When displaying the annotation of the target object in an AR or VR scene, the above method can select the annotation facade according to the projection of the visible facades on the display interface of the specified scene, so that a facade with a larger visible area can be dynamically selected in the virtual reality/augmented reality scene to display the annotation information, improving the display effect of the annotation information.

Description

物体标注信息呈现方法、装置、电子设备及存储介质
本申请要求于2020年10月30日提交的、申请号为202011191117.4、发明名称为“物体标注信息呈现方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机视觉领域,特别涉及一种物体标注信息呈现方法、装置、电子设备及存储介质。
背景技术
随着智能终端的不断发展,用户可以通过AR地图,通过当前场景所处位置获取周围建筑物的信息以便用户选择方向。
在相关技术中,开发者通常在AR地图的建筑物对应的模型上,预先设置与该建筑物对应的信息标注,当识别用户当前场景中的建筑物或POI信息,通过该建筑物或POI信息对应的位置,将建筑物模型上的静态信息标注显示在用户终端中的AR地图,其中标注可以是三维标注,用户可以在一定视角下清晰地看到建筑物对应的标注内容。
然而,相关技术中三维建筑物的信息标注是静态的,标注的显示效果较差。
发明内容
本申请实施例提供了一种物体标注信息呈现方法、装置、电子设备及存储介质,可以根据可视立面在指定场景的显示界面上的投影,进行标注立面的选择,提高了标注信息的显示效果,该技术方案如下:
一方面,提供了一种物体标注信息呈现方法,所述方法包括:
获取指定场景中的目标物体,所述指定场景是在目标位置呈现的场景;
在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息;所述标注立面为根据所述目标物体的至少两个可视立面各自在所述显示界面上呈现的投影区域,从所述至少两个可视立面中确定的;所述可视立面是所述目标物体的外立面中,对所述目标位置可见的立面。
在本申请实施例提供的方案中,目标物体的可视立面,是目标物体在目标位置呈现时,目标物体的外立面中可见的立面,也就是目标物体呈现在目标位置对应的指定场景时,没有被完全遮挡的立面;其中,投影区域是指目标物体对应的可视立面在显示界面上呈现的区域。可以在目标位置对应的指定场景中呈现目标物体,并获取目标物体的可视立面投影在显示界面上的投影区域,再根据投影区域将其中一个可视立面确定为标注立面,并在标注立面上呈现目标物体的标注信息,也就是说,可以根据目标位置,在目标物体对应的可视立面中选择一个作为标注立面以呈现目标物体的标注信息,在呈现标注信息时考虑了目标位置与目标物体立面之间的方位关系,提高了标注信息的显示效果。
在一种可能的实现方式中,所述指定场景是在所述目标位置呈现的增强现实场景或虚拟现实场景。
在本申请实施例提供的方案中,在增强现实场景中,目标位置可以是增强现实设备所在的位置,在增强现实场景中呈现的场景则是增强现实设备在当前位置通过增强现实设备对应的图像获取组件获取到的场景;而在虚拟现实场景中,目标位置可以是通过虚拟现实设备后台计算建模出的一个三维虚拟场景中,虚拟现实设备对应的虚拟人物在虚拟场景中的位置,在虚拟现实场景中呈现的场景则是虚拟现实设备以虚拟人物对应的视角与位置,对应呈现出的三维虚拟场景。
在一种可能的实现方式中,所述标注立面为所述至少两个可视立面中,在所述显示界面上呈现的投影区域的面积最大的一个。
通过将目标物体的可视立面中,在显示界面上呈现的投影面积最大的一个作为标注立面以显示标注信息,使得标注立面对应的标注信息呈现在显示界面时,可以以最大的尺寸呈现,提高了标注信息的显示效果。
在一种可能的实现方式中,所述方法还包括:
根据所述至少两个可视立面的可视区域,获取所述至少两个可视立面的待投影区域;所述可视区域是对应的可视立面在所述指定场景中,对所述目标位置可见的区域;
将所述至少两个可视立面的待投影区域向所述显示界面做投影,获得所述至少两个可视立面的可视区域各自在所述显示界面上呈现的投影区域。
在本申请实施例提供的方案中,可视区域是指,目标物体的可视立面上的,呈现于显示界面的投影区域所对应的可视立面上的可视部分的区域,也就是说,可视立面上的可视区域,是虚拟现实场景中呈现出的三维场景中的目标物体的可视立面上的区域,或者是增强现实场景后台计算机计算出的三维场景中的目标物体的可视立面上的区域。其中,可视区域与待投影区域都是可视立面上的区域,待投影区域可以是可视立面的可视区域中的,全部区域或者部分区域,也就是说,可视立面对应的可视区域,可以全部投影至显示界面,也可以部分投影至显示界面。此时,显示界面上的投影区域可以是根据任意形状的待投影区域呈现在投影屏幕上的,此时根据投影区域选择标注立面,可以根据到任意形状的标注信息选择适合的标注立面,增强了标注立面的显示效果。
在一种可能的实现方式中,所述根据所述至少两个可视立面的可视区域,获取所述至少两个可视立面的待投影区域,包括:
将所述至少两个可视立面的可视区域各自的全部区域,获取为所述至少两个可视立面的待投影区域。
在一种可能的实现方式中,所述在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,包括:
从所述标注立面的可视区域中确定标注区域;所述标注区域是所述标注立面的可视区域包含的第一形状的区域中,面积最大的区域;
在所述显示界面呈现的所述标注立面中的所述标注区域上,呈现所述目标物体的所述标注信息。
在本申请实施例提供的方案中,标注区域是标注立面的可视区域包含的第一形状的面积最大的区域,即标注区域是标注立面对应的可视区域中全部或部分区域。
在一种可能的实现方式中,所述从所述标注立面的可视区域中确定标注区域,包括:
获取所述标注立面的遮挡信息,所述遮挡信息用于指示所述标注立面被遮挡的顶点和被遮挡的边;
根据所述遮挡信息,在所述标注立面的可视区域中,确定所述标注区域。
在本申请实施例提供的方案中,遮挡信息是标注立面投影在显示界面上,未呈现部分对应的信息,也就是标注立面中除可见区域之外的区域对应的信息;其中,被遮挡的顶点为标注立面中,位于可视区域之外的顶点;被遮挡的边为标注立面中,全部位置均位于可视区域之外的边。
在一种可能的实现方式中,所述第一形状为矩形,所述根据所述遮挡信息,在所述标注立面的可视区域中确定所述标注区域,包括:
当所述遮挡信息指示所述标注立面上存在一个被遮挡顶点时,将所述被遮挡顶点的对角顶点作为第一目标点;
在所述第一目标点对应的非邻接边上确定第一端点,使得以所述第一端点与所述第一目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;
将以所述第一端点与所述第一目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在本申请实施例提供的方案中,当呈现标注信息的标注区域为矩形时,且根据投影区域确定的标注立面呈现在显示界面上时,存在一个顶点被遮挡,此时可以根据被遮挡顶点的对角顶点的非邻接边上确定标注区域对应矩形的第一端点,并根据第一端点和被遮挡顶点的对角线顶点,确定标注区域;其中,非邻接边是投影区域对应的标注立面的可视区域对应的形状中,与第一端点不直接相连的边。
在一种可能的实现方式中,所述第一形状为矩形,所述根据所述遮挡信息,在所述标注立面的可视区域中确定所述标注区域,包括:
当所述遮挡信息指示所述标注立面上存在两个被遮挡顶点,且所述两个被遮挡顶点之间的边被完全遮挡时,将所述标注立面的未被遮挡顶点中,邻边未被遮挡部分的长度之和最大的顶点,获取为第二目标点;
在所述第二目标点对应的非邻接边上确定第二端点;所述第二端点处于所述标注立面的可见区域,且以所述第二端点与所述第二目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;
将以所述第二端点与所述第二目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述根据所述遮挡信息,在所述标注立面的可视区域中确定所述标注区域,包括:
当所述遮挡信息指示所述标注立面上存在两个被遮挡顶点,且不存在被完全遮挡的边时,获取目标点集;所述目标点集包括所述标注立面的未被遮挡顶点以及所述两个被遮挡顶点的邻边上的分界点;所述分界点用于区分所述可视立面的被遮挡区域和未被遮挡区域;
在所述标注立面的可视区域内确定第三端点;以所述第三端点与第三目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;所述第三目标点是所述目标点集中的一个;
将以所述第三端点与所述第三目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述根据所述遮挡信息,在所述标注立面的可视区域中确定所述标注区域,包括:
当所述遮挡信息指示所述标注立面上存在三个被遮挡顶点时,将所述标注立面的未被遮挡顶点获取为第四目标点;
将所述第四目标点与所述第四目标点的两条邻边上的分界点构成的矩形所在的区域,确定为所述标注区域;所述分界点用于区分所述可视立面的被遮挡区域和未被遮挡区域。
在一种可能的实现方式中,所述在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,包括:
基于所述标注区域的尺寸,生成所述标注信息的三维模型;
在所述显示界面呈现的所述标注区域的平行平面上呈现所述标注信息的三维模型;所述平行平面是位于所述标注立面前方,且平行于所述标注立面的平面。
在本申请实施例提供的方案中,该标注信息可以是以三维模型结构呈现在显示界面上的,并且该标注信息的三维模型的大小可以通过标注区域的尺寸决定。由于三维模型具有深度信息,因此首先需要获取标注立面前方的,与标注立面平行的平面,标注信息位于该平行平面上,根据该平行平面所在的位置以及标注信息的三维模型的大小,在显示界面上呈现该标注信息的三维模型。此时,显示界面上呈现的标注信息同样具有三维特点,提高了标注信息的显示效果。
在一种可能的实现方式中,所述根据所述至少两个可视立面的可视区域,获取所述至少两个可视立面的待投影区域,包括:
从所述至少两个可视立面的可视区域中分别确定候选标注区域;所述候选标注区域是在对应立面的可视区域包含的第二形状的区域中,面积最大的区域;
将所述至少两个可视立面各自对应的候选标注区域,获取为所述至少两个可视立面的待投影区域。
在一种可能的实现方式中,所述在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,包括:
对应所述标注立面对应的候选标注区域,在所述显示界面呈现的所述标注立面上呈现所述目标物体的所述标注信息。
在本申请实施例提供的方案中,首先在目标物体的至少两个可视立面的可视区域中,将包含第二形状的面积最大的区域的可视区域获取为待投影区域,也就是将可视区域中的部分作为待投影区域,再投影至显示界面上;将在显示界面上面积最大的投影区域对应的候选标注区域作为用于呈现标注信息的区域,也就是首先比较每个可视立面的指定形状的最大区域,投影在显示界面上的面积,将面积最大的投影区域对应的可视立面的候选标注区域用于呈现标注信息,考虑每个可视立面投影在显示界面上的指定形状的最大面积,可以在显示界面上呈现最大的指定形状的信息标注,提高了标注信息的显示效果。
在一种可能的实现方式中,在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,包括:
当所述标注立面的可视区域在所述显示界面上呈现的投影区域的面积大于指定面积阈值 时,在显示界面呈现的所述标注立面上呈现所述目标物体的标注信息。
又一方面,提供了一种物体标注信息呈现装置,所述装置包括:
目标物体获取单元,用于获取指定场景中的目标物体,所述指定场景是在目标位置呈现的场景;
标注信息呈现单元,用于在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,所述标注立面为根据所述目标物体的至少两个可视立面各自在所述显示界面上呈现的投影区域,从所述至少两个可视立面中确定的;所述可视立面是所述目标物体的外立面中,对所述目标位置可见的立面。
再一方面,提供了一种电子设备,所述电子设备包含处理器和存储器,所述存储器中存储有计算机指令,所述计算机指令由所述处理器加载并执行以实现上述的物体标注信息呈现方法。
又一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述物体标注信息呈现方法。
再一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述物体标注信息呈现方法。
本申请提供的技术方案可以包括以下有益效果:
通过在虚拟现实场景或增强现实场景中,获取目标物体的外立面中的可视立面,并根据可视立面对在显示界面上的投影,在该目标物体的可视立面中,确定标注立面,并将该目标物体的标注信息呈现在与该标注立面的可视区域对应的区域。通过上述方案,在显示目标物体的标注时,可以根据目标位置,以及可视立面在指定场景的显示界面上的投影,进行标注立面的选择,从而能够在指定场景中,动态的选择可视区域较大的立面来显示标注信息,从而提高了标注信息的显示效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
图1是根据一示例性实施例示出的一种物体标注信息呈现系统的结构示意图;
图2是根据一示例性实施例示出的一种物体标注信息呈现方法的流程示意图;
图3是根据一示例性实施例提供的一种物体标注信息呈现方法的方法流程图;
图4示出了图3所示实施例涉及的一种立面顶点遮挡分类图;
图5示出了图3所示实施例涉及的一种获取单个顶点遮挡对应的立面可视区域的示意图;
图6示出了图3所示实施例涉及的一种标注范围的计算方法示意图;
图7示出了图3所示实施例涉及的一种标注范围的计算方法示意图;
图8示出了图3所示实施例涉及的一种标注范围的计算方法示意图;
图9示出了图3所示实施例涉及的一种数据资源流程图;
图10示出了图3所示实施例涉及的一种标注呈现方法结构图;
图11示出了图3所示实施例涉及的一种建筑物的可视立面计算流程图;
图12示出了图3所示实施例涉及的一种可视区域计算对应的方法流程图;
图13示出了图3所示实施例涉及的一种文本标注范围对应的计算流程图;
图14示出了图3所示实施例涉及的一种相机实时位姿模拟流程图;
图15示出了图3所示实施例与一种AR地图技术的对比示意图;
图16是根据一示例性实施例提供的一种物体标注信息呈现方法的方法流程图;
图17其示出了一种物体标注信息的呈现方法的流程示意图;
图18是根据一示例性实施例示出的一种物体标注信息呈现装置的结构方框图;
图19是示出了一示例性实施例提供的电子设备的示意性框图;
图20是示出了一示例性实施例提供的电子设备的结构示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
在对本申请所示的各个实施例进行说明之前,首先对本申请涉及到的几个概念进行介绍:
1)计算机视觉(CV,Computer Vision)
计算机视觉是一门研究如何使机器“看”的科学,就是指用摄影机和电脑代替人眼对目标进行识别、跟踪和测量等机器视觉,并进一步做图形处理,使电脑处理成为更适合人眼观察或传送给仪器检测的图像。视觉是各个应用领域,如制造业、检验、文档分析、医疗诊断,和军事等领域中各种智能/自主系统中不可分割的一部分。计算机视觉的挑战是要为计算机和机器人开发具有与人类水平相当的视觉能力。机器视觉需要图像信号,纹理和颜色建模,几何处理和推理,以及物体建模。
2)增强现实(AR,Augmented Reality)
增强现实技术是一种将虚拟信息与真实世界巧妙融合的技术,广泛运用了多媒体、三维建模、实时跟踪及注册、智能交互、传感等多种技术手段,将计算机生成的文字、图像、三维模型、音乐、视频等虚拟信息模拟仿真后,应用到真实世界中,两种信息互为补充,从而实现对真实世界的“增强”。增强现实技术也被称为扩增现实,是促使真实世界信息和虚拟世界信息内容之间综合在一起的较新的技术内容,其将原本在现实世界的空间范围中比较难以 进行体验的实体信息在电脑等科学技术的基础上,实施模拟仿真处理,叠加将虚拟信息内容在真实世界中加以有效应用,并且在这一过程中能够被人类感官所感知,从而实现超越现实的感官体验。
3)虚拟现实(Virtual Reality,VR)
虚拟现实技术,又称灵境技术,是20世纪发展起来的一项全新的实用技术。虚拟现实技术囊括计算机、电子信息、仿真技术于一体,其基本实现方式是计算机模拟虚拟环境从而给人以环境沉浸感。拟现实技术(VR)是一种可以创建和体验虚拟世界的计算机仿真系统,它利用计算机生成一种模拟环境,使用户沉浸到该环境中。虚拟现实技术就是利用现实生活中的数据,通过计算机技术产生的电子信号,将其与各种输出设备结合使其转化为能够让人们感受到的现象,这些现象可以是现实中真真切切的物体,也可以是肉眼所看不到的物质,通过三维模型表现出来。
图1是根据一示例性实施例示出的一种物体标注信息呈现系统的结构示意图。该系统包括:服务器120以及用户终端140。
服务器120是一台服务器,或者包括若干台服务器,或者是一个虚拟化平台,或者是一个云计算服务中心等,本申请不做限制。
用户终端140可以是具有显示功能的终端设备,也可以是具有实现VR或AR功能的终端设备,比如,用户终端可以是可穿戴设备(例如VR眼镜、AR眼镜、智能眼镜)、手机、平板电脑或电子书阅读器等等。用户终端140的数量不做限定。
其中,用户终端140中可以安装有客户端,该客户端可以是三维地图客户端、即时通信客户端、浏览器客户端等。本申请实施例不限定客户端的软件类型。
用户终端140与服务器120之间通过通信网络相连。可选的,通信网络是有线网络或无线网络。
在本申请实施例中,服务器120可以将目标物体的三维建模数据发送给用户终端140,由用户终端140根据该三维建模数据进行在VR场景中进行目标物体的三维建模,或在AR场景对应的计算机后台中进行目标物体的三维建模。
可选的,上述的无线网络或有线网络使用标准通信技术和/或协议。网络通常为因特网、但也可以是任何网络,包括但不限于局域网(Local Area Network,LAN)、城域网(Metropolitan Area Network,MAN)、广域网(Wide Area Network,WAN)、移动、有线或者无线网络、专用网络或者虚拟专用网络的任何组合)。在一些实施例中,使用包括超文本标记语言(Hyper Text Mark-up Language,HTML)、可扩展标记语言(Extensible Markup Language,XML)等的技术和/或格式来代表通过网络交换的数据。此外还可以使用诸如安全套接字层(Secure Socket Layer,SSL)、传输层安全(Transport Layer Security,TLS)、虚拟专用网络(Virtual Private Network,VPN)、网际协议安全(Internet Protocol Security,IPsec)等常规加密技术来加密所有或者一些链路。在另一些实施例中,还可以使用定制和/或专用数据通信技术取代或者补充上述数据通信技术。
请参考图2,其是根据一示例性实施例示出的一种物体标注信息呈现方法的流程示意图。该方法可以由电子设备执行,其中,该电子设备可以是终端,也可以是服务器,或者,该电 子设备可以包括终端和服务器,其中,该终端可以是上述图1所示的实施例中的用户终端140,服务器可以是上述图1所示实施例中的服务器120。如图2所示,该物体标注信息呈现方法的流程可以包括如下步骤:
步骤21,获取指定场景中的目标物体,该指定场景是在目标位置呈现的场景。
在一种可能的实现方式中,该指定场景是在目标位置呈现的增强现实场景或虚拟现实场景。
在一种可能的实现方式中,该增强现实场景或虚拟现实场景,可以是在用户终端对应的显示设备上呈现的场景。
步骤22,在显示界面呈现的该目标物体的标注立面上呈现该目标物体的标注信息;该标注立面为根据该目标物体的至少两个可视立面各自在该显示界面上呈现的投影区域,从该至少两个可视立面中确定的,该可视立面是该目标物体的外立面中,对该目标位置可见的立面。
在一种示例性的方案中,上述在目标位置呈现的场景,可以是指对应上述目标位置呈现的场景。也就是说,在显示界面上呈现的场景画面,是以上述目标位置为视点,观察上述指定场景所得到的画面。
其中,该标注立面是目标物体对应的可视立面中,用于显示该目标物体的标注信息的立面。该标注信息可以是文字标注信息,也可以是图片标注信息或者视频标注信息,以及可以用于显示信息的任意形式的标注信息,该标注信息可以是二维标注信息,也可以是三维标注信息,本申请对此不作限定。
综上所述,在本申请实施例所示方案中,通过在虚拟现实场景或增强现实场景中,获取目标物体的外立面中的可视立面,并根据可视立面对在显示界面上的投影,在该目标物体的可视立面中,确定标注立面,并将该目标物体的标注信息呈现在与该标注立面的可视区域对应的区域。通过上述方案,在显示目标物体的标注时,可以根据目标位置,以及可视立面在指定场景的显示界面上的投影,进行标注立面的选择,从而能够在虚拟现实/增强现实场景中,动态的选择可视区域较大的立面来显示标注信息,从而提高了标注信息的显示效果。
请参考图3,其是根据一示例性实施例提供的一种物体标注信息呈现方法的方法流程图。该方法可以由电子设备执行,其中,该电子设备可以是终端,也可以是服务器,或者,该电子设备可以包括终端和服务器,其中,该终端可以是上述图1所示的实施例中的用户终端140,服务器可以是上述图1所示实施例中的服务器120。如图3所示,以目标物体为建筑物对应的VR或AR地图应用场景为例,该物体标注信息呈现方法的流程可以包括如下步骤:
步骤301,获取指定场景中的目标物体。
其中,指定场景是在目标位置呈现的场景。
在一种可能的实现方式中,指定场景是在该目标位置呈现的增强现实场景或虚拟现实场景。
在一种可能的实现方式中,当该指定场景为AR场景时,该目标位置可以是该AR设备所在的位置;其中,该位置可以是绝对位置,例如当AR设备用于AR地图中,AR设备可以获取此时设备的定位信息,以获取该AR设备对应的绝对位置;该位置还可以是相对位置,例如当AR设备用于室内时,可以获取该AR设备与室内环境中某一点的相对位置信息,以获取该AR设备在该室内与该点的相对位置。
在一种可能的实现方式中,当该指定场景为VR场景时,该目标位置还可以是VR设备构建出的虚拟三维场景中,VR设备对应的虚拟角色或是虚拟摄像头对应的位置,此时虚拟三维场景中每一个点都可以有自己的坐标,根据该坐标数据,可以获取该目标位置对应的信息。
在一种可能的实现方式中,目标物体的可视立面,是目标物体呈现在目标位置上时,目标物体的外立面中可见的立面。
其中,该可视区域是指,目标物体的可视立面上的,呈现于显示界面的投影区域所对应的可视立面上的可视部分的区域。
在一种可能的实现方式中,当该指定场景为VR或者AR场景时,根据该目标物体对应的三维模型数据,构建该物体对应的三维模型。
在一种可能的实现方式中,当该指定场景为VR场景时,根据该目标物体对应的三维模型数据,在VR场景中构建该物体对应的三维模型;当该指定场景为AR场景时,根据该目标物体对应的三维模型数据,在AR场景对应的计算机后台中,构建该物体对应的三维模型。
其中,该目标物体对应的三维数据可以包括该目标物体的尺寸信息、坐标信息等。
在一种可能的实现方式中,获取指定场景中的目标物体的至少两个可视立面的可视区域。
在一种可能的实现方式中,根据该目标物体的三维模型,以及该目标物体的三维模型对应的三维数据,获取该目标物体对应的最小包围盒,该最小包围盒是该目标物体最小的外接长方体,再将该目标物体对应的最小包围盒作为该目标物体的近似模型,此时当目标物体的外部形状较为复杂时,通过该近似模型实现目标物体的相应计算,可以有效的减小计算量。
在一种可能的实现方式中,将该目标物体的最小包围盒对应的外立面获取为该目标物体对应的外立面。
在一种可能的实现方式中,获取该指定场景中该目标物体的各个外立面的法向量,将该目标物体的各个外立面中,法向量与该指定场景的显示界面对应的向量的内积为正值的外立面,获取为该可视立面。
在一种可能的实现方式中,该指定场景的显示界面对应的向量可以是该指定场景的显示界面对应的法向向量。
在一种可能的实现方式中,该指定场景的显示界面可以是在指定场景中,通过虚拟摄像头模拟出的用户的观察界面,也就是说,该指定场景的显示界面,可以是用户使用终端进行观察时对应的场景。以指定场景为AR地图为例,用户终端获取实时位置信息,并根据AR地图客户端,获取此时目标位置对应的周围环境的三维建筑物模型信息,此时指定场景的显示界面,即为用户终端上显示对应方位的建筑物的界面;该指定场景的显示界面对应的向量,即为用户终端朝向方位的向量。
在指定场景的显示界面中,终端可以根据当前终端的朝向,显示该方向对应的建筑物模型,并根据该建筑物模型对应的各个外立面的法向量,以及指定场景的显示界面对应的方向的方向向量的内积,判定该外立面与用户方位的关系,以此确定该外立面在用户视角中是否处于可视状态。
在一种可能的实现方式中,可以根据该目标位置与该可视立面上各点的连线,确定该可视立面的可视区域。
例如,可以在根据目标位置周围的三维模型数据构建的三维模型中,目标物体对应的三维模型,获取该目标物体对应的三维模型中的可视立面上的若干个点,将该若干个点分别于 该目标位置相连,当其中某一点与该目标位置的连线中没有其他建筑物对应的三维模型,则将该点对应的位置确定为可视区域;当其中某一点与该目标位置的连线中有其他建筑物对应的三维模型,则说明该点与目标位置对应的连线上,被其他建筑物所遮挡,因此,可以将该点确定为该目标位置对应的不可视区域,根据若干个可视区域点以及不可视区域点,可以获取该可视立面的可视区域。
在一种可能的实现方式中,可以根据该目标位置与该可视立面上的分界点点,确定该可视立面的可视区域。
其中,该分界点是该可视立面上的边上,用于划分可视区域与非可视区域的点。
步骤302,根据该至少两个可视立面的可视区域,获取该至少两个可视立面的待投影区域。
其中,该可视区域是对应的可视立面在该指定场景中,对该目标位置可见的区域。
在一种可能的实现方式中,将该至少两个可视立面的可视区域各自的全部区域,获取为该至少两个可视立面的待投影区域。
步骤303,将该至少两个可视立面的待投影区域向该指定场景的显示界面做投影,获取该至少两个可视立面的可视区域各自在该指定场景的显示界面上呈现的投影区域。
在一种可能的实现方式中,获取该指定场景的显示界面的法向量,根据该可视立面的法向量与该显示界面的法向量的夹角,获取该至少两个可视立面的可视区域各自在该指定场景的显示界面上的投影区域。
即将该至少两个可视立面对应的可视区域,向指定区域的显示界面对应的方向做投影,也就是说,当该可视立面与该指定区域的显示界面不是正对时,即该可视立面与该指定区域的显示界面具有一定的夹角时,可以根据该可视立面的法向量与该显示界面法向量对应的夹角值,将该可视立面对应的可视区域,投影至用户直接观察到的指定区域的显示界面对应的方向上。
步骤304,根据该至少两个可视立面的可视区域各自在该显示界面上呈现的投影区域,从该至少两个可视立面中确定标注立面。
在一种可能的实现方式中,该标注立面为该至少两个可视立面中,在该显示界面上呈现的投影区域的面积最大的一个。
当可视立面对应的可视区域在该指定场景的显示界面上呈现的投影区域最大时,也就是在用户视角上看到的该立面的可视面积是最大的,因此可以将该立面设置为标注立面。
在一种可能的实现方式中,获取该至少两个可视立面的可视区域面积;根据该至少两个可视立面的可视区域面积,以及该至少两个可视立面与该指定场景的显示界面的方位关系,获取该至少两个可视立面的可视区域各自在该指定场景的显示界面上的投影区域的面积。
在一种可能的实现方式中,获取该至少两个可视立面的遮挡信息,该遮挡信息用于指示该至少两个可视立面被遮挡的顶点和被遮挡的边;根据该遮挡信息,获取该至少两个可视立面各自对应的可视区域面积。
在获取该可视立面的可视区域面积时,可以根据该可视立面的被遮挡顶点的数量以及被遮挡的边的数量,获取该可视立面近似的可视区域,根据该近似的可视区域,获取该可视立面的可视区域面积。
在一种可能的实现方式中,当该可视立面的所有顶点都未被遮挡,将该可视立面的面积 获取为该可视立面的可视区域面积。
其中,当该可视立面的所有顶点未被遮挡时,可以认为在目标位置的显示界面上呈现该可视立面时,没有其他物体遮挡该立面,因此该可视立面所有区域都为可视区域,因此将该可视立面的面积获取为该可视立面的可视区域面积。
在一种可能的实现方式中,当该可视立面存在一个被遮挡顶点时,获取该被遮挡顶点的邻边上的分界点;该分界点用于区分该可视立面的被遮挡区域和未被遮挡区域;根据该分界点,以及该可视立面的未被遮挡顶点,获取该可视立面的可视区域面积。
在一种可能的实现方式中,当该可视立面存在两个被遮挡顶点时,且存在一条被遮挡边时,获取该被遮挡边的邻边上的分界点;根据该分界点,以及该可视立面的未被遮挡顶点,获取该可视立面的可视区域面积。
在一种可能的实现方式中,当该可视立面存在两个被遮挡顶点,且不存在被遮挡边时,分别获取该两个被遮挡顶点的邻边上对应的分界点;根据该分界点,以及该可视立面的未被遮挡顶点,获取该可视立面的可视区域面积。
在一种可能的实现方式中,当该可视立面存在三个被遮挡顶点时,获取该可视立面的未被遮挡顶点的邻边上的分界点;根据该分界点以及该未被遮挡顶点,获取该可视立面的可视区域面积。
请参考图4,其示出了本申请实施例涉及的一种立面顶点遮挡分类图。如图4所示,立面顶点被遮挡至少有如图所示的七种可视立面情况。
对于没有顶点被遮挡时的可视立面情况401,其没有顶点被其他物体遮挡,可以近似认为该立面图401对于的立面没有被遮挡,因此将该立面的面积获取为该立面的可视区域面积。
而对于有一个顶点被遮挡的可视立面情况402,如图5所示,其示出了本申请实施例涉及的一种获取单个顶点遮挡对应的立面可视区域的示意图。如图5所示,电子设备根据该物体立面的遮挡情况501,获取该物体其中某一立面对应的立面可视区域502。由立面可视区域502可知,立面中的c点被遮挡后,a,b,d点是可见的,因此根据被遮挡点c对应的邻边上的遮挡情况,可以通过二分法递归获取该邻边上的分界点c1和c2,将该分界点c1和c2作为可见端点,与未被遮挡的可见顶点a,b,d构成新的图形,并将该图形作为该立面对应的可视区域,将该图形对应的面积作为该立面的可视区域面积。
对于有两个顶点被遮挡的立面,且两个被遮挡顶点共享一条被遮挡的邻边时,如可视立面情况403所示,因此该两个被遮挡顶点的邻边上,具有与该两个被遮挡顶点对应的分界点,同图5所示方法,通过二分法递归获得与该两个被遮挡顶点对应的分界点,作为可见端点。将该可见端点与该两个未被遮挡的可见顶点构成新的图形,并将该图形作为该立面对应的可视区域,将该图形对应的面积作为该立面的可视区域面积。需要注意的是,在可视立面情况403中,当该两个分界点与该对应的被遮挡顶点的距离相同时,该可视区域可以呈现矩形。
如可视立面情况404所示,对于有两个顶点被遮挡的立面,且两个被遮挡顶点为非对角顶点时,即两个被遮挡顶点共享一条邻边,且该邻边未被完全遮挡时,每个被遮挡顶点对应的邻边上都分别具有与该被遮挡顶点对应的分界点。同理,获取该两个顶点分别对应的分界点作为可见端点。根据未被遮挡的两个顶点,以及该两个未被遮挡顶点对应的四个可见端点,构成新的图形,并将该图形作为该立面对应的可视区域,将该图形对应的面积作为该立面的可视区域面积。
如可视立面情况405所示,对于有两个顶点被遮挡的立面,且两个被遮挡顶点为对角顶点时,与可视立面情况404相似,其每个被遮挡顶点对应的邻边上都分别具有与该被遮挡顶点对应的分界点。同理,获取该两个顶点分别对应的分界点作为可见端点。根据未被遮挡的两个顶点,以及该两个未被遮挡顶点对应的四个可见端点,构成新的图形,并将该图形作为该立面对应的可视区域,将该图形对应的面积作为该立面的可视区域面积。
如可视立面情况406所示,对于有三个顶点被遮挡的立面,即只有一个顶点是可视顶点,此时在该可视顶点对应的邻边上,通过二分法获取该可视顶点对应的分界点,并根据该分界点与该可视顶点,形成新的图形,并将该图形作为该立面对应的可视区域,将该图形对应的面积作为该立面的可视区域面积。
如可视立面情况407所示,对于有四个顶点被遮挡的立面,此时可以认为该立面被完全遮挡,因此认为该立面没有对应的可视区域,即该立面的可视区域面积为0。
步骤305,从该标注立面的可视区域中确定标注区域;该标注区域是该标注立面的可视区域包含的第一形状的区域中,面积最大的区域。
在一种可能的实现方式中,获取该标注立面的遮挡信息,该遮挡信息用于指示该标注立面被遮挡的顶点和被遮挡的边;根据该遮挡信息,在该标注立面的可视区域中,确定该标注区域。
在一种可能的实现方式中,第一形状为矩形,当该遮挡信息指示该标注立面上存在一个被遮挡顶点时,将该被遮挡顶点的对角顶点作为第一目标点;在该第一目标点对应的非邻接边上确定第一端点,使得以该第一端点与该第一目标点之间的线段为对角线的矩形,是该标注立面的可视区域内面积最大的矩形;将以该第一端点与该第一目标点之间的线段为对角线的矩形所在的区域,确定为该标注区域。
请参考图6,其示出了本申请实施例涉及的一种标注范围的计算方法示意图。如图6所示,当只有一个被遮挡顶点时,获取该被遮挡顶点的对角顶点a(第一目标点),在a的非邻接线段(非邻接边)上找到另一个点(第一端点),让组成的长方形面积最大。其中,非邻接线段是构成该立面的可视区域中,不与a直接连接的线段。
在一种可能的实现方式中,第一形状为矩形,当该遮挡信息指示该标注立面上存在两个被遮挡顶点,且该两个被遮挡顶点之间的边被完全遮挡时,将该标注立面的未被遮挡顶点中,邻边未被遮挡部分的长度之和最大的顶点,获取为第二目标点;在该第二目标点对应的非邻接边上确定第二端点;该第二端点处于该标注立面的可见区域,且以该第二端点与该第二目标点之间的线段为对角线的矩形,是该标注立面的可视区域内面积最大的矩形;将以该第二端点与该第二目标点之间的线段为对角线的矩形所在的区域,确定为该标注区域。
在一种可能的实现方式中,第一形状为矩形,当该遮挡信息指示该标注立面上存在两个被遮挡顶点,且不存在被完全遮挡的边时,获取目标点集;该目标点集包括该标注立面的未被遮挡顶点以及该两个被遮挡顶点的邻边上的分界点;该分界点用于区分该可视立面的被遮挡区域和未被遮挡区域;在该标注立面的可视区域内确定第三端点;以该第三端点与第三目标点之间的线段为对角线的矩形,是该标注立面的可视区域内面积最大的矩形;该第三目标点是该目标点集中的一个;将以该第三端点与该第三目标点之间的线段为对角线的矩形所在的区域,确定为该标注区域。
请参考图7,其示出了本申请实施例涉及的一种标注范围的计算方法示意图。如图7所 示,当可视立面中有两个被遮挡顶点时,可以分为三种情况进行分析,第一种情况701,当被遮挡的区域超过了边线时,即两个被遮挡顶点之间的边被完全遮挡时,可视区域为四边形,其中a点与b点是标注立面对应的两个未被遮挡的顶点,从701可知,a对应的两条邻边的未被遮挡部分的长度之和,大于b对应的两条邻边的未被遮挡部分的长度之和,因此可以将a作为第二目标顶点(基点),在a的非邻接线段上找到与a组成最大长方形的第二端点,即701中的点c1,此时,以a和c1为对角点组成的长方形是701对应的可视区域中面积最大的长方形。
第二种情况702,当被遮挡的区域未超过边线,且可视立面的两个未被遮挡顶点是对角点时,其被遮挡的区域可能形成如702所示的八边形;第三种情况703,当被遮挡的区域未超过边线,且该可视立面的两个为被遮挡的顶点共享一条邻边时,其被遮挡的区域可能形成如703所示的八边形。
对于702与703所示的标注立面,获取该标注立面的目标顶点集合,即获取该标注立面的未被遮挡顶点以及该两个被遮挡顶点的邻边上对应的分界点,以目标顶点集合中的一点为基点,获取与该点对应的,该标注立面的可视区域中面积最大的矩形。
在一种可能的实现方式中,第一形状为矩形当该遮挡信息指示该标注立面上存在三个被遮挡顶点时,将该标注立面的未被遮挡顶点获取为第四目标点;将该第四目标点与该第四目标点的两条邻边上的分界点构成的矩形所在的区域,确定为该标注区域;该分界点用于区分该可视立面的被遮挡区域和未被遮挡区域。
请参考图8,其示出了本申请实施例涉及的一种标注范围的计算方法示意图。如图8所示,当三个顶点被遮挡时,将该未被遮挡顶点(即a点)作为基点,并根据a点在邻边上对应的分界点c1与c2,组成面积最大的矩形,并将该矩形对应的区域获取为标注区域。
在一种可能的实现方式中,第一形状为矩形当该遮挡信息指示该标注立面上存在三个被遮挡顶点时,将该标注立面的未被遮挡顶点获取为第四目标点;在该标注立面的可视区域内确定第四端点;以该第四端点与该第四目标点之间的线段为对角线的矩形,是该标注立面的可视区域内面积最大的矩形;将以该第四端点与该第四目标点之间的线段为对角线的矩形所在的区域,确定为该标注区域。
当三个顶点被遮挡时,也可以直接根据标注立面的未被遮挡顶点,获取标注区域的可视区域内的第四端点,将该第四端点与该未被遮挡顶点组成面积最大的矩形对应的区域,获取为标注区域。当被遮挡区域是矩形时,该第四端点与该未被遮挡顶点组成的矩形与图8对应的最大矩形相同。
在一种可能的实现方式中,该标注区域是该可视立面中指定形状的区域。
在图6至图8的示例中,该标注区域是可视立面中的矩形区域,在本申请实施例中,该标注区域还可以是圆形,三角形等指定形状的区域,本申请对此不作限制。
步骤306,在显示界面呈现的该目标物体的标注立面上呈现该目标物体的标注信息。
在一种可能的实现方式中,可以根据深度信息,在显示界面呈现的该目标物体的标注立面上呈现该目标物体的标注信息。
即在AR或VR场景中,目标物体对应的模型是基于三维数据构建的模型,因此该标注信息在标注立面的标注区域中,可能具有深度属性,即在该目标物体对应的模型的标注立面对应的标注区域中,呈现的标注信息,可以是一个具有三个维度属性的标注信息。
在一种可能的实现方式中,基于该标注区域的尺寸,生成该标注信息的三维模型;在显示界面呈现的该标注区域的平行平面上呈现该标注信息的三维模型;该平行平面是位于该标注立面前方,且平行于该标注立面的平面。
在一种可能的实现方式中,基于该标注区域的尺寸,生成该标注信息的尺寸;基于该标注信息的尺寸,在该标注立面的平行平面内,对于该标注区域展示该标注信息的三维模型,并呈现于目标位置对应显示界面上。即标注信息的尺寸与该标注区域的尺寸是相关的,当该标注区域的尺寸越大,该标注信息的尺寸也越大,该标注信息的三维模型呈现于显示界面上的尺寸也越大。
在一种可能的实现方式中,根据该标注立面的标注区域的水平方向长度和竖直方向长度,确定该标注信息的显示方向;根据该标注信息的显示方向,在显示界面呈现的该目标物体的标注立面上呈现该目标物体的标注信息。
在一种可能的实现方式中,当该标注信息是文本标注信息时,可以根据该标注立面的标注区域的水平方向长度和竖直方向长度,确定该标注信息的呈现方向。例如,当该标注区域的水平方向长度大于竖直方向长度时,该标注信息可以是以该标注区域的水平方向并呈现在显示界面所呈现的标注区域上;当该标注区域的竖直方向长度大于该标注区域的水平方向长度时,该标注信息可以是以标注区域的竖直方向呈现在显示界面所呈现的标注区域上。
在一种可能的实现方式中,获取该目标物体与该指定场景的显示界面对应的距离信息;当该距离信息小于阈值时,在显示界面呈现的对应该标注立面的标注区域上呈现该目标物体对应的标注信息。
当该目标物体与该指定场景的显示界面对应的距离太远时,从目标位置观察该目标物体时,观察到的视线范围较小,标注信息难以识别,此时不在显示界面呈现的目标物体对应的标注立面上呈现标注信息。
在一种可能的实现方式中,当该标注立面的可视区域在该显示界面上的投影区域的面积大于指定面积阈值时,对应该标注立面的可视区域在显示界面上呈现该目标物体的标注信息。
当该标注立面的标注区域面积太小时,此时在该显示界面中的标注区域上呈现的标注面积也可能会难以识别,此时不在显示界面呈现的目标物体对应的标注立面上呈现该标注信息。
请参考图9,其示出了本申请实施例涉及的一种数据资源流程图。如图9所示,本申请实施例所示方案可以通过部署AR SDK(Software Development Kit,软件开发工具包)平台软件和三维建筑物数据模型的终端进行实现。运行时,程序代码运行于终端设备的主机内存和/或GPU(Graphics Processing Unit,图形处理器)内存,从服务器上加载当前位置周围的建筑物模型。在终端提供的底层支持的基础上,终端可以执行文本标注的计算过程和渲染显示,包括计算建筑物的可视区域,投影可视区域的面积和文本标注的显示范围计算,以及文本标注的显示和冲突处理。
请参考图10,其示出了本申请实施例涉及的一种标注呈现方法结构图。如图10所示,以文本标注显示为例,该方法结构由底层支持1001、标注文本计算1002、渲染显示1003构成,且该方法由电子设备执行。本申请所示方法可以在三维渲染引擎unity软件上进行开发的,其中unity软件提供封装后的3D渲染管线,提供碰撞检测和三维渲染等功能,并根据三维建筑物对应的数据加载城市建筑物的三维模型,以此提供本方法的底层支持。
电子设备根据最小包围盒,确定该建筑物的可视立面;再根据该建筑物的最小包围盒, 判断该建筑物可视立面对应的可视顶点以及立面的可视区域范围;通过该可视区域在用户对应的方向的面积投影,选取其中面积投影最大的作为显示立面。
根据该确定的显示立面,计算在该显示立面上,最大的文本标注显示范围,在该最大的文本标建筑物的三维建筑物模型对应的文本标注显示范围上进行文本标注的显示。并且当文本标注的内容在立面上渲染显示完成后,在投影显示在屏幕上时可能会出现遮挡的情况。比如距离较近的文本标注遮挡了后方的文本标注,此时形成了冲突,因此需要对该文本标注的冲突显示进行处理。
并且,在上述方法执行的过程中,相机开始实时位姿模拟,通过电子设备模拟用户的实时位置与角度,并以此来验证该文本标注显示的正确性。
请参考图11,其示出了本申请实施例涉及的一种建筑物的可视立面计算流程图。如图11所示:
S1101,电子设备确定需要标注的建筑物。
S1102,根据该标注建筑物对应的三维模型,获取该建筑物构建三维模型对应的三角网结构,得到该建筑物对应的各顶点坐标,根据该建筑物对应的各顶点坐标,计算该建筑物对应的最小包围盒结构(即长方体结构),并将该最小包围盒获取为该建筑物对应的目标模型。
S1103,获取该建筑物中心到相机的向量Vcam,并计算该目标模型各立面的法线向量Vnor,计算Vnor在Vcam向量上的投影长度值L。
S1104,当该L大于0时,则代表Vnor向量在Vcam的投影向量与该Vcam向量的方向一致,此时,因此Vnor向量对应的立面在Vcam方向是可见的立面;当L小于0时,则代表Vnor向量在Vcam的投影向量与该Vcam向量的方向相反,此时Vnor向量在Vcam方向上是不可见的立面。S1105,将可见的立面添加到可见立面的集合中。
请参考图12,其示出了本申请实施例涉及的一种可视区域计算对应的方法流程图。
S1201,电子设备确定需要标注的建筑物。
S1202,根据该标注建筑物对应的三维模型,获取该建筑物构建三维模型对应的三角网结构,得到该建筑物对应的各顶点坐标,根据该建筑物对应的各顶点坐标,计算该建筑物对应的最小包围盒结构(即长方体结构),并将该最小包围盒获取为该建筑物对应的目标模型。
S1203,遍历该集合中的可见立面,根据该相机与该建筑物对应的立面顶点的位置关系,判断该相机与该建筑物的可视立面的立面顶点之间的连线是否被建筑物遮挡,判断立面顶点在相机视野中是否可见。
S1204,当顶点为可见顶点时,将该顶点直接输入可见顶点集合。当顶点为不可见顶点时,通过二分法,判断顶点相邻边上的遮挡区域与未被遮挡区域的分界点。
S1205,将该分界点作为可见顶点输入可见顶点集合。
S1206,当识别完该建筑物对应的所有可见顶点后,根据该可见顶点,组成该建筑物模型对应的可见区域,并根据该顶点被遮挡的情况,计算可见区域面积。
S1207,根据该可视区域在相机视线上的投影,得到可见区域在屏幕上的范围大小,也就用户实际上可以在屏幕上观察到的范围大小。
请参考图13,其示出了本申请实施例涉及的一种文本标注范围对应的计算流程图。对于上述得到可见区域面积在屏幕上观察到的范围大小(即立面可视区域投影面积)。
S1301,比较其中可视区域面积,将可视区域面积最大的立面作为标注立面。
S1302,根据顶点被遮挡的情况,计算可视区域中面积最大的长方形,并将该长方形对应的区域作为标注范围,即可以在该标注范围上,显示该建筑物对应的标注信息。
S1303,在该标注范围上,显示该建筑物对应的标注信息之前,需要对是否显示该标注信息判定,即综合标注范围面积、建筑物与相机距离进行评价、确定是否显示该标注信息。例如,当标注范围面积较小时,根据该标注范围显示的标注信息相对较小,此时显示出的标注信息用户可能难以看清,因此可以不显示该标注范围;当建筑物与相机距离较远时,此时建筑物在终端上的显示面积同样可能较小,用户可能同样无法看清显示的标注信息,因此可以不显示该标注范围。
请参考图14,其示出了本申请实施例涉及的一种相机实时位姿模拟流程图。如图14所示,在上述标注的显示过程中,可以实时运行如图14所示步骤,通过向终端输入相应的指令,控制相机的移动,视角旋转等响应事件,模拟人的场景浏览,以验证本申请实施例所示的建筑物的信息标注方法。以终端实现为个人计算机为例。
S1401,用户可以通过键盘、鼠标等外接设备输入对应的指令,控制模拟出的三维场景中的相机对象。
S1402,电子设备实时根据键盘、鼠标等用户输入,让相机进行平移、视角旋转动作事件的响应。
S1403,在运行时,控制外部输入让相机模拟人的视角进行城市视角的观看,实时响应建筑物的标注信息,并根据该建筑物标注信息的显示情况,判断本申请实施例对应的信息标注方法是否可以正常显示。
本申请除了可用于建筑物的文本标注,还可以应用图片、视频等多种形式进行信息标注。例如在AR场景中在建筑物立面上投放视频广告进行播放,放置新品汽车模型进行展览,在商场建筑物上介绍店铺信息,在博物馆上介绍馆藏文物历史等多种应用场景。在AR场景中识别到建筑物的可视平面,就可以在场景中展示多媒体信息,而且用户在不同的角度下都能接收到AR场景的信息,丰富AR的体验效果。
本申请实施例还提出计算三维场景中建筑物模型的可视区域,和可视区域在人眼视线方向下的投影面积,确定建筑物标注显示的立面方向。
首先计算三维场景中建筑物模型的可视区域。将建筑立面顶点被遮挡情况进行分类计算,计算可视区域范围内的面积,将可视区域的面积在人眼视线方向做面积投影,得到立面中可视区域在屏幕显示上的范围大小;然后计算得到建筑物立面中可视区域,将文本标注显示在可视区域中。此时的文本标注的位置在建筑物的可视区域中,并且文本标注的方向是建筑物的立面方向。文本标注是三维的,而且不会被当前帧的建筑物所遮挡。三维的文本标注在屏幕显示上会有远近变化,离用户越近的文字在屏幕上显示的尺寸更大,并且文本方向是与建筑物方向一致能更好的表达文本标注与被标注建筑物之间的所属关系。
并且,通过计算在可视区域中文本标注的显示范围,标注文本根据不同视角进行自适应显示。本申请实施例所示方案会更加可视区域的范围进行动态计算,计算其中面积最大的长方形。可视区域是二维平面,形状会随着用户视角变化而变化,需要根据立面顶点的可见性和顶点关系确定长方形的基点,判断长方形对称点的位置来确定面积最大的长方形;当用户在使用AR应用进行浏览时,场景中建筑物内容会发生变化,建筑物文本标注会随着场景变 化而变化,将标注以更佳的视角呈现给用户。
请参考图15,其示出了本申请实施例与一种AR地图技术的对比示意图。如图15所示,在本申请实施例所示方案1501中在不同视角下建筑物标注会发生变化,建筑物标注在场景的可见范围中,方向是在建筑物立面方向上,用户能简单的将标注与被标注的建筑物进行关联,而且提供给用户远近的变化,标注能提供一定程度的位置和方向指引,给用户带来更好的AR体验。但是在1502所示的AR地图技术中,标注与被标注的建筑物关联性不强。
在AR场景中进行标注有如下特点:1:用户在浏览时当前帧的建筑物标注信息不能过多,按照与用户的距离和可视区域的大小进行优先级排序,距离用户越近和可视区域越大,优先级越高。2:当可视区域竖轴长度高于横轴长度时,可以将建筑物信息竖向排列显示。3:当文本标注范围太小,不能有效显示建筑物信息时,可以忽略不计。
AR地图技术1502在AR场景中采用的信息标注方法是通过识别屏幕中建筑物的范围,将文本标注平铺在建筑物的屏幕范围,信息标注仍然是二维平面,没有利用AR场景中的深度信息,而且平铺后的信息标注与所属的建筑物关联性不强,会对场景的其他内容存在遮挡。
本申请实施例1501与AR地图技术1502不同之处在于使用三维的信息标注,让信息标注显示在建筑物的可视区域中,信息标注的法线方向和立面的法线一致,让信息标注与所属的建筑关联性更强,成为AR场景中的一部分,而且对场景中其他内容的遮挡更少,用户在AR体验时拥有更好的感受。
在AR地图技术1501中,三维场景中建筑物的标注是静态的,用户在不同的视角浏览标注时会存在盲区,或者标注信息反向错误;通过计算当前帧用户视角下建筑物的可视区域,将文本内容显示在可视区域中;本申请实施例所示方案中用户在不同的视角下浏览建筑物时,建筑物的标注信息会自动调整,随着视角的变化而变化。这种方式让标注的信息能动态呈现,让三维场景中信息标注更加灵活智能,建筑物的标注也可以采用图片、视频等多媒体方式进行表达,标注信息是AR应用中重要的交互入口。
AR地图技术1502中,三维场景中建筑物的标注是平铺在屏幕上的,与所属的建筑物关联性不强,而且会对其他的场景内容存在遮挡;而本申请实施例所示方案中,计算三维场景中的三维标注,让标注方向和建筑物的立面方向一致,能够在屏幕上呈现标注信息的远近变化,因此可以通过实现三维的建筑物标注,定向表达标注的位置和方向,让标注与被标注的建筑物关联性得到增强,而且文本是线性的表达,在一定程度上能指示建筑物的方位信息,用户在AR场景中的三维体验进一步丰富。
综上所述,在本申请实施例所示的方案中,通过在虚拟现实场景或增强现实场景中,获取目标物体的外立面中的可视立面,并根据可视立面对在显示界面上的投影,在该目标物体的可视立面中,确定标注立面,并将该目标物体的标注信息呈现在与该标注立面的可视区域对应的区域。通过上述方案,在显示目标物体的标注时,可以根据目标位置,以及可视立面在指定场景的显示界面上的投影,进行标注立面的选择,从而能够在虚拟现实/增强现实场景中,动态的选择可视区域较大的立面来显示标注信息,从而提高了标注信息的显示效果。
请参考图16,其是根据一示例性实施例提供的一种物体标注信息呈现方法的方法流程图。该方法可以由电子设备执行,其中,该电子设备可以是终端,也可以是服务器,或者,该电子设备可以包括终端和服务器,其中,该终端可以是上述图1所示的实施例中的用户终端140, 服务器可以是上述图1所示实施例中的服务器120。如图3所示,以目标物体为建筑物对应的VR或AR地图应用场景为例,该物体标注信息呈现方法的流程可以包括如下步骤:
步骤1601,获取指定场景中的目标物体。
在一种可能的实现方式中,当指定场景为VR场景时,该目标物体可以是VR设备根据三维模型数据构建的物体;当指定场景时AR场景时,该目标物体可以是AR设备通过摄像头组件拍摄到的物体。
步骤1602,从该至少两个可视立面的可视区域中分别确定候选标注区域。
其中,该候选标注区域是在对应立面的可视区域包含的第二形状的区域中,面积最大的区域。
该候选标注区域的确定方式,与图3对应的步骤305所示的在标注立面中确认标注区域的方法类似,此处不再赘述。
步骤1603,将该至少两个可视立面各自对应的候选标注区域,获取为该至少两个可视立面的待投影区域。
其中,该候选标注区域是在对应立面的可视区域,面积最大的第二形状区域,也就是说,在本申请实施例中,可以将可视立面的可视区域中的部分区域作为待投影区域,即可以将该候选标注区域作为待投影区域。
在一种可能的实现方式中,将该至少两个可视立面的待投影区域向该指定场景的显示界面做投影,获取该至少两个可视立面的可视区域各自在该指定场景的显示界面上的投影区域。
在一种可能的实现方式中,将该候选标注区域作为待投影区域,向该指定场景的显示界面做投影,获取该至少两个可视立面的候选标注区域各自在该指定场景的显示界面上的投影区域。
步骤1604,根据该至少两个可视立面的可视区域各自在该指定场景的显示界面上的投影区域,从该至少两个可视立面中确定标注立面。
在一种可能的实现方式中,根据该至少两个可视立面的候选标注区域各自在该指定场景的显示界面上的投影区域,从该至少两个可视立面中确定标注立面。
在一种可能的实现方式中,根据该至少两个可视立面的候选标注区域各自在该指定场景的显示界面上的投影区域对应的投影面积的大小,从该至少两个可视立面中确定标注立面。
在确定目标物体的标注立面之前,可以先获取每个可视立面指定形状的候选标注区域,将每个可视立面的候选标注区域作为待投影区域,投影至指定场景的显示界面上,再根据待投影区域在该指定场景的显示界面上的投影面积大小,确定标注立面。由于目标位置与不同的可视立面所成夹角不同,因此可能会产生某一可视立面的可视区域的投影面积较大,但其对应的标注区域对应的投影面积较小,因此也可以通过先对每个立面的标注立面的投影面积进行比较,再根据比较结果确定标注立面。
步骤1605,对应该标注立面对应的候选标注区域,在该显示界面呈现的该标注立面上呈现该目标物体的该标注信息。
在根据至少两个立面对应的候选标注区域,确定标注立面后,可以在该标注立面对应的候选标注区域在显示界面上的呈现区域中,呈现该目标物体的对应的标注信息。即首先比较每个可视立面的指定形状的最大区域,投影在显示界面上的面积,将面积最大的投影区域对应的可视立面的候选标注区域用于呈现标注信息。
综上所述,在本申请实施例所示的方案中,通过在指定场景中,获取目标物体的外立面中的可视立面,并根据可视立面对在显示界面上的投影,在该目标物体的可视立面中,确定标注立面,并将该目标物体的标注信息呈现在与该标注立面的可视区域对应的区域。通过上述方案,在显示目标物体的标注时,可以根据目标位置,以及可视立面在指定场景的显示界面上的投影,进行标注立面的选择,提高了标注信息的显示效果。
请参考图17,其示出了一种物体标注信息的呈现方法的流程示意图。如图17所示,以该方法运行在用户终端1700,且应用场景为AR地图为例,用户终端1700根据三维模型数据1701,以及该用户终端此时对应的位置信息与方位信息,构建该用户终端对应方向的若干个三维建筑模型1702。当对其中目标建筑模型进行识别时,获取该模型与该用户终端方向对应的可视立面1703,并根据该目标建筑物与该用户终端方向对应的可视立面,获取该各个可视立面对应的可视区域1704,以及该各个可视区域1704对应的可视区域面积。
根据该可视区域在用户终端方向上的投影1705,将该可视区域在该用户终端方向上,投影面积最大的可视区域对应的可见立面,获取为标注立面1706。根据该标注立面,获取该标注立面中指定形状的最大区域面积,作为该标注立面对应的标注区域1707,根据该标注区域1707,在建筑物对应的三维建筑模型上展示建筑物对应的标注信息1708并呈现于显示界面上。
图18是根据一示例性实施例示出的一种物体标注信息呈现装置的结构方框图。该物体标注信息呈现装置可以实现图2、图3或图16所示实施例提供的方法中的全部或者部分步骤。该物体标注信息呈现装置可以包括:
目标物体获取单元1801,用于获取指定场景中的目标物体,所述指定场景是在目标位置呈现的场景;
标注信息呈现单元1802,用于在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息,所述标注立面为根据所述目标物体的至少两个可视立面各自在所述显示界面上呈现的投影区域,从所述至少两个可视立面中确定的,所述可视立面是所述目标物体的外立面中,对所述目标位置可见的立面。
在一种可能的实现方式中,所述指定场景是在所述目标位置呈现的增强现实场景或虚拟现实场景。
在一种可能的实现方式中,所述标注立面为所述至少两个可视立面中,在所述显示界面上呈现的投影区域的面积最大的一个。
在一种可能的实现方式中,所述装置还包括:
待投影区域获取单元,用于根据所述至少两个可视立面的可视区域,获取所述至少两个可视立面的待投影区域;所述可视区域是对应的可视立面在所述指定场景中,对所述目标位置可见的区域;
投影区域获取单元,用于将所述至少两个可视立面的待投影区域向所述显示界面做投影,获得所述至少两个可视立面的可视区域各自在所述显示界面上呈现的投影区域。
在一种可能的实现方式中,所述待投影区域获取单元,用于,
将所述至少两个可视立面的可视区域各自的全部区域,获取为所述至少两个可视立面的待投影区域。
在一种可能的实现方式中,所述标注信息呈现单元1802,包括:
区域确定子单元,用于从所述标注立面的可视区域中确定标注区域;所述标注区域是所述标注立面的可视区域包含的第一形状的区域中,面积最大的区域;
标注信息呈现子单元,用于在显示界面呈现的所述目标物体的标注立面上呈现所述目标物体的标注信息。
在一种可能的实现方式中,所述区域确定子单元,包括:
遮挡信息获取子单元,用于获取所述标注立面的遮挡信息,所述遮挡信息用于指示所述标注立面被遮挡的顶点和被遮挡的边;
标注区域确定子单元,用于根据所述遮挡信息,在所述标注立面的可视区域中,确定所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述标注区域确定子单元,用于,
当所述遮挡信息指示所述标注立面上存在一个被遮挡顶点时,将所述被遮挡顶点的对角顶点作为第一目标点;
在所述第一目标点对应的非邻接边上确定第一端点,使得以所述第一端点与所述第一目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;
将以所述第一端点与所述第一目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述标注区域确定子单元,用于,
当所述遮挡信息指示所述标注立面上存在两个被遮挡顶点,且所述两个被遮挡顶点之间的边被完全遮挡时,将所述标注立面的未被遮挡顶点中,邻边未被遮挡部分的长度之和最大的顶点,获取为第二目标点;
在所述第二目标点对应的非邻接边上确定第二端点;所述第二端点处于所述标注立面的可见区域,且以所述第二端点与所述第二目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;
将以所述第二端点与所述第二目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述标注区域确定子单元,用于,
当所述遮挡信息指示所述标注立面上存在两个被遮挡顶点,且不存在被完全遮挡的边时,获取目标点集;所述目标点集包括所述标注立面的未被遮挡顶点以及所述两个被遮挡顶点的邻边上的分界点;所述分界点用于区分所述可视立面的被遮挡区域和未被遮挡区域;
在所述标注立面的可视区域内确定第三端点;以所述第三端点与第三目标点之间的线段为对角线的矩形,是所述标注立面的可视区域内面积最大的矩形;所述第三目标点是所述目标点集中的一个;
将以所述第三端点与所述第三目标点之间的线段为对角线的矩形所在的区域,确定为所述标注区域。
在一种可能的实现方式中,所述第一形状为矩形,所述标注区域确定子单元,用于,
当所述遮挡信息指示所述标注立面上存在三个被遮挡顶点时,将所述标注立面的未被遮挡顶点获取为第四目标点;
将所述第四目标点与所述第四目标点的两条邻边上的分界点构成的矩形所在的区域,确 定为所述标注区域;所述分界点用于区分所述可视立面的被遮挡区域和未被遮挡区域。
在一种可能的实现方式中,所述标注信息呈现子单元,包括:
标注信息模型生成子单元,用于基于所述标注区域的尺寸,生成所述标注信息的三维模型;
标注信息模型呈现子单元,用于在所述显示界面呈现的所述标注区域的平行平面上呈现所述标注信息的三维模型;所述平行平面是位于所述标注立面前方,且平行于所述标注立面的平面。
在一种可能的实现方式中,所述待投影区域获取单元,包括:
候选标注区域确定子单元,用于从所述至少两个可视立面的可视区域中分别确定候选标注区域;所述候选标注区域是在对应立面的可视区域包含的第二形状的区域中,面积最大的区域;
待投影区域获取子单元,用于将所述至少两个可视立面各自对应的候选标注区域,获取为所述至少两个可视立面的待投影区域。
在一种可能的实现方式中,所述标注信息呈现单元1802,还用于,
对应所述标注立面对应的候选标注区域,在所述显示界面呈现的所述标注立面上呈现所述目标物体的所述标注信息。
在一种可能的实现方式中,所述标注信息呈现单元1802,用于,
当所述标注立面的可视区域在所述显示界面上的投影区域的面积大于指定面积阈值时,在所述显示界面呈现的所述标注立面上呈现所述目标物体的所述标注信息。
综上所述,在本申请实施例所示的方案中,通过在虚拟现实场景或增强现实场景中,获取目标物体的外立面中的可视立面,并根据可视立面对在显示界面上的投影,在该目标物体的可视立面中,确定标注立面,并将该目标物体的标注信息呈现在与该标注立面的可视区域对应的区域。通过上述方案,在显示目标物体的标注时,可以根据目标位置,以及可视立面在指定场景的显示界面上的投影,进行标注立面的选择,从而能够在指定场景中,动态的选择可视区域较大的立面来显示标注信息,从而提高了标注信息的显示效果。
It can be understood that, to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules (or units) for performing the respective functions. In combination with the units and algorithm steps of the examples described in the embodiments disclosed in this application, the embodiments of this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the technical solutions of the embodiments of this application.
In the embodiments of this application, the electronic device may be divided into functional units according to the above method examples. For example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of this application is schematic and is merely a logical functional division; other division manners may be used in actual implementation.
In the case of integrated units, FIG. 19 shows a possible schematic structural diagram of the electronic device involved in the above embodiments. The electronic device 1900 includes a processing unit 1902 and a communication unit 1903. The processing unit 1902 is configured to control and manage the actions of the electronic device 1900. For example, when the electronic device 1900 is a user terminal, the processing unit 1902 is configured to support the electronic device 1900 in performing steps 21 to 22 in the embodiment shown in FIG. 2, steps 301 to 306 in the embodiment shown in FIG. 3, and steps 1601 to 1605 in the embodiment shown in FIG. 16, and/or other steps of the technologies described herein. The electronic device 1900 may further include a storage unit 1901, configured to store program code and data of the electronic device 1900. For example, when the electronic device 1900 is a user terminal, the storage unit 1901 stores the three-dimensional model data described above.
The processing unit 1902 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another programmable logic device, transistor logic device, hardware component, or any combination thereof. It can implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of this application. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 1903 may be a communication interface, a transceiver, a transceiver circuit, or the like, where the communication interface is a general term and may include one or more interfaces. The storage unit 1901 may be a memory.
When the processing unit 1902 is a processor, the communication unit 1903 is a communication interface, and the storage unit 1901 is a memory, the electronic device involved in the embodiments of this application may be the electronic device shown in FIG. 20.
Referring to FIG. 20, the electronic device 2010 includes a processor 2012, a communication interface 2013, and a memory 2011. Optionally, the electronic device 2010 may further include a bus 2014. The communication interface 2013, the processor 2012, and the memory 2011 may be connected to each other through the bus 2014. The bus 2014 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 2014 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 20, but this does not mean that there is only one bus or only one type of bus.
The electronic device shown in FIG. 19 or FIG. 20 may be a user terminal or a server.
The steps of the methods or algorithms described in connection with the disclosure of the embodiments of this application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules (or units), which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. The storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in the electronic device. The processor and the storage medium may also exist in the electronic device as discrete components.
This application further provides a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device performs the above method for presenting object annotation information.
A person skilled in the art should be aware that, in one or more of the above examples, the functions described in the embodiments of this application may be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The specific implementations described above further describe in detail the objectives, technical solutions, and beneficial effects of the embodiments of this application. It should be understood that the above are merely specific implementations of the embodiments of this application and are not intended to limit the protection scope of the embodiments of this application. Any modification, equivalent replacement, improvement, or the like made on the basis of the technical solutions of the embodiments of this application shall fall within the protection scope of the embodiments of this application.

Claims (33)

  1. A method for presenting object annotation information, wherein the method comprises:
    obtaining a target object in a specified scene, wherein the specified scene is a scene presented at a target position;
    presenting annotation information of the target object on a labeling facade of the target object presented on a display interface, wherein the labeling facade is determined from at least two visible facades of the target object according to projection areas of the at least two visible facades respectively presented on the display interface, and a visible facade is a facade, among exterior facades of the target object, that is visible to the target position.
  2. The method according to claim 1, wherein the specified scene is an augmented reality scene or a virtual reality scene presented at the target position.
  3. The method according to claim 1, wherein the labeling facade is the one of the at least two visible facades whose projection area presented on the display interface is the largest.
  4. The method according to claim 1, wherein the method further comprises:
    obtaining to-be-projected areas of the at least two visible facades according to visible areas of the at least two visible facades, wherein a visible area is an area of the corresponding visible facade that is visible to the target position in the specified scene;
    projecting the to-be-projected areas of the at least two visible facades onto the display interface to obtain the projection areas of the visible areas of the at least two visible facades respectively presented on the display interface.
  5. The method according to claim 4, wherein the obtaining to-be-projected areas of the at least two visible facades according to visible areas of the at least two visible facades comprises:
    obtaining the entire visible area of each of the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  6. The method according to claim 5, wherein the presenting annotation information of the target object on a labeling facade of the target object presented on a display interface comprises:
    determining a labeling area from the visible area of the labeling facade, wherein the labeling area is the area with the largest area among areas of a first shape contained in the visible area of the labeling facade;
    presenting the annotation information of the target object on the labeling area in the labeling facade presented on the display interface.
  7. The method according to claim 6, wherein the determining a labeling area from the visible area of the labeling facade comprises:
    obtaining occlusion information of the labeling facade, wherein the occlusion information indicates occluded vertices and occluded edges of the labeling facade;
    determining the labeling area in the visible area of the labeling facade according to the occlusion information.
  8. The method according to claim 7, wherein the first shape is a rectangle, and the determining the labeling area in the visible area of the labeling facade according to the occlusion information comprises:
    when the occlusion information indicates that there is one occluded vertex on the labeling facade, taking a diagonal vertex of the occluded vertex as a first target point;
    determining a first endpoint on a non-adjacent edge corresponding to the first target point, such that a rectangle whose diagonal is a line segment between the first endpoint and the first target point is the rectangle with the largest area within the visible area of the labeling facade;
    determining, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the first endpoint and the first target point.
  9. The method according to claim 7, wherein the first shape is a rectangle, and the determining the labeling area in the visible area of the labeling facade according to the occlusion information comprises:
    when the occlusion information indicates that there are two occluded vertices on the labeling facade and an edge between the two occluded vertices is completely occluded, obtaining, as a second target point, a vertex, among unoccluded vertices of the labeling facade, for which the sum of the lengths of the unoccluded parts of its adjacent edges is the largest;
    determining a second endpoint on a non-adjacent edge corresponding to the second target point, wherein the second endpoint is within the visible area of the labeling facade, and a rectangle whose diagonal is a line segment between the second endpoint and the second target point is the rectangle with the largest area within the visible area of the labeling facade;
    determining, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the second endpoint and the second target point.
  10. The method according to claim 7, wherein the first shape is a rectangle, and the determining the labeling area in the visible area of the labeling facade according to the occlusion information comprises:
    when the occlusion information indicates that there are two occluded vertices on the labeling facade and no edge is completely occluded, obtaining a target point set, wherein the target point set comprises unoccluded vertices of the labeling facade and boundary points on edges adjacent to the two occluded vertices, and a boundary point is used to distinguish an occluded area from an unoccluded area of the visible facade;
    determining a third endpoint within the visible area of the labeling facade, wherein a rectangle whose diagonal is a line segment between the third endpoint and a third target point is the rectangle with the largest area within the visible area of the labeling facade, and the third target point is one point in the target point set;
    determining, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the third endpoint and the third target point.
  11. The method according to claim 7, wherein the first shape is a rectangle, and the determining the labeling area in the visible area of the labeling facade according to the occlusion information comprises:
    when the occlusion information indicates that there are three occluded vertices on the labeling facade, obtaining the unoccluded vertex of the labeling facade as a fourth target point;
    determining, as the labeling area, the area occupied by a rectangle formed by the fourth target point and boundary points on the two edges adjacent to the fourth target point, wherein a boundary point is used to distinguish an occluded area from an unoccluded area of the visible facade.
  12. The method according to claim 6, wherein the presenting annotation information of the target object on a labeling facade of the target object presented on a display interface comprises:
    generating a three-dimensional model of the annotation information based on the size of the labeling area;
    presenting the three-dimensional model of the annotation information on a parallel plane of the labeling area presented on the display interface, wherein the parallel plane is a plane located in front of the labeling facade and parallel to the labeling facade.
  13. The method according to claim 4, wherein the obtaining to-be-projected areas of the at least two visible facades according to visible areas of the at least two visible facades comprises:
    determining candidate labeling areas respectively from the visible areas of the at least two visible facades, wherein a candidate labeling area is the area with the largest area among areas of a second shape contained in the visible area of the corresponding facade;
    obtaining the candidate labeling areas respectively corresponding to the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  14. The method according to claim 13, wherein the presenting annotation information of the target object on a labeling facade of the target object presented on a display interface comprises:
    in correspondence with the candidate labeling area of the labeling facade, presenting the annotation information of the target object on the labeling facade presented on the display interface.
  15. The method according to any one of claims 1 to 14, wherein the presenting annotation information of the target object on a labeling facade of the target object presented on a display interface comprises:
    when the area of the projection area, presented on the display interface, of the visible area of the labeling facade is larger than a specified area threshold, presenting the annotation information of the target object on the labeling facade presented on the display interface.
  16. An apparatus for presenting object annotation information, wherein the apparatus comprises:
    a target object obtaining unit, configured to obtain a target object in a specified scene, wherein the specified scene is a scene presented at a target position;
    an annotation information presenting unit, configured to present annotation information of the target object on a labeling facade of the target object presented on a display interface, wherein the labeling facade is determined from at least two visible facades of the target object according to projection areas of the at least two visible facades respectively presented on the display interface, and a visible facade is a facade, among exterior facades of the target object, that is visible to the target position.
  17. The apparatus according to claim 16, wherein the specified scene is an augmented reality scene or a virtual reality scene presented at the target position.
  18. The apparatus according to claim 16, wherein the labeling facade is the one of the at least two visible facades whose projection area presented on the display interface is the largest.
  19. The apparatus according to claim 16, wherein the apparatus further comprises:
    a to-be-projected area obtaining unit, configured to obtain to-be-projected areas of the at least two visible facades according to visible areas of the at least two visible facades, wherein a visible area is an area of the corresponding visible facade that is visible to the target position in the specified scene;
    a projection area obtaining unit, configured to project the to-be-projected areas of the at least two visible facades onto the display interface to obtain the projection areas of the visible areas of the at least two visible facades respectively presented on the display interface.
  20. The apparatus according to claim 19, wherein the to-be-projected area obtaining unit is configured to
    obtain the entire visible area of each of the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  21. The apparatus according to claim 20, wherein the annotation information presenting unit comprises:
    a region determining subunit, configured to determine a labeling area from the visible area of the labeling facade, wherein the labeling area is the area with the largest area among areas of a first shape contained in the visible area of the labeling facade;
    an annotation information presenting subunit, configured to present the annotation information of the target object on the labeling facade presented on the display interface.
  22. The apparatus according to claim 21, wherein the region determining subunit comprises:
    an occlusion information obtaining subunit, configured to obtain occlusion information of the labeling facade, wherein the occlusion information indicates occluded vertices and occluded edges of the labeling facade;
    a labeling area determining subunit, configured to determine the labeling area in the visible area of the labeling facade according to the occlusion information.
  23. The apparatus according to claim 22, wherein the first shape is a rectangle, and the labeling area determining subunit is configured to:
    when the occlusion information indicates that there is one occluded vertex on the labeling facade, take a diagonal vertex of the occluded vertex as a first target point;
    determine a first endpoint on a non-adjacent edge corresponding to the first target point, such that a rectangle whose diagonal is a line segment between the first endpoint and the first target point is the rectangle with the largest area within the visible area of the labeling facade;
    determine, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the first endpoint and the first target point.
  24. The apparatus according to claim 22, wherein the first shape is a rectangle, and the labeling area determining subunit is configured to:
    when the occlusion information indicates that there are two occluded vertices on the labeling facade and an edge between the two occluded vertices is completely occluded, obtain, as a second target point, a vertex, among unoccluded vertices of the labeling facade, for which the sum of the lengths of the unoccluded parts of its adjacent edges is the largest;
    determine a second endpoint on a non-adjacent edge corresponding to the second target point, wherein the second endpoint is within the visible area of the labeling facade, and a rectangle whose diagonal is a line segment between the second endpoint and the second target point is the rectangle with the largest area within the visible area of the labeling facade;
    determine, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the second endpoint and the second target point.
  25. The apparatus according to claim 22, wherein the first shape is a rectangle, and the labeling area determining subunit is configured to:
    when the occlusion information indicates that there are two occluded vertices on the labeling facade and no edge is completely occluded, obtain a target point set, wherein the target point set comprises unoccluded vertices of the labeling facade and boundary points on edges adjacent to the two occluded vertices, and a boundary point is used to distinguish an occluded area from an unoccluded area of the visible facade;
    determine a third endpoint within the visible area of the labeling facade, wherein a rectangle whose diagonal is a line segment between the third endpoint and a third target point is the rectangle with the largest area within the visible area of the labeling facade, and the third target point is one point in the target point set;
    determine, as the labeling area, the area occupied by the rectangle whose diagonal is the line segment between the third endpoint and the third target point.
  26. The apparatus according to claim 22, wherein the first shape is a rectangle, and the labeling area determining subunit is configured to:
    when the occlusion information indicates that there are three occluded vertices on the labeling facade, obtain the unoccluded vertex of the labeling facade as a fourth target point;
    determine, as the labeling area, the area occupied by a rectangle formed by the fourth target point and boundary points on the two edges adjacent to the fourth target point, wherein a boundary point is used to distinguish an occluded area from an unoccluded area of the visible facade.
  27. The apparatus according to claim 21, wherein the annotation information presenting subunit comprises:
    an annotation information model generating subunit, configured to generate a three-dimensional model of the annotation information based on the size of the labeling area;
    an annotation information model presenting subunit, configured to present the three-dimensional model of the annotation information on a parallel plane of the labeling area presented on the display interface, wherein the parallel plane is a plane located in front of the labeling facade and parallel to the labeling facade.
  28. The apparatus according to claim 19, wherein the to-be-projected area obtaining unit comprises:
    a candidate labeling area determining subunit, configured to determine candidate labeling areas respectively from the visible areas of the at least two visible facades, wherein a candidate labeling area is the area with the largest area among areas of a second shape contained in the visible area of the corresponding facade;
    a to-be-projected area obtaining subunit, configured to obtain the candidate labeling areas respectively corresponding to the at least two visible facades as the to-be-projected areas of the at least two visible facades.
  29. The apparatus according to claim 28, wherein the annotation information presenting unit is further configured to:
    in correspondence with the candidate labeling area of the labeling facade, present the annotation information of the target object on the labeling facade presented on the display interface.
  30. The apparatus according to any one of claims 16 to 29, wherein the annotation information presenting unit is configured to:
    when the area of the projection of the visible area of the labeling facade on the display interface is larger than a specified area threshold, present the annotation information of the target object on the labeling facade presented on the display interface.
  31. An electronic device, wherein the electronic device comprises a processor and a memory, the memory stores computer instructions, and the computer instructions are loaded and executed by the processor to implement the method for presenting object annotation information according to any one of claims 1 to 15.
  32. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is configured to be executed by a processor to implement the method for presenting object annotation information according to any one of claims 1 to 15.
  33. A computer program product, wherein the computer program product comprises computer instructions, and a processor of an electronic device executes the computer instructions so that the electronic device performs the method for presenting object annotation information according to any one of claims 1 to 15.
PCT/CN2021/118121 2020-10-30 2021-09-14 Method and apparatus for presenting object annotation information, electronic device, and storage medium WO2022089061A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21884791.1A EP4227907A4 (en) 2020-10-30 2021-09-14 METHOD AND DEVICE FOR DISPLAYING OBJECT ANNOTATION INFORMATION, AS WELL AS ELECTRONIC DEVICE AND STORAGE MEDIUM
US18/307,386 US20230260218A1 (en) 2020-10-30 2023-04-26 Method and apparatus for presenting object annotation information, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011191117.4 2020-10-30
CN202011191117.4A CN114445579A (zh) Method and apparatus for presenting object annotation information, electronic device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/307,386 Continuation US20230260218A1 (en) 2020-10-30 2023-04-26 Method and apparatus for presenting object annotation information, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022089061A1 true WO2022089061A1 (zh) 2022-05-05

Family

ID=81357289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118121 WO2022089061A1 (zh) 2020-10-30 2021-09-14 物体标注信息呈现方法、装置、电子设备及存储介质

Country Status (4)

Country Link
US (1) US20230260218A1 (zh)
EP (1) EP4227907A4 (zh)
CN (1) CN114445579A (zh)
WO (1) WO2022089061A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114840902B (zh) * 2022-05-19 2023-03-24 三一筑工科技股份有限公司 Target object drawing method, apparatus, device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150348324A1 (en) * 2014-06-03 2015-12-03 Robert L. Vaughn Projecting a virtual image at a physical surface
CN109446607A (zh) * 2018-10-16 2019-03-08 江南造船(集团)有限责任公司 Three-dimensional labeling method for ship block parts, electronic device, and storage medium
CN110610045A (zh) * 2019-09-16 2019-12-24 杭州群核信息技术有限公司 Intelligent cloud processing system and method for generating three views of selected cabinets and wardrobes

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150348324A1 (en) * 2014-06-03 2015-12-03 Robert L. Vaughn Projecting a virtual image at a physical surface
CN109446607A (zh) * 2018-10-16 2019-03-08 江南造船(集团)有限责任公司 Three-dimensional labeling method for ship block parts, electronic device, and storage medium
CN110610045A (zh) * 2019-09-16 2019-12-24 杭州群核信息技术有限公司 Intelligent cloud processing system and method for generating three views of selected cabinets and wardrobes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4227907A4

Also Published As

Publication number Publication date
CN114445579A (zh) 2022-05-06
EP4227907A1 (en) 2023-08-16
EP4227907A4 (en) 2024-04-24
US20230260218A1 (en) 2023-08-17

Similar Documents

Publication Publication Date Title
CN107852573B (zh) Mixed reality social interaction
US9613463B2 (en) Augmented reality extrapolation techniques
US9224237B2 (en) Simulating three-dimensional views using planes of content
JP6050518B2 (ja) Method for representing virtual information in a real environment
US9437038B1 (en) Simulating three-dimensional views using depth relationships among planes of content
JP7008733B2 (ja) Shadow generation for inserted image content
JP5592011B2 (ja) Multi-scale three-dimensional orientation
Tian et al. Handling occlusions in augmented reality based on 3D reconstruction method
US20130016102A1 (en) Simulating three-dimensional features
JP2014525089A5 (zh)
US11562545B2 (en) Method and device for providing augmented reality, and computer program
JP7277548B2 (ja) Sample image generation method, apparatus, and electronic device
WO2022089061A1 (zh) Method and apparatus for presenting object annotation information, electronic device, and storage medium
Sandnes Sketching 3D immersed experiences rapidly by hand through 2D cross sections
EP4272061A1 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
JP3951362B2 (ja) Image processing device, game device, method therefor, and recording medium
Trapp et al. Strategies for visualising 3D points-of-interest on mobile devices
CN115965735B (zh) Texture map generation method and apparatus
KR102197504B1 (ko) Technique for constructing an augmented reality environment with pre-computed lighting
CN109949396A (zh) Rendering method, apparatus, device, and medium
WO2023119715A1 (ja) Video generation method and image generation program
CN116030228B (zh) Web-based MR virtual picture display method and apparatus
CN115516517A (zh) Method and apparatus for constructing three-dimensional geometry

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884791

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021884791

Country of ref document: EP

Effective date: 20230509

NENP Non-entry into the national phase

Ref country code: DE